Crossroads The ACM Magazine for Students



Unveiling Patterns of the Earth through Machine Learning and Geospatial Analysis




Tags: Biographies, Earth and atmospheric sciences, Machine learning, Spatial-temporal systems


XRDS: For those unfamiliar with your work, can you describe the field you are working in, and what types of projects you have been involved in?

Konstantin Klemmer: I work on machine learning specifically for geospatial data, which has a few properties that set it apart from other data modalities. For instance, many models build on the statistical assumption that data are i.i.d. (independent and identically distributed). However, this does not hold for geospatial data. Another challenge is that training and test data can be spatially close together. For instance, two weather stations located near each other might have very similar readings. In machine learning, this can lead to a trained model that is over-confident: it performs well on interpolation tasks but fails tremendously on all sorts of extrapolation. This kind of spatial dependency, or spatial relatedness of data, is a key phenomenon of geospatial data that makes prediction challenging.
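The interpolation-versus-extrapolation gap described above can be sketched with a toy experiment (all names and data here are hypothetical, not from the interview): a random train/test split puts test "stations" right next to training stations, while a spatial block split holds out an entire region.

```python
# Hedged sketch: why random train/test splits can overstate performance
# on spatially autocorrelated data. The toy "weather stations" are made up.
import math
import random

random.seed(0)

# Stations on a line whose reading varies smoothly with location,
# so nearby stations have similar values (spatial dependency).
stations = [(x / 100.0, math.sin(x / 100.0 * 2 * math.pi)) for x in range(100)]

def nn_predict(train, loc):
    """1-nearest-neighbour prediction: copy the reading of the closest station."""
    return min(train, key=lambda s: abs(s[0] - loc))[1]

def mean_abs_err(train, test):
    return sum(abs(nn_predict(train, x) - y) for x, y in test) / len(test)

# Random split: test stations sit right next to training stations (interpolation).
shuffled = stations[:]
random.shuffle(shuffled)
rand_err = mean_abs_err(shuffled[20:], shuffled[:20])

# Spatial block split: a whole contiguous region is held out (extrapolation).
block_err = mean_abs_err(stations[20:], stations[:20])

print(f"random-split error:  {rand_err:.3f}")
print(f"spatial-block error: {block_err:.3f}")
```

Under the random split the nearest training station is almost on top of each test station, so the error looks deceptively small; holding out a contiguous block exposes the much larger extrapolation error.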

Another big motivation in my work is that geospatial data is very vast and multi-modal. You may have large image datasets from all sorts of different satellites and resolutions. You may also have derived products, such as digital elevation maps or vegetation measures, and something completely different like a text message or a tweet sent from a specific location. All these different data modalities live in the same space, our planet Earth. These data all have some georeference, and geospatial machine learning offers you a way to align these different data modalities.

XRDS: Does that mean you are usually working with large-scale datasets? Does this pose any challenges in your research?

KK: Yes, this is another key characteristic of geospatial data. Its order of magnitude is much bigger than that of text data. For example, the data used to train GPT-3 amounted to around 45 TB worth of tokens. However, a single satellite from the European Space Agency produces 1.5 to 2 TB of imagery per orbit, and the satellite completes 14 orbits a day. Remember, this is just a single source of data. And if you think about climate models, which cover the whole planet, they operate at an even larger scale. So the size of the data is completely different from plain text.
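The scale comparison quoted above can be checked with back-of-the-envelope arithmetic (using the figures as stated in the interview; real data volumes vary by satellite and product):

```python
# Back-of-the-envelope check of the data-scale comparison (figures as quoted
# in the interview, not authoritative measurements).
tb_per_orbit = 2       # upper figure quoted: ~1.5-2 TB of imagery per orbit
orbits_per_day = 14
daily_tb = tb_per_orbit * orbits_per_day
print(daily_tb)        # one satellite: 28 TB per day

gpt3_training_tb = 45  # quoted size of the GPT-3 training text
print(daily_tb * 2 > gpt3_training_tb)  # in under two days it exceeds that
```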

XRDS: With multimodal data at this large scale, I imagine it would be very difficult to process the whole dataset for each research project. In practice, do you subsample the dataset to extract useful features?

KK: That is a key research question: Given that you are constrained in the amount of data you can actually process, how do you create good sub-samples of the data that let you learn a model that generalizes, while working within the limits of your computational resources and your memory? What counts as optimal depends on the specific setting.

There's a foundation model for geographic space, which is a large autoencoder for satellite imagery. The authors sampled their dataset according to land-use areas, so that it represents all the diverse land-use areas on the planet. This is a practical example of the sampling process.
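A minimal sketch of such land-use-stratified sampling, in the spirit of the dataset construction described above (the class names, tile counts, and helper function below are hypothetical):

```python
# Hedged sketch: stratified sampling of satellite image tiles by land-use
# class, so rare classes are not swamped by common ones. All names and
# counts here are made up for illustration.
import random

random.seed(0)

# Imagine a catalogue of image tiles, each tagged with a land-use class.
catalogue = (
    [("urban", i) for i in range(5000)]
    + [("cropland", i) for i in range(90000)]
    + [("forest", i) for i in range(40000)]
    + [("water", i) for i in range(2000)]
)

def stratified_sample(tiles, per_class):
    """Draw the same number of tiles from every land-use class, so rare
    classes (e.g. water) are represented as well as common ones."""
    by_class = {}
    for cls, tile in tiles:
        by_class.setdefault(cls, []).append(tile)
    return {cls: random.sample(ts, per_class) for cls, ts in by_class.items()}

sample = stratified_sample(catalogue, per_class=1000)
print({cls: len(ts) for cls, ts in sample.items()})
```

A naive uniform sample of the same size would be dominated by cropland tiles; the stratified version gives every land-use class equal weight in the training set.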

XRDS: What types of architectures do you work with for your learning algorithms? In particular, do large language models (LLMs) have any impact on the field you are working in?

KK: This again depends on the type of project. I've worked with graph neural networks for tabular data. I've also worked with remote sensing models, which are quite large in size. One thing to note is that existing computer vision models are very focused on natural images, which have three channels—red, green, blue—but remote sensing imagery is often multi-spectral, with anywhere from tens of channels to hundreds of channels.
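One common heuristic for bridging that channel gap (not described in the interview, but widely used in practice) is to tile an RGB-pretrained first-layer filter across the extra spectral bands and rescale it. A minimal sketch with plain lists standing in for a tensor library; the function name and toy kernel are hypothetical:

```python
# Hedged sketch: reusing an RGB-pretrained first conv filter for
# multi-spectral input by cycling its 3 per-channel kernels across the
# extra bands and rescaling to preserve the output magnitude.
def inflate_rgb_weights(rgb_filter, n_bands):
    """rgb_filter: list of 3 per-channel kernels (flattened as lists of
    floats). Returns n_bands kernels by cycling the RGB kernels, scaled
    by 3/n_bands so the summed activation stays roughly comparable."""
    scale = 3.0 / n_bands
    return [[w * scale for w in rgb_filter[band % 3]] for band in range(n_bands)]

rgb = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]  # toy 3-channel kernel
ms = inflate_rgb_weights(rgb, n_bands=13)   # e.g. Sentinel-2 has 13 bands
print(len(ms))  # 13 per-band kernels
```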

XRDS: The machine learning community has been talking a lot lately about the interpretability and explainability of machine learning models. Is this discussion relevant to the research in your field? Would you prefer more explainable models?

KK: Super relevant. For any kind of machine learning that is application-focused, you will always see this downstream at the decision-maker level. Decision-makers really care about explainability, uncertainty quantification, and the failure modes a model might have.

It's very challenging, but it's also very exciting, because we are just getting started. There are a lot of unanswered questions and a lot of open research directions you can pursue.

XRDS: What advice would you give to students or early-stage researchers who want to join the field you are working in? It could be domain-specific or general.

KK: I think there are two directions you can come from. If you come from computer science or machine learning, and you want to get involved, my recommendation would be to reach out to more application-oriented researchers. For example, do an internship with an organization that does biodiversity monitoring, or with a space agency that works with satellite data a lot. This is a great way to learn about the practical perspectives and the challenges that come with dealing with geospatial data. Then you'll have your machine learning background and your experience in the application domain, and that will allow you to do cool research in this field.

If you're coming not from computer science and machine learning but as a practitioner—say, from ecology—then my recommendation would be to pursue collaborations in computer science and machine learning with collaborators who are open to and interested in this kind of applied domain. Many tech companies and professors offer internship opportunities. You don't have to be a computer scientist to do an internship at a tech company.

More generally, the overall recommendation I have for any student is to be proactive and to reach out to people whose research you think is interesting.

Authors

Konstantin Klemmer is a postdoctoral researcher at Microsoft Research New England. He completed his Ph.D. at the University of Warwick and New York University, supervised by Stephen Jarvis (U Birmingham), Daniel Neill (NYU), and Hongkai Wen (U Warwick / Samsung AI). Klemmer was also an Enrichment student at the Alan Turing Institute and a Beyond Fellow at TUM / DLR. His research focuses on the representation of geospatial phenomena in machine learning methods. Beyond that, he is interested in the application of these methods in urban environments and in tackling climate change. In his free time, Klemmer volunteers for Climate Change AI, where he served for two terms on the board until fall 2022. Klemmer holds a bachelor's degree in economics from the University of Freiburg (Germany) and a master's in transportation from Imperial College and University College London.


Copyright held by owner/author.

The Digital Library is published by the Association for Computing Machinery. Copyright © 2024 ACM, Inc.