Monday, April 14, 2025

Google unleashes AI for Geospatial Reasoning to revolutionise crisis and climate response


Geospatial reasoning blends AI, satellite imagery and user data to transform real-world decision-making

Google is ramping up its geospatial prowess with a bold new initiative called Geospatial Reasoning — a research-driven framework that fuses its latest generative AI with advanced satellite modelling to transform how analysts interpret the planet.

In a major step forward, Google unveiled a suite of new geospatial foundation models trained on high-resolution satellite and aerial imagery. These models are now being deployed with a select group of trusted testers, including industry leaders Airbus, Maxar and Planet Labs.

At the heart of this initiative lies a strikingly ambitious goal: to simplify and accelerate geospatial problem-solving across crisis response, climate resilience, public health, and urban planning. The tools aim to take the friction out of decoding massive, complex datasets traditionally reserved for expert geographers and analysts.

“Geospatial data is big and messy — just like the real world,” said Google Research product managers David Schottlander and Tomer Shekel. “We’re using AI to make that information meaningful and accessible.”

This new framework combines large language models like Gemini with Google’s powerful remote sensing systems. When integrated into agentic workflows — step-by-step processes driven by AI agents — the system can interpret satellite images, weather forecasts, and structured datasets to respond to natural language queries and generate powerful insights.

Take the case of a hurricane. A disaster manager could upload pre- and post-storm aerial images, then prompt Gemini to identify damaged infrastructure, estimate losses using census and housing price data, and prioritise relief efforts using a social vulnerability index. All of this is delivered in moments through a natural language interface.
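The workflow above can be pictured as a chain of simple steps. The sketch below is purely illustrative: none of these function or class names come from Google's tools, and the data is invented. It only shows the shape of the pipeline the article describes, from damage flags to loss estimates to a vulnerability-ranked relief list.

```python
from dataclasses import dataclass

# Hypothetical stand-ins for the steps an agentic pipeline would chain:
# damage detection output, loss estimation, and vulnerability-based ranking.

@dataclass
class Building:
    id: str
    damaged: bool           # from pre/post-storm image comparison
    value_usd: float        # from census and housing-price data
    vulnerability: float    # social vulnerability index, 0..1

def estimate_losses(buildings):
    """Sum the value of buildings flagged as damaged."""
    return sum(b.value_usd for b in buildings if b.damaged)

def prioritise_relief(buildings):
    """Rank damaged buildings by social vulnerability, highest first."""
    return sorted((b for b in buildings if b.damaged),
                  key=lambda b: b.vulnerability, reverse=True)

buildings = [
    Building("a1", damaged=True,  value_usd=250_000, vulnerability=0.9),
    Building("a2", damaged=False, value_usd=400_000, vulnerability=0.2),
    Building("a3", damaged=True,  value_usd=180_000, vulnerability=0.4),
]

print(estimate_losses(buildings))                    # total damaged value
print([b.id for b in prioritise_relief(buildings)])  # most vulnerable first
```

In the real system, a natural language prompt to Gemini would stand in for this hand-written orchestration; the point is that each step consumes a different dataset and feeds the next.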


This capability hinges on two flagship models. First is the Population Dynamics Foundation Model (PDFM), which analyses how populations interact with their environments and is now expanding beyond the U.S. to the UK, Canada, Australia, Japan and Malawi. Second are the newly launched remote sensing foundation models, which use advanced architectures such as OWL-ViT and masked autoencoders to classify images, assess damage, or locate infrastructure, with no fine-tuning required.

Users can search for “impassable roads” or “buildings with solar panels” and get results in seconds. Evaluation benchmarks have already shown promising results in object detection and segmentation tasks across diverse scenarios.
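The core idea behind this kind of open-vocabulary search, as used in OWL-ViT-style models, is that text queries and image regions are embedded into a shared vector space and matched by similarity. The toy sketch below uses made-up three-dimensional vectors in place of real model embeddings, just to show the retrieval mechanic.

```python
import numpy as np

# Toy illustration of open-vocabulary retrieval: rank image regions by
# cosine similarity to a text query in a shared embedding space.
# The vectors are invented, not real model output.

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

region_embeddings = {
    "region_1": np.array([0.9, 0.1, 0.0]),   # e.g. a rooftop with panels
    "region_2": np.array([0.1, 0.8, 0.3]),
    "region_3": np.array([0.2, 0.2, 0.9]),
}
query_embedding = np.array([1.0, 0.0, 0.1])  # "buildings with solar panels"

def search(query, regions, threshold=0.8):
    """Return regions whose similarity to the query exceeds the threshold,
    best match first."""
    scores = {name: cosine(query, emb) for name, emb in regions.items()}
    return sorted((n for n, s in scores.items() if s > threshold),
                  key=scores.get, reverse=True)

print(search(query_embedding, region_embeddings))
```

A real detector also predicts bounding boxes for each region; this sketch keeps only the text-to-region matching that makes queries like "impassable roads" possible without task-specific training.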

Google Earth has begun piloting these AI tools too, allowing users to create custom data layers and run geospatial analyses without any coding. Behind the scenes, the Geospatial Reasoning framework connects a Python-based interface to Google’s cloud-hosted models via the Vertex AI Agent Engine.

Airbus will be using the tools to unlock faster insights from decades of Earth observation data. Maxar aims to merge the AI models with its own near real-time “living globe” of satellite information. Planet Labs, which already delivers daily Earth imagery, will use Google’s models to fast-track decision-making for its government and commercial clients.

Geospatial Reasoning isn’t just a technical leap. It’s a new philosophy of interpreting the planet. And though it’s still in early development, the tools are available now via a trusted tester programme.

Google believes this approach will “think bigger” — helping scientists, analysts, and businesses bridge the gap between raw satellite data and the decisions that shape our world.
