
- The Digital Planet
- What is a Geographic Information System (GIS)?
- The Raw Material: Satellite Data Explained
- Google's Data Pipeline: From Orbit to Screen
- The Google Geospatial Ecosystem: A Comparison
- Google Earth: The World's Digital Globe
- The Developer's Toolkit: Google Maps Platform
- The Scientific Supercomputer: Google Earth Engine (GEE)
- The Glue and the Power-Ups: Other Google Tools
- A Practical Example: Building a Wildfire Risk App
- The Broader Context: Where Google Fits
- Challenges and the Road Ahead
- Summary
The Digital Planet
The globe on your desk is static. It’s a representation of the world, fixed in time, showing political boundaries and major landforms. For centuries, this was the best humanity could do. Today, we carry a living, breathing model of the planet in our pockets. This model, powered by a constant stream of data from space, doesn’t just show us where things are; it shows us how they’re changing, minute by minute, year over year.
This revolution in understanding our world is often broadly associated with Google Earth, the remarkable application that lets anyone “fly” from a global view down to their own street. But Google Earth is just the tip of the iceberg. It’s the public-facing lobby of a vast data factory. Behind the curtain lies a complex ecosystem of satellite data, processing engines, and developer tools that allow scientists, businesses, and even hobbyists to build their own geographic applications. These tools are changing everything from how we track deforestation to how a restaurant delivery gets to your door.
Understanding this ecosystem doesn’t require a background in coding. It requires understanding three key components: the raw data (where it comes from), the viewers (how we see it), and the engines (how we analyze it).
What is a Geographic Information System (GIS)?
Before exploring Google’s tools, it’s helpful to understand the field they operate in: Geographic Information Systems, or GIS. At its simplest, a GIS is a system designed to capture, store, analyze, manage, and present all types of spatial or geographical data.
The easiest way to think about GIS is to imagine a stack of transparent sheets.
- On the bottom sheet, you have a base map of the world, showing oceans and continents.
- On the next sheet, you draw all the country borders.
- On the next, you draw all the rivers and lakes.
- On top of that, you draw all the roads.
- Then, you add a sheet with all the city locations.
- Finally, you add a sheet with data from yesterday, showing areas that received rainfall.
If you look down through this stack, you see a complete picture. But the real power of GIS comes from asking questions between the layers. You can ask: “Which cities are within 10 miles of a major river and also received rainfall yesterday?” A GIS can answer this instantly. It connects where something is (its spatial data) with what it is (its attribute data).
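To make the idea concrete, here is a minimal sketch of how that cross-layer question could be answered in code, using the open-source GeoPandas library (not a Google product). The file names and the “name” column are hypothetical placeholders for whatever city, river, and rainfall layers an analyst actually has on hand.

```python
import geopandas as gpd

# Hypothetical layer files standing in for the transparent sheets described above.
cities = gpd.read_file("cities.shp").to_crs(epsg=3857)                 # city points
rivers = gpd.read_file("major_rivers.shp").to_crs(epsg=3857)           # river lines
rainfall = gpd.read_file("rainfall_yesterday.shp").to_crs(epsg=3857)   # rain polygons

# "Within 10 miles of a major river": buffer every river by about 16,093 meters.
river_zone = rivers.buffer(16_093).unary_union

# Keep only the cities inside that buffer zone...
cities_near_river = cities[cities.within(river_zone)]

# ...then keep the ones that also fall inside a polygon of yesterday's rainfall.
answer = gpd.sjoin(cities_near_river, rainfall, predicate="within")
print(answer["name"].tolist())
```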
Google’s tools have taken this core concept, which was once restricted to expensive software in government and university labs, and made it accessible to billions.
The Raw Material: Satellite Data Explained
You can’t have a digital globe without pictures of the globe. Google’s most visible data source is satellite imagery. It’s important to know that Google does not own or operate most of the satellites that collect this imagery. Instead, it aggregates and processes data from a vast network of public and commercial partners. To understand the tools, one must first appreciate the data.
How Satellites See the World: Spatial Resolution
All digital images are made of pixels, or tiny squares of color. In satellite imagery, each pixel represents a real-world area. The size of this area is the image’s spatial resolution.
- Low-Resolution imagery might have a 250-meter resolution. This means each pixel represents a square on the ground that is 250 meters by 250 meters. This is useful for seeing planet-wide patterns, like weather systems or large-scale climate trends, but you can’t see a building.
- Medium-Resolution imagery, like that from the Landsat program, often has a 30-meter resolution. One pixel is about the size of a baseball infield. You can’t see a car, but you can clearly distinguish a forest from a field, or see a new housing development being built.
- High-Resolution imagery, often from commercial providers, can have a resolution of 50 centimeters or less. At this level, you can see individual cars, trees, and even people (though not in identifiable detail).
Google’s base map is a mosaic of different resolutions. In remote areas like the Siberian tundra, it uses medium-resolution data. For a major city, it uses extremely high-resolution aerial and satellite imagery to give you a clear view.
More Than Just a Picture: Spectral Resolution
Our eyes see visible light in three bands: red, green, and blue. Satellites are far more powerful. They can “see” in many different bands of the light spectrum, including bands invisible to humans, like near-infrared (NIR) and short-wave infrared (SWIR). This is an image’s spectral resolution.
This “super-vision” is what makes satellite data so useful for science.
- Plant Health: Healthy, photosynthesizing plants reflect a lot of near-infrared light. Stressed or dead plants don’t. By comparing the red light (which plants absorb) and the NIR light (which they reflect), scientists create an index called NDVI (Normalized Difference Vegetation Index). This gives a direct, measurable indicator of vegetation health (a short worked example follows this list).
- Fire Detection: Satellites can see in thermal infrared bands, allowing them to detect the heat signature of wildfires, even through thick smoke.
- Geology: Different minerals reflect different bands of SWIR light. Geologists can use satellite imagery to map mineral deposits from orbit.
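The NDVI calculation itself is simple arithmetic on the red and near-infrared measurements. The formula below is the standard one; the sample reflectance values are purely illustrative.

```python
def ndvi(red, nir):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red).
    Results range from -1 to +1; dense, healthy vegetation typically scores above ~0.6."""
    return (nir - red) / (nir + red)

# Illustrative reflectance values (the fraction of light reflected in each band):
print(ndvi(red=0.05, nir=0.50))   # ~0.82 -- healthy, leafy canopy
print(ndvi(red=0.25, nir=0.30))   # ~0.09 -- bare soil or stressed plants
```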
Watching the Clock: Temporal Resolution
Temporal resolution is a simple concept: how often does the satellite return to the same spot? This “revisit rate” is vital.
- A satellite with a 16-day temporal resolution (like Landsat) is excellent for monitoring long-term changes, like a forest growing over decades or a city expanding.
- A satellite constellation with a daily temporal resolution (like those from Planet Labs) is built for monitoring rapid change. It can see the day-by-day progress of a construction project, track ships moving through a port, or monitor the immediate aftermath of a flood.
The Workhorses: Key Satellite Constellations
Google’s platforms ingest data from dozens of sources, but a few stand out as the backbone of modern Earth observation.
- Landsat: This is the most important archive of Earth imagery. A joint mission between NASA and the USGS, the Landsat program has been continuously imaging the Earth since 1972. This unbroken, 50-plus-year record is the single most powerful tool we have for studying long-term environmental change. Its data is public and free.
- Sentinel: Part of the European Union’s Copernicus Programme and operated by the European Space Agency (ESA), the Sentinel family of satellites provides a massive amount of high-quality, free, and open data. Sentinel-2 is similar to Landsat but has a higher spatial resolution (10 meters) and a faster revisit rate (every 5 days). Sentinel-1 is a radar satellite. Radar is special because it can see through clouds and at night, making it perfect for monitoring floods, oil spills, and ship movements in all weather conditions.
- Commercial Providers: Companies like Maxar Technologies provide the “spy-satellite-quality” imagery you see in cities. This data is collected for commercial and government clients but is often licensed by Google to be part of its high-resolution base map.
Google’s Data Pipeline: From Orbit to Screen
Google’s first major challenge isn’t analysis; it’s data management. It must ingest petabytes (thousands of terabytes) of this raw data and make it usable.
Stitching the Quilt: Creating the Base Map
Raw satellite images don’t look like Google Earth. They are offset, have different color balances, and are full of clouds. Google’s pipeline performs a process called mosaicking. It ingests millions of images, uses algorithms to pick the best cloud-free pixels from each one, stitches them together like a seamless quilt, and color-corrects the entire globe so it looks like a single, unified photograph. This “base map” is a constantly updated product.
Adding Dimensions: 3D and Street View
Google’s data isn’t just “top-down.”
- Street View: This data is collected by a fleet of cars, backpacks, and even snowmobiles equipped with 360-degree cameras. It provides a human-level perspective of the world.
- 3D Imagery: The realistic 3D models of cities and mountains aren’t drawn by hand. They are generated using a process called photogrammetry. By taking thousands of pictures of a city from airplanes (flying in a grid pattern) and from satellites (at different angles), computers can measure the parallax shift between images to build a complex 3D mesh and drape the photographic texture over it.
The Google Geospatial Ecosystem: A Comparison
It’s common to confuse Google’s different mapping products. Each serves a distinct purpose, from casual viewing to complex scientific analysis. The audience and the goal are completely different for each platform.
Key Platform Distinctions
A simple way to understand the ecosystem is to compare the three main pillars: Google Earth, Google Maps Platform, and Google Earth Engine. They answer different fundamental questions.
| Platform | Primary User | Main Purpose | Example Use Case |
| --- | --- | --- | --- |
| Google Earth | The general public, students, casual users | Viewing & Exploring | “I want to fly to my childhood home and see it in 3D.” |
| Google Maps Platform | Application developers (e.g., for mobile or web apps) | Building & Integrating | “My app needs to show a store locator and calculate delivery routes.” |
| Google Earth Engine | Scientists, GIS analysts, researchers | Analyzing & Discovering | “I need to analyze 30 years of satellite data to map deforestation in the Amazon.” |
Google Earth: The World’s Digital Globe
Google Earth is the application most people are familiar with. It began its life as a product called EarthViewer 3D, created by Keyhole, Inc., which Google acquired in 2004. It is, first and foremost, a viewer. Its job is to let you explore the massive, pre-processed database of imagery and 3D models that Google has curated.
Beyond the Base Map: The Power of Layers
Google Earth isn’t just a 3D picture. It’s a simple GIS. Users can turn on and off various “layers” of information:
- Roads and Labels: The familiar network of streets, highways, and city names.
- Borders: Political boundaries for countries, states, and counties.
- 3D Buildings: The photorealistic models of cities.
- Weather: A layer showing real-time cloud cover and radar.
Voyager: Guided Tours for Everyone
To make its data more accessible, Google built a storytelling feature called Voyager. This provides guided tours curated by partners like BBC Earth, NASA, and National Geographic. A user can follow a tour on “Migrations” and be flown from location to location, with context, videos, and information appearing at each stop. It turns the globe into an interactive documentary.
Historical Imagery: The Time Machine
One of Google Earth’s most powerful, and often overlooked, features is its historical imagery tool. In many locations, users can access a simple slider that lets them “go back in time.” They can watch a city expand over 20 years, see a glacier recede, or view the impact of a natural disaster like a tsunami, all by accessing the vast archive of satellite and aerial photos.
The Developer’s Toolkit: Google Maps Platform
This is where we move from viewing to building. The Google Maps Platform is not a single app. It’s a suite of Application Programming Interfaces (APIs).
An API is a set of rules and tools that lets different software applications talk to each other. A useful analogy is a restaurant menu. As a customer (a developer), you don’t need to know how to cook (build a global map database). You just need to read the menu (the API) and place an order (“I’d like a map of Boston”). The kitchen (Google’s servers) prepares the order and the waiter (the API) brings it to your table (your application).
The Google Maps Platform is what powers countless apps you use every day. When Uber shows you a map with cars on it, or when Zillow shows you houses for sale on a map, they are using this platform. It is built for developers who need to add location-based features to their own products.
The platform is broadly broken into three categories.
Maps: Putting the World on a Page
This set of APIs lets developers embed Google’s base map into their own property.
- Maps JavaScript API: This is the tool for websites. It allows a developer to put a map on a webpage, customize its style (e.g., make it dark mode, or only show roads), and add markers, lines, and shapes. This is the workhorse of every “store locator” page on the internet.
- Maps SDK for Android/iOS: This is the equivalent tool for building native mobile apps. It gives the app developer the code needed to embed the same familiar Google Map interface directly into their application, allowing users to pan, zoom, and interact with it.
- Street View Static API: Allows a developer to request a static 360-degree Street View image. A real-estate website might use this to show a “street-level view” of a property.
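As a flavor of how these APIs are called, here is a minimal sketch of fetching a single image from the Street View Static API with Python’s requests library. The API key is a placeholder, and the key must have the Street View Static API enabled in a Google Cloud project.

```python
import requests

API_KEY = "YOUR_API_KEY"  # placeholder; the Street View Static API must be enabled for this key

response = requests.get(
    "https://maps.googleapis.com/maps/api/streetview",
    params={
        "size": "640x400",               # image width x height in pixels
        "location": "37.422,-122.084",   # latitude,longitude (an address also works)
        "heading": 90,                   # compass direction the camera faces
        "key": API_KEY,
    },
)

with open("streetview.jpg", "wb") as f:
    f.write(response.content)            # save the returned JPEG to disk
```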
Routes: Finding the Best Path
This group of APIs is all about movement from Point A to Point B.
- Directions API: This is the core routing engine. A developer can ask for directions between two (or more) points and Google will return the best path, estimated time, and step-by-step instructions for driving, walking, biking, or public transit (see the request sketch after this list).
- Distance Matrix API: This is a more powerful tool for logistics. A developer can provide a list of starting points and a list of ending points, and the API will return the travel time and distance for every possible combination. A delivery company uses this to figure out which driver is closest to which pickup.
- Roads API: This tool helps make sense of messy GPS data. A GPS tracker in a vehicle might “drift,” placing the car in a field next to the highway. The Roads API can take this messy path and “snap” it to the most likely road the vehicle was traveling on.
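A minimal sketch of a Directions API request, again in Python with placeholder values; the response fields shown (routes, legs, steps) reflect how the returned JSON is organized.

```python
import requests

API_KEY = "YOUR_API_KEY"  # placeholder; the Directions API must be enabled for this key

response = requests.get(
    "https://maps.googleapis.com/maps/api/directions/json",
    params={
        "origin": "Boston, MA",
        "destination": "Cambridge, MA",
        "mode": "bicycling",             # driving, walking, bicycling, or transit
        "key": API_KEY,
    },
)

leg = response.json()["routes"][0]["legs"][0]
print(leg["distance"]["text"], leg["duration"]["text"])
for step in leg["steps"]:
    print("-", step["html_instructions"])   # turn-by-turn instructions (HTML-formatted)
```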
Places: Understanding What’s There
This is perhaps the most heavily used part of the platform. It’s not about the map; it’s about the data on the map. Google has a massive database of over 200 million businesses, parks, landmarks, and “points of interest.”
- Places API: This is the API behind “search near me.” A developer can use it to build features like:
- Place Autocomplete: As a user types in a search box, this API suggests a list of matching places and addresses, saving time and reducing errors.
- Place Details: A developer can request all known information about a specific place: its address, phone number, user reviews, photos, price rating, and business hours. This powers apps like Tripadvisor and Yelp.
- Current Place: Lets an app ask, “Where is the user right now?” and return a list of likely places.
- Geocoding API: This is a fundamental digital “address book.” It converts a human-readable address (like “1600 Amphitheatre Parkway, Mountain View, CA”) into machine-readable latitude and longitude coordinates (37.422, -122.084). It also works in reverse (reverse geocoding), turning coordinates back into an address. A request sketch follows this list.
- Geolocation API: This API helps find a user’s location without using GPS. It can triangulate a user’s position based on which Wi-Fi networks and cell phone towers their device can see. This is often faster and works better indoors than traditional GPS.
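The Geocoding API behaves the same way: a plain HTTPS request in, structured JSON out. A minimal sketch, using the address from the example above and a placeholder key:

```python
import requests

API_KEY = "YOUR_API_KEY"  # placeholder

response = requests.get(
    "https://maps.googleapis.com/maps/api/geocode/json",
    params={"address": "1600 Amphitheatre Parkway, Mountain View, CA", "key": API_KEY},
)

result = response.json()["results"][0]
location = result["geometry"]["location"]
print(result["formatted_address"])
print(location["lat"], location["lng"])   # roughly 37.422, -122.084
```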
The Scientific Supercomputer: Google Earth Engine (GEE)
If Google Maps Platform is a set of building blocks, Google Earth Engine is a planet-scale supercomputer. This is the least-known but, for science and the environment, the most impactful of Google’s geospatial tools.
It is not Google Earth. It is not a viewer. It is an analysis platform.
A New Paradigm: Computing at Planet-Scale
Earth Engine was built to solve a simple, massive problem: data size. The complete Landsat archive is multiple petabytes. A scientist who wants to study deforestation in the Amazon from 1984 to today would, in the old days, have to spend months ordering, downloading, and processing thousands of individual satellite images on a local university computer.
Google Earth Engine flips this model on its head. It hosts the entire public archive of Landsat, Sentinel, and dozens of other climate, weather, and topography datasets in its own cloud. A scientist no longer downloads the data. Instead, they upload their code to the data.
How It Works (For a Layperson)
A researcher accesses GEE through a web browser.
- The Code Editor: They write a short script in JavaScript or Python. This script is an instruction, like: “Find all Landsat images for the Amazon basin. For every year since 1984, calculate the average ‘greenness’ (NDVI). Plot the result on a chart.” (A sketch of such a script appears just after this list.)
- The “Server-Side” Magic: When the researcher hits “Run,” their tiny script is sent to Google’s massive data centers. GEE’s “parallel processing” engine breaks their request into thousands of small pieces. Tens of thousands of computers might work together for a few seconds to process all 40 years of data simultaneously.
- The Result: Instead of downloading petabytes of data, the researcher gets back just the final answer: a small chart or a new map showing the trend.
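For the curious, here is roughly what such a script might look like in the Earth Engine Python API. It is a minimal sketch, assuming the earthengine-api package is installed and the user has already authenticated; the bounding box and the short range of years are illustrative, and instead of plotting a chart it simply prints one average NDVI value per year.

```python
import ee

ee.Initialize()  # assumes `earthengine authenticate` has already been run

# Illustrative bounding box over part of the Amazon basin (longitude/latitude degrees).
region = ee.Geometry.Rectangle([-70, -10, -55, 0])

def mean_ndvi_for_year(year):
    """Average NDVI over the region for one year of Landsat 5 surface reflectance."""
    images = (ee.ImageCollection("LANDSAT/LT05/C02/T1_L2")
              .filterBounds(region)
              .filter(ee.Filter.calendarRange(year, year, "year")))
    # For Landsat 5, band SR_B4 is near-infrared and SR_B3 is red.
    ndvi = images.map(lambda img: img.normalizedDifference(["SR_B4", "SR_B3"]))
    return ndvi.mean().reduceRegion(
        reducer=ee.Reducer.mean(), geometry=region, scale=1000, maxPixels=1e9)

for year in range(1985, 1990):   # a short range, for illustration
    print(year, mean_ndvi_for_year(year).getInfo())
```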
This “parallel computing” model reduces analytical tasks that once took months to just a few seconds.
The Data Catalog: A Library of the Earth
The core of GEE is its data catalog. It contains the full historical archives of:
- Optical Imagery: All Landsat missions, Sentinel-2, and daily global datasets like MODIS.
- Radar Imagery: The complete Sentinel-1 archive, which is invaluable for flood and agriculture monitoring because it sees through clouds.
- Atmospheric and Weather Data: Decades of data on temperature, precipitation, and climate model predictions.
- Geophysical Data: Topography datasets (like SRTM) that show the slope and elevation of all land on Earth.
- Land Cover and Socio-Economic Data: Pre-classified maps showing forests, urban areas, croplands, population density, and even night-time light emissions.
A developer can fuse any of these datasets together. They can ask: “Show me all areas that have a steep slope, are covered in forest, and received high rainfall last week.”
Real-World Applications of Earth Engine
GEE is the engine behind thousands of critical environmental and social applications.
- Tracking Deforestation: The Global Forest Watch project uses GEE to analyze Sentinel data in near-real-time. It provides weekly alerts to governments and conservation groups when it detects new patterns of tree loss, helping to catch illegal logging.
- Managing Water Resources: The GEE team, in partnership with scientists, created a map of all surface water on Earth, showing how lakes, rivers, and reservoirs have changed every month for 35 years. This helps water managers in places like California or the Nile basin make better decisions about water allocation.
- Revolutionizing Agriculture: GEE is a cornerstone of “precision agriculture.” Food companies and farming co-ops use it to monitor the health of millions of acres of farmland. By analyzing spectral data, they can create “prescription maps” that tell a tractor to apply more fertilizer to one part of a field and less to another, saving money and reducing environmental runoff.
- Disaster Response: When a flood hits, aid agencies use GEE to run algorithms on Sentinel-1 (radar) data. By comparing a “before” and “after” image, the radar can instantly map the full extent of the floodwater, even under cloud cover, allowing responders to identify the hardest-hit villages.
- Public Health: Epidemiologists use GEE to map disease risk. To track malaria, for example, they can combine data on temperature, humidity, and surface water to create a habitat model that predicts where mosquito populations (the disease vectors) are likely to spike.
- Urban Planning: City planners use GEE to study the “urban heat island” effect. They can use thermal data from Landsat to identify the hottest neighborhoods in a city, then overlay that with land cover data to see that these hot spots correlate with a lack of tree canopy, guiding new tree-planting initiatives.
GEE Apps
Finally, GEE allows scientists to wrap their complex analysis into a simple, user-friendly webpage. A researcher can build a complex model for drought prediction, then create a GEE App with a simple dropdown menu for a state governor or a farmer, who can select their region and a date to see the drought forecast without ever seeing a line of code.
The Glue and the Power-Ups: Other Google Tools
Beyond these three main platforms, a few other tools and data types complete the ecosystem.
KML: The Lingua Franca of Geodata
KML (Keyhole Markup Language) is a file format, just like a .pdf or a .txt file. It’s the standard way to save and share geographic data for use in Google Earth. A KML file is a simple text file that describes what to draw on a map. It can contain:
- Placemarks: Simple pins with a name and description.
- Paths: Lines, like a hiking trail or a flight path.
- Polygons: Shapes, like the boundary of a park or a sales territory.
- Image Overlays: A way to “drape” a custom image over the globe.
KML is the glue that connects many platforms. A scientist can run an analysis in GEE, export the result as a KML file, and email it to a colleague who can open it directly in Google Earth to see it in 3D.
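For reference, here is what a complete, minimal KML file looks like: a single placemark at the Mountain View coordinates used earlier. Saved with a .kml extension, a file like this opens directly in Google Earth.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Placemark>
    <name>Googleplex</name>
    <description>A single pin, shareable and openable in Google Earth.</description>
    <Point>
      <!-- KML lists coordinates as longitude,latitude[,altitude] -->
      <coordinates>-122.084,37.422,0</coordinates>
    </Point>
  </Placemark>
</kml>
```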
Google Earth Pro: The Desktop Powerhouse
While the main Google Earth is now web-based, Google still offers a free, installable desktop version called Google Earth Pro. This is a “power user” tool. It’s not an analysis engine like GEE, but it has more advanced features than the web version. Its main benefits are:
- Data Import: It can import more complex GIS data formats, like Esri Shapefiles (SHP), in addition to KML.
- Advanced Measurements: Users can draw polygons and instantly calculate their area and perimeter, or even draw paths to see an elevation profile.
- High-Resolution Output: It’s the best tool for creating high-resolution, print-quality JPEGs from the 3D globe, which is ideal for reports and presentations.
Photorealistic 3D Tiles
This is one of Google’s newest and most visually impressive data products. Using the same photogrammetry techniques that build the 3D cities in Google Earth, Google now offers this 3D “mesh” as a product for developers. A developer building a flight simulator, a real-estate visualization app, or a game can license these ultra-realistic 3D models and stream them into their own application, providing a “digital twin” of a city.
Integrating Artificial Intelligence
The newest frontier is the combination of geospatial data with artificial intelligence (AI). Google has integrated its AI and machine learning platform (using tools like TensorFlow) directly with Google Earth Engine.
Instead of a scientist writing a script based on spectral indices (like NDVI) to guess where crops are, they can now train a machine learning model. They can point to 500 examples of “cornfields” and 500 examples of “soybean fields” and tell the AI, “Go find all the other ones.” The AI model can then scan an entire country’s worth of satellite data and produce a highly accurate crop map. This same technique is used to “find all the swimming pools in a city” for tax assessment, or “count all the ships in a port” for economic analysis, or “identify all the damaged buildings after an earthquake.”
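As a rough illustration of that workflow, the sketch below uses Earth Engine’s built-in random forest classifier rather than the TensorFlow integration described above; it is a simplified stand-in, and the labeled training polygons (a FeatureCollection with a numeric “crop” property) are a hypothetical asset the analyst would supply.

```python
import ee

ee.Initialize()

# A (hypothetical) farming region and one cloud-reduced Sentinel-2 summer mosaic over it.
region = ee.Geometry.Rectangle([-94.0, 41.0, -93.0, 42.0])   # illustrative coordinates
image = (ee.ImageCollection("COPERNICUS/S2_SR_HARMONIZED")
         .filterBounds(region)
         .filterDate("2022-06-01", "2022-09-01")
         .median()
         .select(["B2", "B3", "B4", "B8"]))                  # blue, green, red, near-infrared

# Hypothetical hand-labeled polygons with a numeric 'crop' property (0 = corn, 1 = soybeans).
labeled_fields = ee.FeatureCollection("users/example/labeled_fields")

# Sample the imagery under the labeled polygons to build a training table.
training = image.sampleRegions(collection=labeled_fields, properties=["crop"], scale=10)

# Train a random forest and classify every pixel in the mosaic.
classifier = ee.Classifier.smileRandomForest(numberOfTrees=50).train(
    features=training, classProperty="crop", inputProperties=["B2", "B3", "B4", "B8"])
crop_map = image.classify(classifier)   # a new map: each pixel labeled corn or soybeans
```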
A Practical Example: Building a Wildfire Risk App
To tie all these concepts together, let’s walk through a non-technical example of how a developer would build a modern GIS application.
The Idea: A website for homeowners in California that shows the wildfire risk for any given address.
Step 1: The Base Map and Search (Google Maps Platform)
The developer starts by using the Maps JavaScript API to embed a simple, familiar Google Map on their homepage. They then use the Places API (specifically, the Place Autocomplete feature) to create the search bar. When a user starts typing their address, this API provides the suggestions. When they select their address, the Geocoding API converts that address into latitude/longitude coordinates and drops a pin on the map.
Step 2: Getting the Risk Data (Google Earth Engine)
This is the hard part. “Wildfire risk” isn’t a single dataset. It’s a model based on other data. The developer goes to Google Earth Engine to build this model. They write a script that combines three key datasets from the GEE catalog:
- Fuel: They use Landsat data to calculate NDVI, creating a map of vegetation density. Drier, denser vegetation is higher-risk.
- Topography: They use the SRTM dataset. Fire moves much faster uphill, so they create a map of “steepness,” or slope.
- Weather: They pull in a weather dataset showing 30-day precipitation, identifying areas that are exceptionally dry.
Step 3: The Analysis (Google Earth Engine)
The developer’s GEE script combines these three layers. It creates a new, final layer where each pixel has a “risk score.” A pixel gets a high score if it is on a steep slope, covered in dry vegetation, and hasn’t seen rain. The script runs this analysis for all of California in seconds. The developer exports this final “risk map” as a data file.
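A hedged sketch of what Steps 2 and 3 might look like in Earth Engine code follows. The dataset choices mirror the ones named above, but the date windows, scaling, weights, and export settings are all illustrative assumptions rather than a real fire-risk model.

```python
import ee

ee.Initialize()

california = ee.Geometry.Rectangle([-124.5, 32.5, -114.0, 42.0])   # rough bounding box

# Fuel: a recent vegetation-density layer from Landsat 9 NDVI.
ndvi = (ee.ImageCollection("LANDSAT/LC09/C02/T1_L2")
        .filterBounds(california)
        .filterDate("2023-06-01", "2023-08-01")
        .map(lambda img: img.normalizedDifference(["SR_B5", "SR_B4"]))
        .median())

# Topography: slope in degrees derived from the SRTM elevation model.
slope = ee.Terrain.slope(ee.Image("USGS/SRTMGL1_003"))

# Weather: total rainfall over the previous 30 days (CHIRPS daily precipitation, mm).
rain = (ee.ImageCollection("UCSB-CHG/CHIRPS/DAILY")
        .filterDate("2023-07-02", "2023-08-01")
        .sum())

# Toy risk score: each ingredient rescaled to roughly 0-1 and averaged.
# Rainfall enters inverted, so drier areas push the score up.
risk = (ndvi.unitScale(0, 1)
        .add(slope.unitScale(0, 45))
        .add(rain.unitScale(0, 100).multiply(-1).add(1))
        .divide(3))

# Export the finished risk layer so it can be styled and draped over the web map in Step 4.
ee.batch.Export.image.toDrive(image=risk, description="ca_wildfire_risk",
                              region=california, scale=500, maxPixels=1e9).start()
```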
Step 4: The Application (Google Maps Platform)
Back on the website, the developer takes their custom “risk map” and overlays it on top of the Google base map as a semi-transparent layer, styled like a weather map (e.g., green for low risk, red for high risk).
Step 5: The Final Product
A homeowner visits the site and types in their address. The pin drops. The application “reads” the color of the risk layer directly beneath their pin and displays a message: “Your home is in a high-risk area.” To add more value, the developer uses the Places API again, adding a button that says, “Find nearby fire stations,” which then displays the closest ones.
This single application uses Google Maps Platform for the user interface and Google Earth Engine for the heavy-duty scientific analysis.
The Broader Context: Where Google Fits
Google, while a giant, is not the only player in the GIS world. The ecosystem is vast, and interoperability is key.
The Open-Source Community
A massive, global community builds and maintains free, open-source GIS software.
- QGIS: This is the “Photoshop for maps.” It’s a free, downloadable desktop application that is extremely powerful. A GIS analyst might use QGIS to create and edit their own data (like a custom park boundary) before uploading it to a platform such as Google Earth or Earth Engine.
- PostGIS: This is an open-source database extension that gives a standard PostgreSQL database the power to understand and query spatial data.
Commercial GIS Giants
For decades, the undisputed leader in enterprise-level GIS has been Esri. Its flagship platform, ArcGIS, is the industry standard for large organizations, public utilities, and national governments. Esri’s tools are known for their depth, precision, and robust management of complex internal systems, like a city’s entire water-pipe network.
Interoperability: Playing Together
These systems are not mutually exclusive. A large organization might use Esri’s ArcGIS as its “system of record” to manage its private data. It might use Google Earth Engine to analyze that data against global-scale public satellite imagery. And it might use the Google Maps Platform to publish the final result to a public-facing website or mobile app, all using open-standard file formats like KML or GeoJSON to communicate between platforms.
Challenges and the Road Ahead
This field is moving at an incredible pace, but challenges remain.
- The Problem with Clouds: The best optical satellites (Landsat, Sentinel-2) can’t see through clouds. This is a major issue in tropical regions, which are often cloud-covered for most of the year. This has driven the increased reliance on radar satellites like Sentinel-1.
- Data Literacy: The tools are becoming easier to use, but the data is complex. An analysis is only as good as the scientist’s understanding of the data’s limitations. A major challenge is training a new generation of analysts, managers, and policymakers who know how to ask the right questions of this data.
- The Future: A “Digital Twin” of Earth: The ultimate goal for many in this field is to create a “Digital Twin” of the planet. This isn’t just a static 3D model like Google Earth. It’s a living, breathing, real-time simulation, constantly fed by data from satellites, ground sensors, and Internet of Things (IoT) devices. Such a model could be used to simulate the impact of a new policy, predict the spread of a wildfire, or model the effect of sea-level rise on a city, block by block.
- The Role of AI: AI will continue to be the biggest driver of change. It will automate the extraction of features from imagery, moving the industry from monitoring (seeing what happened) to prediction (forecasting what will happen).
Summary
Google’s geospatial suite has fundamentally changed our relationship with our planet. It has successfully split a complex field into three accessible parts. Google Earth invites the public to explore a beautiful, curated 3D model of the world. The Google Maps Platform provides the essential building blocks for developers to integrate location into any application. And Google Earth Engine offers a planetary-scale supercomputer to the scientific community, enabling them to analyze decades of data in seconds.
Together, these tools have democratized access to geographic information. What was once the exclusive domain of spy agencies and research universities is now in the hands of everyone, from a student exploring the Roman Forum in 3D to a small non-profit monitoring deforestation in Madagascar. They’ve provided not just a map, but a new lens through which to understand the complex, interconnected systems of our changing world.