Latency: From Orbit to Application
The satellites are fast enough — the ground segment is where the time goes
Latency in satellite systems is the total time between a sensor observing something on Earth and that observation becoming usable information. It is not one delay — it is a chain of them. Propagation through space, queuing at ground stations, decryption, format conversion, atmospheric correction, reprojection, tiling, indexing, and delivery. Some links are governed by physics and cannot be shortened. Others are engineering choices. Understanding where latency lives determines what questions you can answer with the data. A flood map delivered in 15 minutes can direct evacuations. The same map delivered in 48 hours is a historical record.
Why It Matters
The Earth observation industry has invested enormous effort in making satellites see better: higher resolution, more spectral bands, wider swaths, shorter revisit times. The two-satellite Sentinel-2 constellation revisits the same patch of ground every five days at the equator. Planet's SuperDove constellation images the Earth's entire landmass daily. SAR systems like Sentinel-1 and ICEYE operate through cloud and darkness.
But the question that matters to the person waiting for the data is not how often the satellite looks. It is how long after looking before the information arrives in a form they can use. A satellite with a five-day revisit and a two-hour delivery pipeline is more useful for time-sensitive applications than a satellite with a daily revisit and a three-day processing backlog.
Latency is the distance between observation and action. In disaster response, it is the difference between life and death. In agriculture, it is the difference between catching a pest outbreak in one field and catching it after it has spread to twenty. In insurance, it is the difference between assessing a claim with current imagery and assessing it with stale data that a claimant can dispute. In military and intelligence contexts, it is the difference between seeing a transporter-erector-launcher on a road and seeing the road where it used to be.
The industry talks about resolution constantly. It talks about latency far less. This is a problem, because for a growing number of applications, latency is the binding constraint.
The Propagation Segment: What Physics Dictates
The first component of latency is the one you cannot negotiate with. Electromagnetic radiation travels at the speed of light — approximately 299,792 kilometres per second in vacuum. The time it takes a signal to travel from a satellite to a ground station is determined entirely by the distance between them.
For a satellite in low Earth orbit at 700 kilometres altitude, the minimum propagation delay — when the satellite is directly overhead — is about 2.3 milliseconds one way. At the edge of the satellite's visibility window, where the slant range increases to perhaps 2,500 kilometres, propagation delay rises to roughly 8.3 milliseconds. These are trivially small numbers. For LEO Earth observation satellites, propagation delay is not a meaningful contributor to end-to-end latency.
The picture changes at higher orbits. A medium Earth orbit satellite at 20,200 kilometres — the altitude of GPS — has a one-way propagation delay of about 67 milliseconds. A geostationary satellite at 35,786 kilometres introduces approximately 120 milliseconds of one-way delay, or 240 milliseconds round-trip. This is perceptible in voice communications and significant in interactive data sessions, but still small compared to processing delays.
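These figures are easy to verify. A minimal sketch, dividing path length by the speed of light, using the distances quoted above:

```python
# Back-of-envelope one-way propagation delays for the orbits discussed above.
C_KM_PER_S = 299_792.458  # speed of light in vacuum, km/s

def one_way_delay_ms(path_km: float) -> float:
    """Signal travel time in milliseconds over a given path length."""
    return path_km / C_KM_PER_S * 1000.0

for label, path_km in [
    ("LEO overhead (700 km)", 700),
    ("LEO edge of visibility (~2,500 km slant range)", 2_500),
    ("MEO / GPS altitude (20,200 km)", 20_200),
    ("GEO (35,786 km)", 35_786),
]:
    print(f"{label}: {one_way_delay_ms(path_km):.1f} ms")
```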
For Earth observation specifically, propagation delay is a rounding error. The data that takes 2.3 milliseconds to reach the ground will then spend minutes, hours, or days in processing queues. The speed of light is not the problem.
The Contact Window: When the Ground Can Listen
This is where latency starts to accumulate in ways that matter.
A LEO satellite does not have continuous contact with any single ground station. It circles the Earth roughly every 90 minutes, and a given ground station sees it only on some of those orbits, in passes that typically last 8 to 12 minutes, depending on the station's latitude, the satellite's orbital inclination, and the minimum elevation angle the station antenna can track to.
If a satellite captures an image and happens to be within view of a ground station, it can begin transmitting almost immediately. If the satellite captured the image over the Amazon basin and the nearest compatible ground station is in Svalbard, the data sits in the satellite's onboard storage until the next contact window — which could be anywhere from 20 minutes to several hours, depending on orbital geometry and the ground station network's coverage.
This is the contact gap, and for many EO missions it is the single largest contributor to latency.
The gap is a function of infrastructure investment. More ground stations means more frequent contact windows. The major ground station networks (KSAT's SvalSat complex in Svalbard, which sees nearly every orbit of a polar-orbiting satellite; ESA's network; NASA's Near Earth Network; and the growing commercial Ground-Station-as-a-Service providers like AWS Ground Station and Azure Orbital) collectively provide decent global coverage. But "decent" still means that for many passes, data waits.
Svalbard is uniquely valuable because its location at 78°N means polar-orbiting satellites, which most EO missions use, pass overhead on nearly every orbit. A polar ground station can achieve 12 to 14 contacts per day with a typical LEO satellite. A mid-latitude station might manage 4 to 6. An equatorial station, 2 to 4. The geometry of orbital mechanics makes polar ground infrastructure disproportionately important for EO latency.
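The contact geometry is straightforward to explore. Below is a sketch using the skyfield library to count contact windows over 24 hours for a Svalbard-like station. The TLE is a placeholder with roughly sun-synchronous elements; substitute a live element set for any real mission.

```python
# Contact-window sketch with the skyfield library. The TLE is a placeholder,
# not a real satellite's elements. Station coordinates approximate Svalbard.
from skyfield.api import EarthSatellite, load, wgs84

ts = load.timescale()
tle1 = "1 40697U 15028A   24001.50000000  .00000100  00000-0  50000-4 0  9994"
tle2 = "2 40697  98.5700 100.0000 0001000  90.0000 270.0000 14.30817000000001"
sat = EarthSatellite(tle1, tle2, "EXAMPLE-SAT", ts)

station = wgs84.latlon(78.23, 15.39)       # polar site, ~78 degrees north
t0 = ts.utc(2024, 1, 1)
t1 = ts.utc(2024, 1, 2)

# find_events reports rise (0), culminate (1), and set (2) relative to the
# chosen minimum elevation angle.
times, events = sat.find_events(station, t0, t1, altitude_degrees=5.0)
print(f"Contact windows in 24 h: {sum(1 for e in events if e == 0)}")
```

Rerunning this for a mid-latitude or equatorial station location shows the drop-off in daily contacts described above.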
Inter-satellite links change this calculation. If a satellite can relay data to another satellite that is currently in view of a ground station, the contact gap shrinks or disappears. Starlink's laser inter-satellite links demonstrate this at scale for communications. For EO, relay services like the European Data Relay System (EDRS) — which uses geostationary relay satellites with optical links to LEO — can reduce the contact gap to near zero for equipped missions. Sentinel-1 and Sentinel-2 both use EDRS for priority data delivery, achieving ground receipt within minutes of acquisition.
But relay services add their own complexity: the LEO satellite needs a compatible laser terminal, the relay satellite needs capacity, and the ground station receiving from the relay satellite needs to be online. Each hop is a potential queue.
The Downlink: Bandwidth as a Bottleneck
Once the satellite has contact with a ground station, the data needs to get down. The time this takes depends on two things: how much data there is and how fast the link can carry it.
A single Sentinel-2 granule — a 100x100 kilometre tile at full resolution across all 13 spectral bands — is approximately 800 megabytes. Sentinel-2's X-band downlink operates at 520 Mbps. Transmitting one granule takes roughly 12 seconds. In a typical 10-minute contact window, the satellite can downlink approximately 39 gigabytes. That is manageable.
But the satellite collects data continuously along its orbital track, and onboard storage accumulates between contact windows. If the satellite has been imaging for 45 minutes since its last ground contact, it may have tens of gigabytes queued. The contact window becomes a fire hose: everything that was collected must be transmitted in the available minutes.
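The link budget arithmetic is worth making explicit. A sketch using the Sentinel-2 figures above, with an assumed backlog:

```python
# Downlink budget sketch using the Sentinel-2 figures from the text; the
# backlog figure is an assumption for illustration.
link_rate_mbps = 520            # X-band downlink rate
window_s = 10 * 60              # usable contact window, seconds
granule_mb = 800                # one granule, all 13 bands, megabytes

granule_time_s = granule_mb * 8 / link_rate_mbps            # ~12 s
window_capacity_gb = link_rate_mbps * window_s / 8 / 1000   # ~39 GB
backlog_gb = 36                 # assumed queue since last ground contact

print(f"One granule: {granule_time_s:.1f} s")
print(f"Window capacity: {window_capacity_gb:.1f} GB")
print(f"Backlog drains this pass: {backlog_gb <= window_capacity_gb}")
```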
Higher-resolution commercial satellites face a sharper version of this problem. A satellite imaging at 30-centimetre resolution generates vastly more data per square kilometre than Sentinel-2 at 10 metres. Planet's combined constellation generates over 30 terabytes per day. The downlink becomes the funnel through which all that data must pass.
Optical downlinks — using laser communication rather than radio frequency — promise to break this bottleneck. Laser links can achieve data rates of 1 Gbps and beyond, compared to the hundreds of Mbps typical of RF X-band links. The EDRS laser links to Sentinel satellites operate at 1.8 Gbps. Experimental systems have demonstrated multi-terabit rates.
The catch is that laser links require clear skies. Unlike RF signals, which pass through clouds with manageable attenuation, optical beams are blocked by cloud cover. This means optical ground stations must be sited in low-cloud locations, and a network of geographically diverse stations is needed to ensure availability. A single optical ground station in a cloudy location is less reliable than an RF station anywhere. Cloud cover introduces a probabilistic element to what was previously a deterministic link budget.
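This availability question has a simple first-order model. The sketch below assumes (unrealistically, but usefully) independent cloud statistics at each site; the probabilities are illustrative, not real climatology.

```python
# First-order availability model for an optical ground station network,
# assuming independent cloud cover per site (a strong simplification).
# Probabilities are illustrative, not real site data.
cloud_free_prob = {
    "desert_site": 0.80,
    "coastal_site": 0.55,
    "mountain_site": 0.65,
}

p_all_blocked = 1.0
for p_clear in cloud_free_prob.values():
    p_all_blocked *= 1.0 - p_clear

print(f"P(at least one station clear): {1.0 - p_all_blocked:.2f}")  # ~0.97
```

Even a mediocre third site raises network availability substantially, which is why optical downlink architectures lean on geographic diversity rather than any single well-sited station.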
Ground Ingest: The First Queue
When the data arrives at the ground station, it enters the terrestrial processing chain. And here the delays become engineering decisions rather than physics constraints.
The raw telemetry stream from the satellite must be decommutated — separated into its constituent data packets, stripped of framing and error-correction overhead, and assembled into coherent data products. For a well-engineered ground system, this takes seconds. For a legacy system designed when data volumes were smaller and real-time processing was not a priority, it can take minutes.
The decommutated data then enters an ingest pipeline. The specifics vary by mission, but the general pattern is: verify data integrity, catalogue the acquisition in the mission database, store the raw data, and trigger downstream processing. Each of these steps can introduce delay, particularly if the pipeline is batch-oriented rather than stream-oriented.
Batch processing was the norm for decades because it was simpler and sufficient. If your satellite downlinks an image three hours after capture and your users expect delivery within 24 hours, there is no urgency in processing each granule the instant it arrives. You collect a batch, run the pipeline overnight, and publish results in the morning.
That architecture is incompatible with near-real-time requirements. Stream-oriented ground systems — which process data continuously as it arrives rather than waiting for a batch — can reduce ingest latency from hours to seconds. ESA's Copernicus Ground Segment has progressively moved toward this model, and the Copernicus Data Space Ecosystem now delivers many Sentinel products within hours of acquisition. But "within hours" is still not "within minutes," and the gap is mostly pipeline architecture, not physics.
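The structural difference between the two models is small in code but large in latency. A minimal sketch of the stream-oriented pattern, with stub functions standing in for real ground-segment steps (none of these names belong to any particular mission's API):

```python
# Stream-oriented ingest: each granule triggers the chain the moment it lands,
# instead of waiting for a nightly batch. All functions are illustrative stubs.
import queue
import threading

granule_queue: "queue.Queue[str]" = queue.Queue()

def verify_integrity(gid: str) -> None: print(f"verified {gid}")
def catalogue(gid: str) -> None: print(f"catalogued {gid}")
def trigger_processing(gid: str) -> None: print(f"processing {gid}")

def ingest_worker() -> None:
    while True:
        gid = granule_queue.get()     # blocks until the next granule arrives
        verify_integrity(gid)
        catalogue(gid)
        trigger_processing(gid)       # downstream chain starts immediately
        granule_queue.task_done()

threading.Thread(target=ingest_worker, daemon=True).start()

# The downlink receiver calls this as each granule is assembled:
def on_granule_assembled(gid: str) -> None:
    granule_queue.put(gid)
```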
Processing: Where Hours Disappear
For Earth observation data, raw downlinked imagery is almost never the final product. It must be processed through a series of corrections before it becomes useful.
Radiometric calibration converts raw sensor counts into physically meaningful values — typically top-of-atmosphere radiance or reflectance. This requires calibration coefficients that are periodically updated as the sensor degrades in orbit.
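As a sketch of the arithmetic involved, here is the standard conversion from raw counts to top-of-atmosphere reflectance. The gain, offset, and solar irradiance values are placeholders; in practice they come from those periodically updated calibration files.

```python
import numpy as np

def toa_reflectance(dn, gain, offset, esun, sun_elev_deg, d_au=1.0):
    """Radiometric calibration sketch: raw counts -> TOA reflectance.

    L   = gain * DN + offset                      (at-sensor radiance)
    rho = pi * L * d^2 / (ESUN * cos(theta_s))    (TOA reflectance)
    """
    radiance = gain * dn.astype(np.float64) + offset
    sun_zenith = np.radians(90.0 - sun_elev_deg)
    return np.pi * radiance * d_au ** 2 / (esun * np.cos(sun_zenith))

# Toy 2x2 patch of raw counts; coefficients are placeholders, not any
# mission's real calibration values.
dn = np.array([[812, 794], [805, 1203]], dtype=np.uint16)
print(toa_reflectance(dn, gain=0.01, offset=-0.1, esun=1566.0, sun_elev_deg=42.0))
```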
Geometric correction ensures that pixels map to the correct locations on the Earth's surface. Raw imagery is distorted by the satellite's viewing angle, the Earth's curvature, terrain relief, and the sensor's scan geometry. Correcting this requires a digital elevation model, precise knowledge of the satellite's position and attitude at the time of acquisition (derived from GPS and star tracker data), and a mathematical model of the sensor's optics.
Atmospheric correction removes the distortion introduced by the atmosphere between the surface and the sensor. Atmospheric aerosols, water vapour, and gases scatter and absorb electromagnetic radiation, altering the spectral signature that reaches the sensor. Correction requires an atmospheric model and sometimes ancillary data like meteorological observations or aerosol measurements. The accuracy of the correction depends on the quality of the ancillary data, which may itself have latency — you cannot correct for the atmosphere at the time of acquisition if the atmospheric measurements from that moment have not yet been processed and distributed.
Cloud masking identifies and flags pixels contaminated by cloud, cloud shadow, or haze. This is critical because a cloud-contaminated pixel that is not flagged will produce nonsensical results in any downstream analysis. Cloud masking algorithms range from simple threshold-based methods to machine learning classifiers, and their accuracy directly affects the usability of the final product.
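At the simple end of that range, a threshold mask can be a few lines. The thresholds below are illustrative only; operational maskers such as Fmask or s2cloudless are far more sophisticated.

```python
import numpy as np

def simple_cloud_mask(blue, green, red,
                      brightness_thresh=0.3, flatness_thresh=0.1):
    """Flag pixels that are bright in all visible bands and spectrally flat."""
    stack = np.stack([blue, green, red])
    bright = stack.min(axis=0) > brightness_thresh                    # bright
    flat = (stack.max(axis=0) - stack.min(axis=0)) < flatness_thresh  # "white"
    return bright & flat

# Toy reflectances: one cloudy pixel (bright, flat) and one vegetated pixel.
blue  = np.array([[0.45, 0.04]])
green = np.array([[0.47, 0.08]])
red   = np.array([[0.46, 0.05]])
print(simple_cloud_mask(blue, green, red))   # [[ True False]]
```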
Each of these steps takes computational time. On modern hardware, each step might take seconds to minutes for a single granule. But the steps are sequential — you cannot atmospherically correct an image that has not been radiometrically calibrated and geometrically corrected — and the total time through the chain accumulates.
For a typical Sentinel-2 Level-2A product (surface reflectance with atmospheric correction), ESA's processing chain runs from raw Level-0 data to the deliverable product with a nominal target of 3 to 24 hours after acquisition, depending on the processing tier and priority. For emergency response under the Copernicus Emergency Management Service, expedited processing can achieve delivery within about 2 hours.
Commercial providers optimise for speed by running leaner processing chains, sometimes trading accuracy for latency. A rapid-delivery product might skip full atmospheric correction and apply only a fast approximation, on the reasoning that an approximately corrected image delivered in 15 minutes is more valuable than a precisely corrected image delivered in 6 hours.
This is a genuine trade-off, not a failure of engineering. The processing chain embeds a decision about what "good enough" means, and that decision depends entirely on the application.
Format Conversion and Tiling: The Invisible Tax
After processing, the data must be packaged for delivery. This involves converting internal processing formats to distribution formats (typically GeoTIFF or Cloud-Optimized GeoTIFF), tiling large acquisitions into manageable pieces, generating metadata files, creating browse images, and registering the products in a catalogue.
None of these steps is individually expensive. Together, they can add minutes to the pipeline. More importantly, they are often where batch-oriented thinking creeps back in: process the imagery, then tile it, then generate metadata, then catalogue it, then make it available. Each "then" is a queue.
Cloud-Optimized GeoTIFF (COG) and the SpatioTemporal Asset Catalog (STAC) specification have reduced this overhead by making the output format and catalogue structure amenable to streaming workflows. A COG can be generated as the processing completes, and STAC metadata can be published as the COG is written. But adoption is uneven, and many operational systems still run sequential pipelines that were designed when the format conversion step was an afterthought at the end of a batch.
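A streaming version of this step can be compact. The sketch below, assuming the rio-cogeo and pystac packages, writes the COG and registers the STAC item in one pass; the paths, IDs, and omitted footprint geometry are illustrative.

```python
# Inline COG + STAC publication, assuming the rio-cogeo and pystac packages.
import datetime

import pystac
from rio_cogeo.cogeo import cog_translate
from rio_cogeo.profiles import cog_profiles

def publish(granule_id: str, src_tif: str, catalog: pystac.Catalog) -> None:
    cog_path = f"/data/cogs/{granule_id}.tif"
    cog_translate(src_tif, cog_path, cog_profiles.get("deflate"))  # write COG

    item = pystac.Item(
        id=granule_id,
        geometry=None,            # fill with the granule footprint GeoJSON
        bbox=None,
        datetime=datetime.datetime.now(datetime.timezone.utc),
        properties={},
    )
    item.add_asset(
        "data", pystac.Asset(href=cog_path, media_type=pystac.MediaType.COG)
    )
    catalog.add_item(item)        # discoverable as soon as this returns
```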
Delivery: The Last Mile
The final latency component is getting the processed product from the provider's infrastructure to the user.
For users accessing data through cloud-hosted archives — AWS, Google Earth Engine, Microsoft Planetary Computer, Copernicus Data Space Ecosystem — delivery latency is essentially zero once the product is catalogued. The data is already in the cloud, and the user queries an API and gets a URL.
For users who need data pushed to them — operational disaster response teams, military consumers, automated monitoring systems — delivery depends on the notification and distribution mechanism. A push notification via webhook is near-instantaneous. An email alert is seconds to minutes. A manual check of a catalogue portal is whenever the analyst remembers to look.
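The push mechanism itself is trivial engineering. A minimal webhook notification, with an illustrative endpoint and payload schema:

```python
# Push notification sketch using only the standard library. The endpoint URL
# and payload fields are illustrative, not any provider's real API.
import json
import urllib.request

def notify_subscriber(product_id: str, download_url: str,
                      webhook_url: str = "https://example.com/hooks/new-product") -> int:
    payload = json.dumps({"product_id": product_id, "url": download_url}).encode()
    req = urllib.request.Request(
        webhook_url,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=5) as resp:
        return resp.status        # 2xx: the subscriber was told within seconds
```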
The irony is that the final link, which is entirely within engineering control and involves no physics constraints, is often where operational latency is longest, simply because it depends on human workflow. An image that was acquired, downlinked, processed, and catalogued within two hours sits in an archive until an analyst runs a query six hours later. The pipeline did its job. The latency was in the analyst's chair.
Where Latency Hides: A Summary
If you trace a single observation from sensor to decision, the latency budget looks roughly like this for a typical EO mission:
Propagation (satellite to ground): milliseconds. Irrelevant.
Contact gap (waiting for ground station): 0 minutes to several hours. The largest variable component. Dominated by orbital geometry and ground station network density.
Downlink transfer: seconds to minutes. A function of data volume and link bandwidth. Increasingly solved by higher-rate links and optical communications.
Ground ingest and decommutation: seconds to minutes. Depends on system architecture — stream vs. batch.
Processing (radiometric, geometric, atmospheric correction, cloud masking): minutes to hours. The core trade-off between accuracy and speed.
Format conversion, tiling, cataloguing: minutes. An engineering choice, not a physics constraint.
Delivery: seconds to hours. Depends entirely on whether the user is pulling from a cloud archive or waiting for a push notification.
Human action (query, download, analyse, decide): minutes to days. Often the dominant term.
The total, for a non-expedited Sentinel-2 product, is typically 3 to 24 hours from acquisition to catalogue availability. For expedited commercial products, it can be under an hour. For tasked emergency response with priority processing, under 30 minutes is achievable.
For a manually-driven GIS workflow where an analyst downloads data, loads it into desktop software, applies additional corrections, and produces a map — add hours to days on top of whatever the pipeline delivered.
The Architecture Implication
Every component of latency between the contact gap and human action is an engineering choice. Batch pipelines can be replaced with streaming architectures. Sequential processing steps can be parallelised where dependencies allow. Format conversion can happen inline rather than as a post-processing step. Cataloguing can be event-driven rather than scheduled.
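The last point generalises: even where per-granule steps must stay sequential, granules can flow through the chain concurrently. A sketch with stand-in step functions:

```python
# Granule-level parallelism: per-granule steps stay in dependency order, but
# granules move through the chain concurrently. Step functions are stand-ins
# for the real corrections, not any mission's actual processors.
from concurrent.futures import ThreadPoolExecutor

def radiometric(g: str) -> str: return g + ":cal"
def geometric(g: str) -> str: return g + ":geo"
def atmospheric(g: str) -> str: return g + ":atm"
def cloud_mask(g: str) -> str: return g + ":mask"

def process_granule(granule_id: str) -> str:
    calibrated = radiometric(granule_id)   # must run first
    geocoded = geometric(calibrated)       # needs calibrated input
    corrected = atmospheric(geocoded)      # needs geocoded input
    return cloud_mask(corrected)           # final step

with ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(process_granule,
                            (f"granule-{i:02d}" for i in range(32))))
print(len(results))
```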
The reason most systems do not minimise latency is that they were not designed to. They were designed to maximise throughput — to process the largest volume of data per day at the lowest cost per granule. Throughput optimisation and latency optimisation are different problems with different architectures, and the EO industry has historically prioritised throughput because its dominant use case was retrospective analysis, not real-time operations.
That is changing. As the application landscape shifts toward operational monitoring — wildfire detection, maritime surveillance, infrastructure change detection, crop stress alerts — latency is moving from a nice-to-have to a requirement. The satellites are fast enough. The sensors are good enough. The ground segment is where the time goes, and the ground segment is where the time can be recovered.