Friday, January 30, 2026

What Are the Past, Present, and Future of Space Traffic?

The Birth of the Watchers

On October 4, 1957, history changed with a single sound: a simple, repetitive “beep-beep-beep” transmitted from low Earth orbit. The Soviet Union had successfully launched Sputnik I, the world’s first artificial satellite. This 184-pound polished metal sphere was a triumph of engineering. It was also, in an instant, the birth of a new and permanent problem. With one object in orbit, the field of space surveillance was born. The need to know “what’s up there” became a scientific, and military, imperative.

The “beeps” from Sputnik were, by design, easy to track. They were picked up by amateur radio operators around the globe, as well as by professional scientific facilities like the Jodrell Bank Observatory in the United Kingdom and Canada’s Newbrook Observatory, which was the first in North America to photograph the satellite. This represented the first, scientific impulse of space surveillance: tracking for the sake of knowledge, discovery, and communication.

In the United States, the launch was perceived as a significant strategic threat. The same R-7 rocket that lofted Sputnik into orbit was perfectly capable of carrying a nuclear warhead from one continent to another. This event “fueled both the space race and the arms race,” creating a wave of public and political anxiety over a perceived “missile gap” with the Soviets. This immediately and permanently linked the act of tracking objects in space with the urgent need for missile warning.

This dual reaction created a fundamental split in the very DNA of space surveillance, a “split personality” that defines the entire problem of space traffic to this day. From the very first moment, two different groups were tracking objects for two opposite reasons.

One-half of this new field was civilian and scientific. The United States had already been developing tracking assets as part of the International Geophysical Year (IGY), a global scientific collaboration. The first assets developed specifically for space surveillance were the large Baker-Nunn cameras, built by the Smithsonian Astrophysical Observatory (SAO) to track and photograph the U.S. scientific satellites scheduled for launch. The goal of this tracking was discovery, which requires openness and the sharing of data.

The other half was military and secret. The primary U.S. military response to Sputnik was the rapid formation of the North American Aerospace Defense Command (NORAD), a bi-national organization between the United States and Canada, on May 12, 1958. NORAD’s original mission was “aerospace warning” and “aerospace control” for North America. Its initial focus was on threats you could see, like Soviet bombers or Intercontinental Ballistic Missiles (ICBMs).

But the mission expanded almost immediately to include the “monitoring of man-made objects in space”. To accomplish this, NORAD, from its famous command center built deep inside the Cheyenne Mountain Complex in Colorado, became the central collection facility for a “worldwide system of sensors” designed to provide an accurate picture of any aerospace threat. In 1981, this evolution was formalized when NORAD’s mission officially changed from “air defense” to “aerospace defense,” a subtle but significant change that recognized space as a permanent new domain of operations.

Canada was a foundational partner in this military effort from the beginning. Beyond the high-level agreement, Canada was operationally integrated. In 1961, a U.S. Air Force Baker-Nunn camera was transferred to the Royal Canadian Air Force (RCAF) and installed at Cold Lake, Alberta. This cemented Canada’s role in the continental space surveillance mission.

This built-in conflict – open safety versus classified security – was embedded in the foundation of space surveillance. The world’s scientific and, later, commercial operators, would come to rely on a system of tracking provided by a military organization that was, by its very nature, secretive. This original divergence is the seed of the entire governance stalemate that paralyzes modern Space Traffic Management.

An Invisible Junkyard: The Kessler Syndrome

The “Watchers” were busy tracking the new, active satellites being launched in the 1960s and 70s. But every single launch created something else, something that was almost entirely ignored: garbage.

The launch of Sputnik I didn’t just create the first satellite; it also created the world’s first piece of “space debris” – the spent upper stage of its R-7 rocket, which was also in orbit. Every subsequent mission left behind its own trail: spent rocket bodies, bolts, straps, lens covers, and fragments from mysterious on-orbit explosions.

For the first two decades of the Space Age, this orbital junk was considered a non-issue. It was widely assumed that the “big sky” was infinitely big and that any debris would eventually be pulled down by the Earth’s upper atmosphere and burn up on reentry.

This assumption was tragically incorrect.

The orbital environment was becoming “increasingly crowded and complex”. The first “serious” satellite fragmentation event occurred in June 1961. This single breakup instantly increased the total cataloged population of space objects by over 400 percent. The problem wasn’t just litter; the litter was exploding.

The problem was ignored until 1978, when a NASA scientist named Donald J. Kessler, along with his colleague Burton Cour-Palais, published a landmark paper: “Collision frequency of artificial satellites: The creation of a debris belt”. This paper introduced a terrifying, logical, and unavoidable prophecy that would come to be known as the Kessler Syndrome.

The theory describes a “self-perpetuating cascade” of collisions. Kessler’s logic was simple and devastating. He proposed that as the density of objects in Low Earth Orbit (LEO) increases, a “threshold” will eventually be reached. Once that threshold is crossed, the primary source of new debris will no longer be new launches from Earth. The primary source will be the collisions between the objects already in orbit.

It works like this:

  1. Two objects, perhaps a dead satellite and an old rocket body, collide.
  2. This collision, occurring at “hypervelocity,” generates a cloud of thousands of new fragments.
  3. This new cloud of fragments dramatically increases the probability of more collisions.
  4. These new collisions create even more fragments, which in turn cause more collisions.

This is the runaway chain reaction. It’s a self-perpetuating cycle that, once started, could “in 30 to 40 years” (as Kessler predicted in 1978) render certain orbital altitudes completely unusable for generations.
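For the technically inclined, the cascade logic can be sketched as a toy simulation. This is purely illustrative: the starting count, collision coefficient, fragment yield, and decay rate below are invented round numbers, not real orbital data. The point is only that a collision term proportional to the square of the object count eventually outruns any steady launch rate.

```python
# Toy Kessler-cascade model (all parameters are illustrative, not real).
# Collisions scale with the SQUARE of the object count, which is what
# makes the growth self-perpetuating once orbital density is high.

def simulate(years, n0=2000.0, launches_per_year=100.0,
             collision_coeff=1e-9, frags_per_collision=1000.0,
             decay_rate=0.005):
    """Return yearly object counts under a crude debris-growth model."""
    n = n0
    history = [n]
    for _ in range(years):
        collisions = collision_coeff * n * n   # pairwise encounters ~ N^2
        n += launches_per_year                 # new objects from launches
        n += collisions * frags_per_collision  # fragments from collisions
        n -= decay_rate * n                    # atmospheric-drag removal
        history.append(n)
    return history

h = simulate(60)
# Early on, launches dominate the growth; as N rises, the N^2 collision
# term takes over -- and would keep growing even if launches stopped.
```

Running the model longer only steepens the curve, which is exactly Kessler's point: past the threshold, the debris population feeds itself.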

The key to the danger is “hypervelocity.” An object in LEO travels at roughly 7 to 8 kilometers per second – over 17,000 miles per hour – and two objects crossing paths can meet at relative speeds of up to 15 kilometers per second. At those speeds, kinetic energy is immense. A 1cm piece of debris – the size of a marble – can strike with the force of a bowling ball. A fleck of paint can chip a Space Shuttle window. A slightly larger piece can be catastrophic.
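The bowling-ball comparison is easy to verify with back-of-the-envelope physics. Assuming, for illustration, a 1cm solid aluminum sphere striking at 10 kilometers per second:

```python
import math

# Kinetic energy of hypervelocity debris (back-of-the-envelope;
# the solid-aluminum-sphere assumption is purely illustrative).
def kinetic_energy_j(mass_kg, speed_m_s):
    return 0.5 * mass_kg * speed_m_s ** 2

radius_m = 0.005                                   # 1 cm diameter sphere
mass = 2700 * (4 / 3) * math.pi * radius_m ** 3    # aluminum, ~1.4 grams

ke_debris = kinetic_energy_j(mass, 10_000)         # impact at 10 km/s

# Speed a 7 kg bowling ball would need to carry the same energy:
v_ball = math.sqrt(2 * ke_debris / 7)              # roughly 140 m/s
```

A 1.4-gram marble at orbital closing speed carries about the same energy as a bowling ball moving at over 300 miles per hour.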

Kessler’s 1978 paper was a paradigm shift. It fundamentally changed the definition of space debris. It was no longer a passive “litter” problem, where the solution was simply to “stop littering.” Kessler’s insight was that the debris population had become its own engine of creation.

This realization was a terrifying inflection point. It meant that even if humanity never launched another rocket, the debris population in certain crowded orbits could continue to grow all by itself. This is the “why” behind all modern efforts to clean up space. It’s not just housekeeping; it’s an active environmental intervention, like cleaning up a chemical spill before it contaminates the entire well.

The impact of the theory was immediate. Just one year later, in 1979, NASA established the Orbital Debris Program Office at Johnson Space Center and made Donald Kessler its head. The term “Kessler Syndrome,” first coined by fellow analyst John Gabbard, entered the public consciousness and has defined the existential threat of our own orbital junkyard ever since.

What is Space Situational Awareness?

As the debris problem grew from a few rocket bodies to a complex, self-replicating environmental threat, the simple act of “tracking” was no longer enough. A new, more holistic discipline was needed: Space Situational Awareness, or SSA.

At its simplest, SSA is the “knowledge of the space domain close to Earth”. It’s the ability to answer the question posed by industry experts: “Do we know where things are in space?”. More formally, SSA is the comprehensive knowledge, characterization, and practice of tracking space objects and understanding their operational environment. It is the absolute foundation for all space safety and coordination activities.

SSA is not one single thing. It’s a process, a pipeline of data analysis that is best understood by breaking it down into its core functions.

Surveillance and Tracking

These two terms are often used interchangeably, but they are distinct and sequential.

Surveillance is the first step. It is the act of “surveying” the sky, much like a security guard scanning a wide, dark parking lot. It’s about finding objects. This includes cataloging new launches, spotting objects that have just broken apart, or finding objects that were previously unknown or “lost.”

Tracking is the second step. Once a surveillance sensor “finds” an object, it must be “tracked”. This is the job of dedicated sensors that repeatedly observe a specific object. By taking multiple observations over time, analysts can determine its precise orbit and, most importantly, predict where it will be in the future. This predictive power is the basis for all collision avoidance.
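The predictive core of tracking can be shown in miniature. Operational systems fit full orbital elements from many observations and propagate them with models like SGP4; the two-body, circular-orbit sketch below is the simplest possible stand-in, just to show how a known orbit turns into a future position.

```python
import math

MU_EARTH = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6_378_137.0       # equatorial radius, m

# Minimal "tracking" sketch: from a circular-orbit altitude, predict
# where along its orbit an object will be at a future time. Real
# systems use far richer models (e.g. SGP4 on fitted element sets).

def orbital_period_s(altitude_m):
    a = R_EARTH + altitude_m
    return 2 * math.pi * math.sqrt(a ** 3 / MU_EARTH)

def predict_angle_deg(altitude_m, elapsed_s, angle0_deg=0.0):
    """Along-track angle after elapsed_s seconds, circular orbit."""
    frac = elapsed_s / orbital_period_s(altitude_m)
    return (angle0_deg + 360.0 * frac) % 360.0

# An ISS-like orbit at ~420 km circles the Earth in roughly 93 minutes.
period_min = orbital_period_s(420_000) / 60
```

The same logic, run forward days or weeks, is what lets analysts say two objects will occupy nearly the same point in space at nearly the same time.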

Characterization

This is the next level of awareness. In the 1960s, it was enough to know where a “dot” was in the sky. In the modern era, you must know what that dot is.

Characterization is the art and science of determining an object’s physical and behavioral properties.

  • Physical Characterization: What is its size and shape? Is it a massive, 10-ton derelict rocket body? Is it a small, dense 1kg fragment? Is it a 500kg, intact satellite with large, reflective solar panels? Each object has a different “signature” that affects how it moves and how dangerous it is.
  • Behavioral Characterization: What is it doing? Is the object just passively tumbling, a “dead” piece of junk following the predictable laws of orbital mechanics? Or is it maneuvering? Is it adjusting its orbit? If so, why? This analysis separates a piece of debris from an active satellite, and a “friendly” satellite from a potential threat.

Data Fusion

This is the “secret sauce” that makes modern SSA possible. No single sensor, or even sensor type, can see everything, 24/7. The SSA system is a “network of networks,” and its data is messy.

  • An optical telescope is excellent for tracking objects in deep, geostationary orbit. But it’s useless in broad daylight, cloudy weather, or bad atmospheric conditions.
  • A ground-based radar is a powerful workhorse. It can “see” day or night, through clouds, and is great at tracking objects in LEO. But it’s expensive and can have trouble distinguishing between two objects that are close together.
  • A Radio Frequency (RF) sensor doesn’t “see” at all; it “listens.” It can only detect active satellites that are transmitting signals. But in doing so, it can provide their precise location and help “characterize” their mission.

Data fusion is the process of taking all this “disparate,” incomplete, and sometimes conflicting data from every available sensor – ground-based, space-based, optical, radar, and RF – and “fusing” it all into one single, reliable, high-confidence “truth”. This fused data product is what allows an analyst to build a complete catalog and confidently issue a collision warning.
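The simplest form of this fusion is textbook inverse-variance weighting: trust each sensor in proportion to its precision. Operational SSA fusion uses full covariance filters (Kalman-style) over many dimensions; the scalar sketch below, with invented sensor accuracies, just shows the principle.

```python
# Inverse-variance fusion of two independent measurements -- the
# simplest version of combining "disparate" data into one estimate.
# (Real systems use full-covariance Kalman-style filters.)

def fuse(estimate_a, var_a, estimate_b, var_b):
    """Combine two estimates, weighting each by its inverse variance."""
    w_a = 1.0 / var_a
    w_b = 1.0 / var_b
    fused = (w_a * estimate_a + w_b * estimate_b) / (w_a + w_b)
    fused_var = 1.0 / (w_a + w_b)
    return fused, fused_var

# A radar fix (good: +/- 50 m) and an optical fix (coarse: +/- 500 m)
# of the same along-track position, in meters (illustrative values):
pos, var = fuse(7_000_120.0, 50.0**2, 7_000_900.0, 500.0**2)
# The fused answer lands close to the more precise radar measurement,
# and its variance is smaller than either input's alone.
```

That last property is the payoff: every sensor added, however imperfect, tightens the combined estimate.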

From SSA to SDA: A Military Distinction

This is where that original “split personality” of space surveillance reappears. Civilian and commercial satellite operators are interested in SSA for safety – preventing collisions with debris.

The military is interested in Space Domain Awareness (SDA), which is a related, but much broader, concept. SSA is considered a foundational component of SDA.

  • SSA is the “what and where.” It’s the technical practice of tracking and cataloging.
  • SDA is the “what, where, and why.” It’s defined as “the ability of decisionmakers to understand… their current and predicted operational environments”. It’s about intent.

The “characterization” component is the bridge between the two. When an analyst on the military side “fuses” an optical track (where an object is) with a “listened” RF signal (what it’s broadcasting) and a behavioral analysis (it just maneuvered to get suspiciously close to our national security satellite), they have crossed the line from SSA into SDA.

They are no longer just tracking a dot. They are analyzing its mission and intent. This is why the military is the world’s primary provider of SSA data: the same tools and sensors they use to prevent collisions with debris are the primary tools they use to identify and monitor military threats from adversaries.

The Global Sensor Network

The work of Space Situational Awareness is performed by a vast, global network of sensors. For over 60 years, that network has been almost exclusively owned and operated by the United States military, which has served as the de facto space traffic cop for the entire planet.

The Backbone: The U.S. Space Force

The core of the global SSA mission is run by the U.S. Space Force (USSF), the newest branch of the U.S. armed forces. The responsibility for this 24/7, no-fail mission falls to Space Operations Command (SpOC).

Within SpOC, the mission is handled by Mission Delta 2, which is dedicated to Space Domain Awareness. This Delta is further divided into two key squadrons that form the heart of the world’s space safety system:

  1. The 18th Space Defense Squadron (18 SDS): Based at Vandenberg Space Force Base in California, the 18 SDS is the operational workhorse. Its Guardians are responsible for the “detection, tracking, and identification” of all artificial objects in Earth orbit. They are the “catalog keepers,” maintaining the official U.S. space object catalog, which is shared with the public, allied nations, and commercial operators via the website space-track.org.
  2. The 19th Space Defense Squadron (19 SDS): Based in Dahlgren, Virginia, the 19 SDS are the “collision predictors.” They take the catalog from the 18 SDS and run “conjunction assessment” analyses, predicting all potential close approaches. When they spot a high-risk event, they are the ones who provide the “orbital safety activities,” sending warnings to satellite operators – from NASA to commercial companies – so they can maneuver to safety.
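The geometric core of a conjunction assessment can be sketched directly. Over a short screening window, relative motion is nearly a straight line, so the time of closest approach has a closed-form answer. The state vectors below are invented for illustration; operational screening propagates full orbits and computes a probability of collision from the miss distance and both objects' position uncertainties.

```python
import math

# Linearized conjunction screening: given two objects' positions and
# velocities at a common epoch (straight-line motion over a short
# window), find the time of closest approach and the miss distance.

def closest_approach(p1, v1, p2, v2):
    """Return (t_ca_seconds, miss_distance_m) assuming linear motion."""
    dp = [a - b for a, b in zip(p1, p2)]   # relative position
    dv = [a - b for a, b in zip(v1, v2)]   # relative velocity
    dv2 = sum(c * c for c in dv)
    if dv2 == 0.0:
        return 0.0, math.sqrt(sum(c * c for c in dp))
    t = -sum(a * b for a, b in zip(dp, dv)) / dv2   # minimizes |dp + t*dv|
    closest = [a + t * b for a, b in zip(dp, dv)]
    return t, math.sqrt(sum(c * c for c in closest))

# Two objects 100 km apart, closing nearly head-on at 14 km/s:
t_ca, miss = closest_approach(
    (0.0, 0.0, 0.0), (7_000.0, 0.0, 0.0),
    (100_000.0, 400.0, 0.0), (-7_000.0, 0.0, 0.0))
# Closest approach comes about 7 seconds later, at a 400 m miss
# distance -- close enough to warrant a warning in practice.
```

At 14 kilometers per second of closing speed, the entire encounter, from 100 kilometers out to closest approach, lasts seven seconds.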

This entire system is powered by the Space Surveillance Network (SSN), a worldwide array of more than 29 dedicated and contributing sensors, including powerful ground-based radars, optical telescopes, and other listening posts scattered across the globe.

Upgrading the “Eyes”

That network is in a constant state of evolution, replacing Cold War-era technology with modern-day capabilities.

The Space Fence: The most significant upgrade in decades is the “Space Fence,” a state-of-the-art S-band radar system declared operational in March 2020. Located on Kwajalein Atoll in the Marshall Islands, its primary leap in capability is its sensitivity. The old network was largely limited to tracking objects 10cm (softball-sized) or larger. The Space Fence can track objects below this 10cm limit, deep into the “lethal untrackable” population. This has given the 18 SDS an unprecedented new view of the orbital environment, but it has also “flooded” them with data, revealing thousands of new, previously invisible threats.

GBOSS: On the optical side, the U.S. Space Force is upgrading its Ground-based Electro-Optical Deep Space Surveillance (GEODSS) sites – which have been in operation since the 1980s – with a new system called the Ground-Based Optical Sensor System (GBOSS). Upgrades at sites in New Mexico and Maui are increasing the system’s sensitivity, search rate, and overall capacity, allowing Guardians to find and track more objects, especially in the high-value geostationary orbits.

Key International Partners

The United States is not entirely alone in this mission. International partnership is essential for a truly global picture.

Canada and NORAD: The most deeply integrated partner is Canada. This partnership is formalized through the bi-national NORAD command, which has been in place since 1958.

Canada’s key contribution is the Sapphire satellite. Launched in 2013, Sapphire is a 148kg (326-pound) satellite that is essentially a small telescope in space. It is significant because Canada is the only U.S. ally to own and operate its own dedicated military space surveillance satellite.

Sapphire’s mission is to act as a “contributing, deep space sensor” for the SSN. From its orbit, it tracks man-made objects up to 40,000 kilometers away, feeding its precise data directly into the U.S. SSN catalog. This partnership has been so successful that the Canadian Armed Forces are already planning its replacement, the “Surveillance of Space 2 (SofS 2)” project, to ensure this contribution continues.

Other Partners: Other nations play a vital role. The European Space Agency (ESA) operates its own sophisticated SSA program, which it calls Space Surveillance and Tracking (SST), to protect European assets. And nations like Australia are key partners, hosting and jointly operating critical SSN sensors on their soil, including a C-Band radar and the Space Surveillance Telescope, giving the network its necessary global coverage.

This global architecture reveals a fundamental, and perhaps unstable, truth about the space environment. The entire planet’s commercial and civil space industry – every communications company, every weather satellite, every scientific mission – relies on a public good (the 18 SDS catalog) that is provided as a byproduct of a U.S. national security mission.

This dependency means global space safety is tied to the U.S. military’s budget, priorities, and willingness to share data. It also implies the public catalog, while good, is not the best data available. The U.S. military’s own classified catalog, used for its internal Space Domain Awareness mission, is certainly more precise and timely.

This dependency is the primary driver behind the U.S. government’s recent effort to create a new, civil-led SSA system. It’s also why other major spacefaring powers, like Europe, are building their own independent sensor networks. They are seeking to end their dependency on a U.S. military-provided safety service.

The Orbital Environment Today

The picture painted by this global sensor network is sobering. The orbits around Earth, once thought to be a vast and empty void, are now more accurately described as a “limited natural resource”. And that resource is becoming dangerously polluted.

As of early 2025, the total mass of human-made objects in orbit exceeds 13,400 tonnes. This is more than the weight of the Eiffel Tower, all moving at hypervelocity.

That mass is broken down into a staggering number of individual objects. The best way to understand the threat is to “see” the environment as the experts do, by breaking the population down by size.

  • Tracked Objects (> 10 cm): These are objects “softball-sized” and larger. They are the population that the SSN can reliably find, track, and catalog. As of 2025, there are approximately 40,000 such objects being tracked.
    • Here is the most telling statistic: of those 40,000 tracked objects, only about 11,000 are active, operational satellites. The other ~30,000 objects are “space junk” – derelict satellites, spent rocket bodies, and large fragments from past breakups.
  • Lethal Untrackable Debris (1 cm to 10 cm): This is the “marble-sized” population. They are generally too small to be reliably tracked and cataloged, yet they are large enough to be catastrophically lethal to any satellite they hit. Statistical models from the European Space Agency (ESA) estimate there are over 1.2 million objects in this size range.
  • Damaging Debris (1 mm to 1 cm): This is the “grain of sand” population. They are not large enough to destroy a satellite, but they are not harmless. Traveling at hypervelocity, they can pit sensitive optical lenses, puncture unshielded components, and degrade solar panels, shortening a satellite’s operational life. Models estimate there are 140 million such objects.

The problem is accelerating. In the year 2024 alone, several major fragmentation events added at least 3,000 new tracked objects to the catalog.

A Hazard by Altitude

This debris is not evenly distributed. It is concentrated in the most useful and valuable orbital “regimes.”

  • Low Earth Orbit (LEO): This region, from the edge of the atmosphere up to about 2,000 km, is the most congested and dangerous place to be. It is the “commuter lane” of space. It’s home to the International Space Station, Earth observation satellites, spy satellites, and, most recently, the tens of thousands of new megaconstellation satellites. About 80% of all active satellites operate here.
  • Medium Earth Orbit (MEO): This is the region between LEO and GEO, from roughly 2,000 to 35,000 km. It is the home of the world’s most critical navigation constellations: the U.S. Global Positioning System (GPS), Russia’s GLONASS, and Europe’s Galileo. A collision in this region is a nightmare scenario. A debris cloud created here could lead to “fratricide” – a chain reaction that takes out other satellites in the same navigation constellation, potentially crippling a service the entire world depends on.
  • Geostationary Orbit (GEO): This is not so much a “region” as a “ring.” At a precise altitude of 35,786 km, a satellite’s orbit perfectly matches the 24-hour rotation of the Earth. From the ground, it appears to “hover” in a fixed spot in the sky. This makes it priceless real estate for communications, broadcast television, and strategic missile-warning satellites.
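That very specific GEO altitude falls straight out of Kepler's third law: a satellite whose orbital period matches Earth's rotation must sit at one particular semi-major axis. A few lines of Python confirm the figure:

```python
import math

MU_EARTH = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6_378_137.0       # equatorial radius, m
SIDEREAL_DAY_S = 86_164.1   # Earth's rotation period (sidereal day), s

# Kepler's third law, solved for the semi-major axis:
#   a = (mu * T^2 / (4 * pi^2)) ** (1/3)
a = (MU_EARTH * SIDEREAL_DAY_S ** 2 / (4 * math.pi ** 2)) ** (1 / 3)
geo_altitude_km = (a - R_EARTH) / 1000
# geo_altitude_km works out to about 35,786 km -- the GEO "ring."
```

Note the use of the sidereal day (23 h 56 m), not the 24-hour solar day: GEO satellites keep pace with the stars' view of a rotating Earth, not with the Sun.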

This hard data reveals a terrifying “lethality gap.” This gap is a blind spot that exists between our two primary defense strategies: tracking and shielding.

As we will see, a spacecraft like the International Space Station is heavily shielded… but that shielding is only effective against debris up to 1 cm in size. And our global tracking networks… are only reliable for objects larger than 10 cm.

This creates a massive, unprotected vulnerability. The 1.2 million “marble-sized” objects fall squarely into this gap. They are (1) too small to be reliably tracked and dodged, and (2) too large and energetic to be shielded against.

They are, in short, invisible, indefensible, and lethal bullets. The new Space Fence is designed to help shrink this gap, but the lethality gap remains one of the greatest unsolved technical risks in orbit today.

The Three Events That Defined the Debris Problem

How did the orbital environment get this bad? While the “slow burn” of accidental explosions and routine mission-related debris has been a steady contributor, the modern debris environment has been shaped by three distinct, man-made events. These events form a perfect narrative arc of the debris problem: an act of significant negligence, an unavoidable-but-predicted consequence, and a lesson learned.

Case Study 1: The 2007 Chinese ASAT Test

On January 11, 2007, the world’s space-faring nations were shocked by an act of deliberate and unprecedented orbital pollution. The People’s Republic of China conducted a “direct-ascent” anti-satellite (ASAT) weapon test. The weapon, a missile launched from the ground, struck and destroyed one of its own defunct weather satellites, the FengYun 1C.

The result was the single worst debris-generating event in human history.

The collision was a catastrophic hypervelocity impact. It created a massive, expanding cloud of junk. The U.S. Space Surveillance Network cataloged at least 2,087 new, trackable pieces of debris. The total number of lethal-but-untrackable fragments was far higher, with NASA estimating over 35,000 pieces larger than 1cm were created.

The impact on the LEO environment was immediate. The number of predicted close approaches (conjunctions) for all satellites in LEO jumped by over 37%.

But the true, unforgivable sin of the 2007 test was its altitude. The intercept occurred at 865 kilometers, a high, stable orbit where atmospheric drag is almost non-existent. The debris created that day isn’t going away. It has permanently polluted some of the most valuable orbital “highways.” NASA estimated that 30% of the larger debris from this one test would still be in orbit in 2035. Much of it will remain a threat for centuries.

Case Study 2: The 2009 Iridium-Kosmos Collision

If the 2007 test was an act of deliberate orbital vandalism, what happened two years later was the moment the “Kessler Syndrome” theory became terrifying fact.

On February 10, 2009, at 16:56 UTC, two large satellites passed above the Taymyr Peninsula in Siberia. They were at an altitude of 790 kilometers. And they were on a collision course.

  • Satellite 1: Iridium 33, a 560-kilogram (1,200 lb) active commercial communications satellite, part of the Iridium constellation. It was operational and doing its job.
  • Satellite 2: Kosmos 2251, a 950-kilogram (2,100 lb) derelict Russian military communications satellite. It had been out of service since 1995 and was no longer under any control.

They slammed into each other at a relative velocity of 11.7 kilometers per second (over 26,000 mph). This was the first-ever accidental, hypervelocity collision between two intact satellites.

The impact was the most severe accidental fragmentation event on record. Both satellites were obliterated, creating a cloud of over 1,800 new tracked debris fragments.

This event was the ultimate wake-up call. It proved that derelict satellites are not just passive junk; they are un-controlled, un-guided “time bombs.” It also proved that tracking, by itself, was not a solution. The operators at Iridium knew about the potential close approach. At the time, their constellation was already receiving 400 conjunction warnings per week. The risk was known, but the collision happened anyway, proving the urgent need for a more robust management system.

Case Study 3: The 2019 Indian “Mission Shakti” Test

This third event is significant not for what it broke, but for the lessons it demonstrated. On March 27, 2019, India became the fourth country (after the U.S., Russia, and China) to successfully test an ASAT weapon. “Mission Shakti” used an interceptor missile to destroy an Indian satellite, Microsat-R.

The international community, remembering the 2007 disaster, prepared for the worst. But the Indian government had learned from China’s mistake. By 2019, the global norms against creating space debris had become a powerful political force.

India deliberately planned the test to minimize the long-term debris.

  1. Low Altitude: They chose a target in a very low, high-drag orbit of approximately 282 kilometers.
  2. Public Messaging: Immediately following the test, the Indian government made a point to publicly state that the test was “done in the lower atmosphere to ensure that there is no space debris. Whatever debris that is generated will decay and fall back onto the earth within weeks”.

This was a new development: a nation treating its orbital “cleanliness” as a point of national pride and public diplomacy. And their claim was true. The debris from Mission Shakti reentered the atmosphere rapidly. The final cataloged piece of debris from the test decayed and burned up by June 2022.
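The contrast between the 2007 and 2019 tests comes down to atmospheric density. A crude single-scale-height model makes the point; the reference density and scale height below are rough illustrative values (real density varies strongly with solar activity, and the scale height is not constant):

```python
import math

# Why test altitude matters: a toy exponential model of the upper
# atmosphere. (rho0 and H are rough illustrative values; real density
# varies with solar activity and altitude-dependent scale height.)
RHO_200KM = 2.5e-10    # kg/m^3, approximate density at 200 km
SCALE_HEIGHT_KM = 60   # assumed constant scale height

def density(alt_km):
    return RHO_200KM * math.exp(-(alt_km - 200) / SCALE_HEIGHT_KM)

# Drag force -- and hence decay rate -- is proportional to density:
ratio = density(282) / density(865)
# The 2019 Indian test altitude (282 km) sees drag thousands of times
# stronger than the 2007 Chinese test altitude (865 km) -- which is why
# one debris cloud was gone within a few years and the other will
# linger for centuries.
```

Even in this oversimplified model, the lesson is stark: a few hundred kilometers of altitude is the difference between self-cleaning debris and a permanent hazard.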

These three events tell the story of the debris problem in three acts:

  1. Negligence (2007): China’s test, an act of significant disregard for the orbital commons, created a “permanent” debris field.
  2. Consequence (2009): The Iridium-Kosmos collision, the inevitable, accidental proof that Kessler’s theory was correct.
  3. Mitigation (2019): India’s test, which demonstrated that the international “soft law” and norms against debris had become strong enough that a nation had to conduct its test “responsibly” to avoid global condemnation.

Dodging Bullets on the International Space Station

The threat of this invisible junkyard is not abstract. For the astronauts living and working on the International Space Station (ISS), it is a clear and present danger.

The ISS, a $100+ billion scientific laboratory, is orbiting directly in the “commuter lane” of LEO, one of the most debris-congested regions. It has two main lines of defense against this constant barrage of high-speed threats.

The first line of defense is its “armor.” The habitable modules of the ISS are protected by Whipple shielding, a multi-layer buffer designed to break up and stop small projectiles. This shielding is a marvel of engineering, but it has a critical limit: it is only effective against MMOD (micrometeoroids and orbital debris) up to about 1 centimeter (0.4 inches) in diameter.

It provides no protection against the 1.2 million “marble-sized” objects or the 40,000 “softball-sized” (or larger) objects.

For these larger, trackable threats, the ISS has only one option: it must move.

This is the second line of defense: the Debris Avoidance Maneuver (DAM). The process is a model of international coordination. The 19th SDS, or another international partner like ESA, will detect a potential conjunction. If the predicted “miss distance” is too small and the probability of collision rises above a set threshold (e.g., 1-in-10,000), flight controllers on the ground will “fly” the entire 450-ton station out of the way. They do this by firing the thrusters on the station or on a docked Progress cargo vehicle, slightly raising or lowering its orbit to miss the incoming “bullet”.
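The physics of a Debris Avoidance Maneuver is surprisingly gentle. A small along-track burn slightly changes the orbit's semi-major axis, and that change compounds into a growing along-track offset on every subsequent revolution. The first-order circular-orbit formulas below are an illustrative sketch, not flight software:

```python
import math

MU_EARTH = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6_378_137.0       # equatorial radius, m

# First-order effect of a small tangential burn on a circular orbit:
# delta_a = 2 * a * delta_v / v, and the object then drifts along-track
# by roughly 3 * pi * delta_a per orbit relative to its old path.

def maneuver_effect(altitude_m, delta_v):
    a = R_EARTH + altitude_m
    v = math.sqrt(MU_EARTH / a)              # circular orbital speed
    delta_a = 2 * a * delta_v / v            # semi-major-axis change, m
    drift_per_orbit = 3 * math.pi * delta_a  # along-track shift per orbit
    return delta_a, drift_per_orbit

# An ISS-like orbit at ~420 km, with a gentle 0.5 m/s reboost:
da, drift = maneuver_effect(420_000, 0.5)
# The mean altitude changes by under a kilometer, but the station then
# pulls ahead (or falls behind) by several kilometers every orbit --
# far more than a typical predicted miss distance.
```

This is why a thruster firing of only a few minutes, hours before a conjunction, is enough to turn a razor-thin miss into a comfortable one.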

These maneuvers are not rare. They are a routine part of life in orbit. By the end of 2020, the ISS had performed over 26 of these maneuvers. Between 1999 and 2014, it had to move 19 times.

The threats come from all of human history’s space activity. The ISS has had to dodge debris from the 2009 Kosmos collision and has had close calls with fragments from the 2007 Chinese ASAT test. A recent example occurred on April 30, 2025. Flight controllers fired the thrusters on the docked Progress 91 vehicle for 3 minutes and 33 seconds to move the station, avoiding a fragment from a 2005 Chinese rocket. Without the maneuver, the fragment was predicted to pass within 0.4 miles of the station – a razor-thin margin in orbital terms.

Sometimes, the threat is detected too late. A piece of debris might be on a difficult-to-predict orbit, or it might be newly created. If a high-risk conjunction is spotted with too little time for the station to safely maneuver, flight controllers enact “shelter in place” procedures. The astronauts are instructed to close the hatches between the station’s modules and retreat to their “lifeboat” vehicles – the docked SpaceX Dragon or Russian Soyuz capsules – so they can make an emergency escape back to Earth if the station is struck and depressurized.

It’s tempting to look at the increasing number of ISS maneuvers and conclude, “space is getting more crowded.” While this is true, it misses a more subtle and important point.

The data shows that from roughly 2003 to 2008, the frequency of ISS maneuvers dramatically decreased, with only a single maneuver in a 5.5-year period. This wasn’t because the orbital environment was cleaner. It was because the conjunction assessment process – our ability to track and predict – got better, reducing the number of false alarms.

This means the increase in maneuvers we see today is a double-edged sword. It’s not just that there’s more junk; it’s that our better, more modern sensors, like the Space Fence, are finally seeing the junk that was there all along. We are, in effect, just now becoming aware of how many bullets we’ve been dodging for decades without even knowing it.

The New Space Race: Megaconstellations

The problem of space debris, which has been building for 60 years, is now being radically and exponentially accelerated by a new phenomenon: the “New Space” race. This new era is defined by the rise of commercial “megaconstellations” in Low Earth Orbit.

A Revolution in Scale

Led by private companies like SpaceX (with its Starlink constellation) and OneWeb, this new model for space is based on launching thousands of satellites, rather than a few exquisite ones, to provide global internet coverage.

The numbers involved are difficult to comprehend. The plans for these constellations will, by themselves, dwarf the entire population of satellites launched in human history. By the end of the 2020s, plans call for over 17,000 new satellites. In the following decade, that number could surpass 50,000.

This is more than twenty times the active satellite population of the 2010s. The “New Space” era is, in effect, building a new city in the sky, and it’s doing it in a decade.

A New Kind of Collision Risk

This new, unprecedented density of satellites creates a new kind of risk. This is no longer the Kessler Syndrome’s “random” collision problem. This is a traffic density problem.

The satellites in these constellations are designed to be “responsible.” They are equipped with their own propulsion systems and are designed to “auto-maneuver” to avoid collisions. They are also designed to de-orbit themselves at the end of their 5-year life, burning up in the atmosphere.

But what happens when one fails?

A single satellite that “goes dead” – due to a power failure, a propulsion-system failure, or a computer fault – instantly becomes a 500-pound piece of uncooperative debris. It’s a “dead car” stuck in the middle lane of a high-speed highway, and it can’t be moved. All the other active satellites must now dodge it.

The risk is not theoretical. Simulations have shown that for the Starlink constellation alone, there is a 70.2% probability of at least one collision within its own constellation during its operational lifetime. And the burden on the active satellites is already staggering.

In a six-month period in early 2024, SpaceX reported that its fleet of Starlink satellites had to perform 50,000 collision avoidance maneuvers. That is an average of roughly one maneuver every five minutes, 24 hours a day, 7 days a week.
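That rate is easy to sanity-check with a back-of-the-envelope calculation, assuming a six-month window of roughly 182 days:

```python
# Back-of-the-envelope check of the Starlink maneuver rate.
maneuvers = 50_000
days = 182                              # roughly six months
minutes = days * 24 * 60                # 262,080 minutes in the window
print(round(minutes / maneuvers, 1))    # → 5.2 minutes between maneuvers
```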

A Broken Coordination System

This new, machine-speed problem is being managed by an antiquated, human-speed system. When a U.S. Space Force tracker predicts a close approach between a Starlink satellite and a OneWeb satellite, how do the two companies coordinate who moves?

They email each other.

The current system for coordination between different satellite operators is described as “ad-hoc” and is conducted “over email exchanges”.

This system is, in the words of operators and experts, “neither sustainable nor efficient”. The risk of miscalculation is high. Operators often make maneuver decisions “independently”. This creates a nightmare scenario: a SpaceX operator, seeing a warning, decides to maneuver “down,” while the OneWeb operator, seeing the same warning, also decides to maneuver “down” – or, worse, moves “down” to avoid a different piece of debris – and the two satellites maneuver directly into each other.

The Altitude Problem and a New Crisis

Where these satellites are placed matters. Starlink operates in the “busiest zone” of LEO, at altitudes below 600 km. This is a high-drag environment, which is good for sustainability: a “dead” satellite here will naturally de-orbit in a few years.

OneWeb operates at a higher altitude, around 1,200 km. There is far less atmospheric drag there. A “dead” OneWeb satellite could remain a high-speed hazard for decades or longer.

And it’s not just a collision risk. These thousands of new objects are creating a new form of environmental pollution: light pollution.

  • For Optical Astronomy: The satellites’ reflective surfaces catch the sunlight, creating bright “streaks” in the long-exposure images taken by ground-based telescopes. Astronomers report their data is being ruined, frustrating decades of scientific work.
  • For Radio Astronomy: It’s also an invisible pollution. The satellites’ electronics, even when not actively transmitting, are “leaking” electromagnetic radiation – radio noise – across various frequencies. This “noise” is drowning out the faint, natural signals from the universe that radio astronomers are trying to “hear”.

The arrival of megaconstellations marks the most important shift in the history of this problem. They have fundamentally changed the orbital hazard from a passive debris problem to an active traffic management problem.

The Kessler Syndrome was about random collisions between derelict objects, a “roll of the dice” that might happen over decades. The new problem, as shown by the 50,000-maneuver statistic, is about non-random, high-frequency close approaches between thousands of active satellites.

The system is no longer random. It is a managed system that is running at its absolute limit. And the “ad-hoc email” method of coordination proves that the management is failing to scale. The problem is no longer just debris. It’s traffic.

The Future Solution: Space Traffic Management

If the problem has evolved from “debris” to “traffic,” the solution must evolve from “tracking” to “traffic management.” This is the concept of Space Traffic Management (STM).

What is STM?

If SSA is the “seeing” – the data, the tracking, the awareness – then STM is the “doing.” It is the action and coordination.

STM is defined as the “means and the rules” to access, conduct activities in, and return from outer space safely, sustainably, and securely. It is the future framework for organizing the orbital environment.

The concept is not new. The idea of “traffic rules for outer space” was discussed by jurists as early as the 1930s, and it was revived as a serious policy concept in 1982 by Dr. Lubos Perek. But it’s only in the “New Space” era, with the advent of megaconstellations, that STM has gone from a theoretical academic topic to an urgent, practical necessity.

The “Air Traffic Control for Space” Analogy

When most non-technical people hear “Space Traffic Management,” they immediately think of “Air Traffic Control (ATC) for space”. This analogy is the most common way to explain the concept.

Why the analogy is helpful: It perfectly captures the goal. We all understand what ATC does: it prevents collisions, organizes the flow of air traffic, and provides information to pilots. We also understand its history. In the 1930s, aviation was a “Wild West.” After a series of high-profile, deadly mid-air collisions, the U.S. federal government stepped in to create a unified, national ATC system. Space, with its recent near-misses and the Iridium-Kosmos collision, is at a very similar inflection point. The analogy sets the stage.

Why the analogy is wrong (and dangerously misleading): While the goal is the same, the method is impossible, and the space community “rejects the idea of management” in the ATC sense.

  1. Sovereignty: Air Traffic Control is a “positive control” system. A controller on the ground issues a mandatory instruction (“United 123, descend and maintain 5,000 feet”), and the pilot must obey. There is no such “controller” for space. There is no global, supranational authority with the legal power to order a sovereign Chinese military satellite, or a private U.S. company’s satellite, to move.
  2. Physics: The physics are completely different. An airplane can be “vectored.” It can be told to stop, slow down, speed up, or turn left right now. A satellite in orbit cannot. You can’t “stop” or “pull over.” Any maneuver – a thruster firing – must be planned hours or days in advance, and it doesn’t just change the satellite’s position; it changes its entire future orbit.
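The physics point can be made concrete with a rough calculation: even a tiny thruster burn permanently changes a satellite’s orbit. The sketch below assumes an ideal impulsive prograde burn on a circular orbit at ISS-like altitude and ignores drag; the 0.5 m/s figure is just a representative avoidance-burn size.

```python
# Rough sketch: how much a small prograde burn changes a circular LEO orbit.
# Idealized impulsive burn, no drag; numbers are approximate.
import math

MU = 3.986e14        # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6_371e3    # mean Earth radius, m

a = R_EARTH + 420e3            # semi-major axis of a ~420 km circular orbit, m
v = math.sqrt(MU / a)          # circular orbital speed, m/s
dv = 0.5                       # a modest 0.5 m/s avoidance burn

# From the orbital energy relation e = -mu/(2a), a small along-track
# speed change dv shifts the semi-major axis by da = 2 * a^2 * v * dv / mu.
da = 2 * a**2 * v * dv / MU
print(f"orbital speed: {v/1000:.2f} km/s")        # ~7.66 km/s
print(f"semi-major axis change: {da/1000:.1f} km")  # ~0.9 km
```

A half-meter-per-second nudge – a gentle push by any everyday standard – permanently shifts the orbit by nearly a kilometer, which is exactly why every maneuver must be planned and propagated in advance.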

The Real Goal: Coordination, Not Control

The true aim of STM is not “management” in the sense of “control,” but rather “coordination”.

The goal is to create a framework that replaces the “ad-hoc email” system. It’s about getting all the operators in the “sky” to agree on a single set of “rules of the road,” technical standards for data sharing, and a communication platform so they can de-conflict their paths efficiently.

This is why experts in the field insist that STM is, at its core, a governance and data-sharing challenge, not a technical one. The solution will not look like an airport control tower. It will look more like the “International Regulations for Preventing Collisions at Sea” (COLREGs) – a set of pre-agreed rules that all “captains” (satellite operators) understand and follow to avoid mutual destruction.

The Rise of the Commercial Watchers

This new STM framework cannot be built by the military. That original “split personality” – the conflict between a military’s need for secrecy and a safety system’s need for openness – makes the U.S. Space Force the wrong organization to lead a global, civil system.

The U.S. government has recognized this. In a major policy shift, the task of building the nation’s civil, public-facing SSA and STM capability has been moved from the Department of Defense to the Department of Commerce (DOC).

A New Public-Private Partnership

The Department of Commerce’s goal is not to build a new, government-run sensor network to replace the military’s. Its goal is far more innovative.

The DOC is building an “open architecture data repository”. This will be a cloud-based platform that takes the U.S. military’s SSN data as its foundation, and then fuses it with high-quality data from a new, booming commercial SSA industry.

In the last decade, a new ecosystem of “commercial watchers” has emerged. Companies like ExoAnalytic Solutions, LeoLabs, and AGI (now Ansys) have built their own private, global sensor networks (both optical and radar). They are developing advanced software and, in many cases, are more agile and innovative than the government. They are creating their own SSA “products” and selling them to satellite operators, insurance companies, and allied governments.

The new U.S. civil system will buy this commercial data, combine it with the military’s data, and provide a “best-of-breed” product to the public for free.

A New Policy Wrinkle

This new public-private model, while promising, creates a new and strange policy tension. The U.S. government is trying to foster this new commercial industry by becoming its biggest customer.

But, at the same time, the government has declared it provides its own core SSA/STM services for free, as a public good.

As one commercial SSA provider pointed out in testimony to Congress, the government’s free service “represents direct competition” with the very U.S. companies it is also trying to help. The government is now, simultaneously, the chief customer of, and chief competitor to, the commercial SSA market. Navigating this new relationship will be one of the DOC’s greatest challenges.

This bureaucratic shift in Washington – from the Department of Defense to the Department of Commerce – is far more than just moving boxes on an organizational chart. It is a sophisticated political move designed to solve the diplomatic stalemate.

A key reason a global STM system is stuck is that foreign nations (like China or Russia) and even commercial operators are understandably suspicious of a global “safety” system run by the U.S. military. They fear that “safety warnings” could be used to glean intelligence or restrict their own operations.

By shifting the public-facing, international-facing role to a civil agency – the Department of Commerce – the U.S. is “rebranding” STM as a neutral, civil safety service, much like the National Weather Service provides weather forecasting for the world. This makes it far more politically palatable for international partners and commercial companies to “buy in” and – most importantly – to share their own data. This data-sharing is the only way a global system can work.

It is, in short, a geopolitical strategy: using a domestic, bureaucratic shift to enable a new, global governance framework that the Pentagon, by its very nature, could never lead.

The Future’s Toolkit: AI, Robots, and Lasers

The governance problem is the hardest one, but technology must also evolve to meet the new challenges of the megaconstellation era. The future of SSA and STM will be built on a toolkit of advanced technologies, from artificial intelligence to orbital “cleanup crews.”

Artificial Intelligence: The Essential Tool

Artificial Intelligence (AI) and Machine Learning (ML) are not just “nice-to-have” features for the future of SSA. They are an absolute necessity. The sheer volume of data from new sensors like the Space Fence, combined with the sheer number of conjunctions (50,000 in 6 months) from megaconstellations, is a problem that is fundamentally impossible for humans to solve.

AI is being applied to every step of the process:

  • AI-Driven Data Fusion: AI is the only tool that can perform “sensor-data fusion” at the required scale. It can autonomously take in billions of data points from conflicting optical, radar, and RF sensors, “weigh” the credibility of each, and “fuse” them into a single, high-fidelity “truth”.
  • Tracking and Prediction: Neural networks are being trained on orbital dynamics to learn the “behavior” of satellites, allowing for more accurate predictions of their future paths and reducing the errors in tracking.
  • Fighting “Alert Fatigue”: This is one of the most practical applications. Today, satellite operators are drowning in false-positive collision alerts, which leads to a “culture of ignored alerts”. New AI-driven “Maneuver Decision Support Systems” are being developed. These tools go beyond a simple “probability” number. They analyze the quality of the data and the physical dynamics of the conjunction to provide a more meaningful “Urgency” score. This allows human operators to ignore the 999 “maybes” and focus on the one credible threat that requires action.
  • Characterization and Anomaly Detection: AI models are being used as “watchdogs” to monitor the “telemetry” (health and status data) of satellites in real-time. This has two significant uses.
    1. Safety: The AI can detect the first, faint signs of a fault (“The voltage on solar panel B is fluctuating”), allowing operators to predict a failure before it happens and save the mission.
    2. Security: The AI can flag “anomalous” behavior. For example, it can detect an unplanned trajectory change. This allows operators to answer a critical question: “Did that satellite just maneuver because its thruster is broken, or did it maneuver because it’s a hostile act?”
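A maneuver-decision “urgency” score of this kind might blend the collision probability, the quality of the tracking data, and the time remaining before closest approach. The formula and weights below are purely a hypothetical illustration of the idea; real decision-support systems use far more sophisticated models.

```python
# Hypothetical "urgency" score for a conjunction alert. The weighting scheme
# is illustrative only, not any operator's actual algorithm.
import math

def urgency(prob_collision: float,
            data_quality: float,       # 0.0 (stale tracks) .. 1.0 (fresh, precise)
            hours_to_approach: float) -> float:
    """Return a 0..1 urgency score; higher means act sooner."""
    # Probability term: map 1e-6..1e-3 onto roughly 0..1 on a log scale
    p = min(max((math.log10(prob_collision) + 6) / 3, 0.0), 1.0)
    # Time pressure: grows as closest approach nears (72-hour horizon assumed)
    t = min(max(1 - hours_to_approach / 72, 0.0), 1.0)
    # Poor data quality adds uncertainty, so it nudges the score upward
    return round(0.5 * p + 0.3 * t + 0.2 * (1 - data_quality), 3)

print(urgency(1e-3, 0.9, 6.0))    # credible, well-tracked, imminent: high
print(urgency(1e-6, 0.9, 60.0))   # marginal, distant: low
```

The point of such a score is triage: instead of treating every alert as equal, operators can rank the flood of conjunction warnings and spend their attention on the handful that genuinely demand action.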

The true revolution in AI for SSA is not just better data analysis on the ground. It’s autonomy in space.

The 50,000-maneuver-in-6-months problem is a machine-speed problem that cannot be solved by humans exchanging emails. The only long-term, scalable solution is to put AI on board the satellites themselves – a concept known as “Edge AI”.

This would enable a future scenario where two satellites (from two rival companies, like SpaceX and OneWeb) on a collision course could:

  1. Autonomously detect the threat with their on-board sensors.
  2. Autonomously negotiate a maneuver with each other (“I’ll go high, you go low”).
  3. Autonomously execute the maneuver.

All of this would happen in a matter of seconds, without a single human on the ground ever being involved. This AI-driven, machine-to-machine coordination is not just a “feature”; it is the only way the megaconstellation business model is sustainable.
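One way such machine-to-machine coordination could work is a deterministic tie-break rule that both spacecraft compute independently from shared information, which eliminates the “both move down” failure mode by construction. Everything below – the rule, the function, the example catalog numbers – is a hypothetical sketch, not any operator’s actual protocol.

```python
# Hypothetical tie-break rule for autonomous conjunction deconfliction.
# Both satellites evaluate the same deterministic function on the same
# pair of catalog IDs, so they always reach opposite, compatible decisions
# with no negotiation round-trip at all.
def maneuver_direction(own_id: int, other_id: int) -> str:
    """Lower catalog number lowers its orbit; higher raises (illustrative rule)."""
    if own_id == other_id:
        raise ValueError("a satellite cannot conjunct with itself")
    return "lower" if own_id < other_id else "raise"

# Each spacecraft computes its own move from the shared pair of IDs:
print(maneuver_direction(44713, 45439))  # first satellite → lower
print(maneuver_direction(45439, 44713))  # second satellite → raise
```

Real systems would weigh fuel budgets, mission constraints, and orbit geometry rather than a bare ID comparison, but the core idea is the same: a pre-agreed, deterministic rule replaces ad-hoc human negotiation.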

Active Debris Removal (ADR): The “Cleanup Crew”

AI and STM are for managing traffic. But to solve the underlying debris problem – the Kessler Syndrome – we must also clean up the junk that’s already there. We must actively remove the largest, most dangerous “time bombs” from orbit.

This is the goal of Active Debris Removal (ADR).

The flagship mission for this new technology is the European Space Agency’s ClearSpace-1. Planned for launch between 2026 and 2029, this will be the world’s first mission to rendezvous with, capture, and de-orbit a piece of “uncooperative” debris.

  • The Target: An old, 95-kg ESA satellite called PROBA-1, which has been in orbit since 2001 and is now derelict.
  • The Mission: A “chaser” satellite, developed by the Swiss company ClearSpace, will launch and navigate to PROBA-1. It will then perform a highly complex rendezvous and use a set of four robotic arms to “grab” the tumbling satellite.
  • The End: Once it has a secure grip, the ClearSpace chaser will fire its own engines, acting as a “space tug” to drag PROBA-1 down into the Earth’s atmosphere, where both satellites will safely burn up on reentry.

ClearSpace is just one concept. Other ADR technologies being tested by companies like Astroscale include magnetic docking plates (for future satellites) and even “harpoons” and “nets” for capturing uncooperative targets.

On-Orbit Servicing (OOS): The “Preventative Medicine”

If ADR is the “cure” for the debris-filled orbits, then On-Orbit Servicing (OOS) is the “preventative medicine.”

OOS, often broadened to On-Orbit Servicing, Assembly, and Manufacturing (OSAM), is an emerging capability in which a “servicing vehicle” can fly to a “client” satellite to perform a range of helpful tasks.

These services include:

  • Inspection: A servicer can fly around a high-value satellite that has malfunctioned, take high-resolution pictures, and help operators on the ground diagnose the problem.
  • Repair: Using advanced robotic arms, a servicer could fix a satellite in orbit – for example, by un-sticking a solar panel that failed to deploy.
  • Upgrading: A servicer could attach a new, more modern “backpack” (like a new sensor or computer) to an older, but still-functional, satellite.
  • Refueling: This is the most powerful application. The vast majority of satellites “die” and become space debris not because their components fail, but simply because they run out of fuel. They need propellant for “stationkeeping” (staying in their lane) and for collision avoidance maneuvers. A refueling servicer, like Astroscale’s planned “Life Extension In-Orbit” (LEXI) vehicle, could dock with a client satellite and transfer propellant, adding years of new life to an asset that would otherwise have become junk.

OOS is a sustainability tool. It prevents satellites from becoming debris in the first place, and it reduces the need to launch as many “replacement” satellites, lessening the load on the orbital environment.

The Final Frontier: A “Rules of the Road” for Space

This new toolkit of AI, robotics, and commercial innovation gives humanity the technical means to solve the space traffic problem. But the greatest, highest, and final hurdle is not technical.

The STM problem is “primarily a governance challenge rather than a technical one”. We have the tools to track objects; we lack the political will and legal framework to manage them.

1960s Law for a 2030s Problem

The foundational international space law is the 1967 Outer Space Treaty. This treaty was a triumph of Cold War diplomacy. It established beautiful, sweeping principles:

  • Outer space is the “province of all mankind.”
  • It cannot be claimed as sovereign territory by any nation.
  • States are liable for any damage their space objects cause.
  • The Moon and other celestial bodies shall be used for peaceful purposes.

But the 1967 treaty has a glaring omission, because it was written for an empty sky. It contains no “rules of the road”. It doesn’t define who has the right-of-way in a conjunction. It doesn’t set speed limits. It doesn’t create “lanes” of traffic. It is a 1960s-era legal framework being applied to a 2030s-era traffic jam.

The “Soft Law” Approach

The central body for this legal and diplomatic discussion is the United Nations Committee on the Peaceful Uses of Outer Space (COPUOS), which meets annually in Vienna.

For decades, COPUOS has been stuck in “gridlock,” unable to pass a new, binding treaty (or “hard law”) to update the 1967 framework. The geopolitical disagreements are too deep.

Instead, the international community has shifted to a “soft law” approach. This “soft law” consists of voluntary, non-binding guidelines that no one is forced to sign, but everyone is expected to follow.

  • Examples include the IADC Space Debris Mitigation Guidelines, which created the (mostly voluntary) “25-year rule” stating that operators should de-orbit their satellites within 25 years of their end of life.
  • Another is the COPUOS Guidelines for the Long-Term Sustainability of Outer Space Activities (LTS), a set of best practices that nations can adopt voluntarily.

This “soft law” approach is what led to the 2019 “Mission Shakti” being a low-altitude test. India knew it would be judged by the global community against these non-binding rules.

The Great Stalemate: Security vs. Safety

Why is a new “hard law” treaty so impossible? Because it runs head-first into the “Great Stalemate,” the fundamental conflict between national security and public safety. This is the same “split personality” that has existed since 1957.

An effective, safe, and transparent STM system requires one thing: data. It needs all operators (commercial, civil, and military) to share high-precision, real-time data about where their satellites are and where they plan to maneuver them.

But for a military or intelligence agency, this exact data is one of its most closely guarded secrets.

If the U.S. National Reconnaissance Office (NRO) tells a public, international STM database exactly where its top-secret spy satellite is, and exactly where it plans to be tomorrow, it is no longer a very effective spy satellite. This “tendency toward secrecy,” as one paper puts it, “actively undercuts the safety of operations for all actors”.

This creates the ultimate diplomatic Catch-22. The 1967 Outer Space Treaty guarantees a nation’s “freedom of use” of space. But in the crowded sky of 2025, that freedom of use is now directly at odds with the safe use of space.

To get safety, you must restrict freedom. You must create “rules of the road” and demand data transparency. But no nation, especially a major military power, wants to be the first to accept binding restrictions on its freedom of movement.

This is the final frontier. To be effective, any future global STM system will need to achieve consensus on daunting political and economic issues. It must find a way to get “buy-in” from the key space powers – the United States, China, and Russia. It must solve the “flags of convenience” problem, where an operator might register their satellite in a nation with lax rules to avoid regulation.

The future of space traffic will not be a clean, top-down “ATC” model. It will be a messy, voluntary, and piecemeal system, built on a fragile consensus and a shared desire for survival.

Summary

The journey into space, which began in 1957 with a single satellite’s lonely “beep-beep-beep”, has transformed the orbits above Earth into a complex, crowded, and hazardous domain. What started as a dual mission – a scientific quest for knowledge and a military surveillance imperative to watch for bombers and missiles – has evolved into a global necessity: tracking a vast, invisible junkyard of our own making.

The “Kessler Syndrome,” once a distant 1978 theory about a “debris belt”, became a tangible reality. The 2007 Chinese ASAT test and the 2009 Iridium-Kosmos collision were watershed moments, proving that this “runaway cascade” was possible and that derelict satellites were “time bombs” in orbit. Today, we live in an environment with over 1.2 million lethal-but-untrackable “marble-sized” objects, forcing assets like the International Space Station to dodge these hypervelocity “bullets” on a regular basis.

The future is defined by a new, exponential challenge: the launch of tens of thousands of satellites in commercial megaconstellations. This new “city in the sky” is already overwhelming our antiquated, human-speed, “ad-hoc email” methods of traffic coordination, forcing 50,000 avoidance maneuvers in just six months. This has changed the problem from a passive “debris” issue to an active “traffic” crisis.

The solution is two-fold.

Technologically, the future lies in an advanced toolkit: new, powerful sensors like the Space Fence; the essential, non-negotiable power of AI to automate tracking, data fusion, and anomaly detection; and new “sustainability” missions like Active Debris Removal (ADR) to clean up the past, and On-Orbit Servicing (OOS) to prevent future junk.

But the technology is not the highest hurdle. The true challenge is one of governance. The creation of a functional, global Space Traffic Management system is a diplomatic and political struggle, pitting the vital need for public safety against the iron-clad demands of national security. Success will not look like a simple “Air Traffic Control for space.” It will require a new, cooperative framework – a “rules of the road” built on “soft law” and fragile consensus – to ensure that the final frontier remains open, safe, and sustainable for generations to come.
