
- Key Takeaways
- A New Kind of Warfare
- What Space Services Actually Provide
- From Orbit to Target
- The Algorithms Behind the Kill Chain
- Swarms, Satellites, and Scale
- The Governance Void
- The Case for a Ban, and Why It's Proving Difficult
- Space Services as a Strategic Target
- The Private Sector's Uncomfortable Position
- Summary
- Appendix: Top 10 Questions Answered in This Article
Key Takeaways
- Space infrastructure, from GPS to SATCOM, underpins every modern autonomous weapons system.
- Commercial satellite firms now routinely supply targeting data to military forces globally.
- International law has yet to produce a binding treaty governing lethal autonomous weapons.
A New Kind of Warfare
In November 2017, a seven-minute short film quietly rattled the defense policy community. Slaughterbots, commissioned by the Future of Life Institute, depicted small, palm-sized drones using facial recognition to hunt and kill specific individuals without any human operator making a moment-to-moment decision. Stuart Russell, the AI researcher at the University of California, Berkeley who helped produce the film, was trying to displace Hollywood’s Terminator narrative with something more practically worrying: machines that were cheap, scalable, and already technically plausible.
The film’s reception was divided. Paul Scharre of the Center for a New American Security pushed back on the scenario as overstated in its near-term feasibility. His book Army of None offers a measured accounting of where the technology actually stood at the time, and where it was headed. Others pointed out that the building blocks of such a system (off-the-shelf drones, computer vision algorithms, compact explosive charges) were all commercially available without any classified knowledge. Less than four years after the film’s release, a United Nations Panel of Experts would document what appeared to be exactly this kind of weapon in a real conflict.
In March 2021, the U.N. Panel of Experts on Libya released a report on the conflict that included a widely discussed passage describing retreating Haftar-affiliated forces as being “hunted down and remotely engaged” by unmanned combat aerial vehicles or lethal autonomous weapons systems, including the STM Kargu-2, a rotary-wing loitering munition produced by the Turkish defense contractor STM. The report stated that these systems were “programmed to attack targets without requiring data connectivity between the operator and the munition,” describing this as a true “fire, forget and find” capability. The episode drew sustained international attention and prompted analysis from organizations such as the International Committee of the Red Cross and the Lieber Institute, both of which examined the significance of the report’s description of autonomous attack functions in combat. The public debate was intensified by the response from STM and Turkish defense officials, who rejected the broader interpretation that the Kargu-2 had been used as a fully autonomous killer robot. That dispute, also summarized by Project Ploughshares, made the Libya episode one of the most prominent early case studies in the modern debate over lethal autonomous weapons.
The Kargu-2 itself is a quadcopter drone that uses machine learning-based object classification and can operate in both manual and autonomous modes. STM markets it for anti-personnel applications and has touted a swarming capability allowing up to twenty units to coordinate as a networked strike group. Notably, its design does not require satellite or network connectivity for close-range operations, making it effective in GPS-denied or electronically contested environments. That particular characteristic placed it at an interesting edge of the space services discussion. The weapon doing the killing may not have needed a satellite signal in its final moments of operation, but the broader military campaign that deployed it relied heavily on orbital assets for intelligence, surveillance, and situational awareness.
The relationship between lethal autonomous systems and space infrastructure is not always direct. It is often a layer deeper.
What Space Services Actually Provide
To map what “space services” means in a military context, it helps to distinguish four broad categories: positioning and navigation, communications, Earth observation, and space domain awareness. Each of these categories intersects with autonomous weapons development in a different way, and at a different point in what military planners call the kill chain.
The Global Positioning System, operated by the United States Space Force, underpins precision navigation for an enormous range of military systems. Autonomous platforms of every kind, from loitering munitions and reconnaissance drones to naval vessels, depend on GPS signals to determine their position. Russia’s GLONASS, Europe’s Galileo, and China’s BeiDou provide equivalent capabilities to their respective operators. For any autonomous platform that functions without a human operator actively guiding it, GPS is frequently the difference between a targeted strike and a wandering machine. AeroVironment’s Switchblade 600, widely supplied to Ukrainian forces, depends on GPS guidance through most of its flight before switching to electro-optical or infrared terminal homing. The same applies to a wide range of loitering munitions from Israel Aerospace Industries, Kratos Defense, and others.
Satellite communications, abbreviated SATCOM, handle data transfer between a remote operator, a command center, or an automated coordination hub and whatever autonomous platform is deployed. SpaceX’s Starlink constellation, operating in low Earth orbit, became the most visible commercial SATCOM example in active conflict, partly because of its extraordinary role in Ukraine. Its low-latency, high-bandwidth connectivity, independent of ground-based infrastructure, made it a strategic asset almost immediately after Russia’s full-scale invasion in February 2022. But Starlink does not need to connect directly to an autonomous weapon to shape the battlefield. When it links a drone reconnaissance unit to an artillery targeting cell, or relays imagery from a naval drone to a command vessel hundreds of kilometers away, it becomes a node in the sensor-to-shooter loop that drives autonomous and semi-autonomous strike decisions.
Earth observation satellites supply the imagery and persistent monitoring that allow military planners to build target lists, confirm activity patterns, track troop movements, and assess battle damage. Planet Labs, a San Francisco-based company that operates the world’s largest constellation of small optical imaging satellites, provides daily imagery of every point on Earth at resolutions useful for tracking vehicles, structures, and large formations. Maxar Technologies supplies higher-resolution imagery widely used by governments and agencies for precision applications. ICEYE, a Finnish company specializing in synthetic aperture radar satellites, can image through cloud cover and in full darkness. Capella Space, another radar imaging operator, and BlackSky, which offers near-real-time geospatial intelligence from low Earth orbit, have both secured contracts with US defense agencies.
By 2024, commercial satellite imagery was, by the assessment of multiple defense observers, embedded in military targeting and situational awareness pipelines as a routine matter rather than an exceptional one. The data no longer flowed only to classified government systems. It increasingly reached commercial platforms supporting open-source intelligence analysis, which fed into military planning at multiple levels. This means the companies producing commercial satellite imagery have become, whether they intended to or not, suppliers to war.
Space domain awareness forms the fourth category. The US Space Force manages the Space Surveillance Network, a global array of radar and optical sensors that tracks more than 40,000 objects in orbit and maintains a catalog used to assess collision risks, identify debris clouds, and monitor satellite activity by other nations. This awareness function is not a weapons capability in itself, but it feeds directly into the protection of the GPS, SATCOM, and Earth observation assets that autonomous weapons systems depend on.
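To make the screening function concrete, the sketch below (Python, purely illustrative, and not drawn from any actual Space Surveillance Network software) flags a potential conjunction between two cataloged objects by propagating them in straight lines over a short window and checking their closest approach. Operational screening tools propagate full orbital dynamics; the structure of the check is the same.

```python
import numpy as np

def closest_approach(r1, v1, r2, v2, window_s=600.0, step_s=1.0):
    """Find the closest approach between two tracked objects.

    r1, r2: position vectors in km; v1, v2: velocities in km/s.
    Straight-line propagation is only defensible over a short
    window; real screening uses full orbit propagation.
    """
    times = np.arange(0.0, window_s, step_s)
    rel = (r2 - r1) + np.outer(times, v2 - v1)  # relative position over time
    dists = np.linalg.norm(rel, axis=1)
    i = int(np.argmin(dists))
    return times[i], dists[i]

# Two hypothetical low Earth orbit objects on converging tracks.
t, d = closest_approach(
    np.array([6771.0, 0.0, 0.0]), np.array([0.0, 7.67, 0.0]),
    np.array([6771.0, 5.0, 2.0]), np.array([0.0, 7.62, -0.015]),
)
print(f"Closest approach: {d:.2f} km at t+{t:.0f} s")
if d < 5.0:  # screening thresholds of a few kilometers are typical
    print("Conjunction alert: flag for refined analysis")
```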
From Orbit to Target
The Starlink story in Ukraine is the most thoroughly documented case of commercial space services reshaping a military conflict, and it illuminates the indirect but significant way that orbital infrastructure connects to autonomous weapons operations on the ground.
By early 2025, Ukraine had secured at least 47,000 Starlink terminals, with the majority supplied through partner governments including Poland, Germany, and the United States, as well as directly through SpaceX. Ukrainian reconnaissance units used Starlink to relay imagery and targeting data from surveillance drones to artillery units, shortening sensor-to-shooter loops to a degree that would not have been possible with radio alone. Naval drones equipped with Starlink terminals conducted operations in the Black Sea at ranges that would have been impractical without a reliable satellite link. The Magura V5, a Ukrainian unmanned surface vessel, was credited with sinking the Russian patrol ship Sergei Kotov in March 2024 while using satellite-assisted guidance. Drone boats strapped with explosives attempted a Starlink-guided sneak attack on the Russian Black Sea Fleet at Sevastopol as early as September 2022.
The complication arrived in 2024, when Ukrainian defense observers documented that Russian forces had acquired Starlink terminals through illicit supply chains running through intermediary countries including the United Arab Emirates and Kyrgyzstan. Starlink terminals were found aboard Russian Shahed-136 kamikaze drones and aboard the RD-8, a Russian mothership drone reportedly capable of controlling other loitering munitions using satellite connectivity. The Institute for the Study of War documented the first Starlink-equipped Shahed variants in September 2024. This was not a use case SpaceX had sanctioned. In May 2024, the US Department of Defense stated it was working on measures to prevent unauthorized Starlink use. By June 2024, the Pentagon reported that several hundred unauthorized terminals had been disabled. Despite this, reporting through late 2025 and into 2026 continued to surface imagery of Starlink-equipped Russian drones, including the Molniya tactical drone spotted near Pokrovsk in November 2025 and the BM-35 drone that struck a Ukrainian airfield in January 2026.
A different kind of space-assisted autonomous system was simultaneously unfolding in Gaza. The Israel Defense Forces deployed a suite of AI-assisted targeting tools that drew on satellite imagery, drone surveillance, signals intelligence, and phone-location data to generate and prioritize strike lists at a pace no human team could replicate unassisted. The Gospel, developed by Unit 8200 of the Israeli Intelligence Corps, automatically reviewed surveillance data looking for buildings and structures associated with Hamas or Palestinian Islamic Jihad and recommended targets to human analysts. Satellite images of weapons, equipment, and facilities fed directly into the Gospel’s inference pipeline. The system had been introduced in an earlier form around 2021, built to accelerate target selection using data from drones, satellites, communications intercepts, and social media. A companion tool called Lavender maintained an AI-generated database of Palestinian men algorithmically linked to armed groups, reportedly flagging as many as 37,000 individuals at the height of its use in the weeks following October 7, 2023. A third tool, “Where’s Daddy,” reportedly tracked individuals by phone location, using presence at home with family members as a trigger point for strike timing.
Israel has consistently described these as decision-support tools rather than autonomous weapons systems, and the IDF has maintained that human analysts authorize every strike. Intelligence officers who spoke anonymously to +972 Magazine and the Guardian described a different operational pace: some analysts approved Lavender-generated targets after as few as twenty seconds of review, with the AI’s outputs treated almost as a default. The IDF disputes this characterization. What is not disputed is that satellite imagery of buildings, vehicles, and personnel fed directly into these analytical pipelines. Orbital data was part of the targeting architecture regardless of whether any individual system qualified as autonomous under international humanitarian law.
The Algorithms Behind the Kill Chain
Anduril Industries, founded in 2017 by Palmer Luckey and a team from Oculus and other technology ventures, has built its business model around exactly this kind of networked, AI-driven architecture. Its Lattice platform functions as what the company describes as a persistent AI integration layer, connecting sensors, weapons, drones, and surveillance towers into a real-time decision network.
By 2024, the Lattice platform had been selected by the US Space Force for use in surveillance networks, and Anduril had won contracts to manage data from the Space Surveillance Network, the global sensor array that tracks objects in orbit. This was not merely battlefield sensor fusion. Lattice was being integrated into space domain awareness. In September 2024, Anduril announced plans to design, build, and launch fully integrated space systems by the end of 2025, explicitly citing applications in space domain awareness, on-orbit sensor data processing, and satellite defense. Gokul Subramanian, Anduril’s senior vice president of space engineering, stated that Lattice would “autonomously monitor and manage space-based assets, improving situational awareness and reducing operator workload.” By March 2026, Anduril had acquired ExoAnalytic Solutions, a commercial space tracking company, deepening its position in the space intelligence market.
Analysts at Sacra estimated that Anduril reached approximately $1 billion in revenue in 2024, up from roughly $400 million in 2023. In June 2025, the company raised $2.5 billion from venture capital firms including Founders Fund and 1789 Capital. By March 2026, reporting indicated Anduril was raising a further $4 billion round at a valuation of $60 billion, led by Andreessen Horowitz and Thrive Capital. The company that started as a defense technology startup was now operating at a scale that placed it among the most significant participants in American national security.
In March 2025, the US Special Operations Command awarded Anduril an $86 million, three-year contract to serve as its Mission Autonomy Systems Integration Partner. Under this arrangement, Anduril would support SOCOM in deploying government-owned and commercial mission autonomy software across a range of robotic platforms, enabling what the command described as “coordinated mass effects” from “teams of diverse autonomous systems.” Lattice had already been selected for the Pentagon’s Replicator initiative, a program intended to field low-cost, disposable autonomous systems, including loitering munitions and counter-drone platforms, at scale. Anduril was one of three firms chosen by the Defense Innovation Unit to manage swarms of hundreds or thousands of uncrewed assets across multiple domains. In November 2025, Anduril also formed the Edge-Anduril Production Alliance with UAE-based weapons company Edge Group, focused on manufacturing autonomous weapons systems with full-rate production targeted by the end of 2028.
The Replicator initiative itself reflects a strategic judgment, pressed by Deputy Secretary of Defense Kathleen Hicks during its 2023 rollout, that future competition, particularly with China, would involve autonomous systems operating at a pace and scale exceeding purely human decision cycles. The initiative sought platforms capable of functioning even when satellite communications were degraded or denied, which drove requirements for onboard intelligence and edge computing rather than purely cloud-based or satellite-linked architectures. According to a November 2025 Congressional Research Service review, only “hundreds” rather than the originally projected “thousands” of systems materialized by the August 2025 target date, and technical challenges with software capable of commanding large numbers of different drone types persisted. The program was subsequently transferred to a new organization called the Defense Autonomous Warfare Group within SOCOM. The difficulties were not a sign that autonomous weapons were retreating. They were a sign that fielding them at scale was harder than procurement documents suggested.
This is the practical tension at the center of the space services conversation. The most sophisticated autonomous systems want to be connected to orbital infrastructure for maximum effectiveness. They also need to function when that connection is cut. Space services provide the full-capability ceiling. Onboard AI provides the fallback floor.
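A minimal sketch makes the ceiling-and-floor logic concrete. The Python below is hypothetical mode-selection code, invented for illustration and corresponding to no real platform’s software: prefer satellite positioning, degrade to a relay link, and bottom out on onboard sensing.

```python
from enum import Enum, auto

class NavMode(Enum):
    GPS_WAYPOINT = auto()  # full-capability ceiling: satellite positioning
    SATCOM_RELAY = auto()  # degraded: position shared over a relay or mesh link
    ONBOARD_ONLY = auto()  # fallback floor: inertial dead reckoning plus machine vision

def select_nav_mode(gps_lock: bool, relay_link: bool) -> NavMode:
    """Choose the best navigation source currently available.

    Hypothetical logic: real platforms blend sources in a filter
    rather than switching discretely, but the priority order holds.
    """
    if gps_lock:
        return NavMode.GPS_WAYPOINT
    if relay_link:
        return NavMode.SATCOM_RELAY
    return NavMode.ONBOARD_ONLY

# A jammed environment: no GPS lock, but a relay link is still reachable.
assert select_nav_mode(gps_lock=False, relay_link=True) is NavMode.SATCOM_RELAY
```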
The table below lists several notable autonomous and semi-autonomous weapons systems alongside their developers and assessed space service dependencies.
| System | Developer | Type | Space Dependency |
|---|---|---|---|
| STM Kargu-2 | STM, Turkey | Loitering munition | Optional GPS; designed for GPS-denied operation |
| AeroVironment Switchblade 600 | AeroVironment, USA | Tube-launched loitering munition | GPS navigation; operator datalink |
| Anduril Altius 700M | Anduril Industries, USA | Fixed-wing loitering munition | GPS; Lattice AI mesh networking |
| Shahed-136 | Shahed Aviation Industries, Iran | Kamikaze drone | GPS; illicitly acquired Starlink terminals documented 2024 |
| IAI Harop | Israel Aerospace Industries | Anti-radiation loitering munition | GPS; passive radar homing |
| Anduril YFQ-44A | Anduril Industries, USA | Collaborative combat aircraft | GPS; Lattice platform; SATCOM for command coordination |
Swarms, Satellites, and Scale
The Perdix drone swarm test, conducted on October 26, 2016 at China Lake, California, involved 103 small autonomous drones launched from three F/A-18 Super Hornet aircraft. The drones, originally designed by Massachusetts Institute of Technology engineering students and subsequently modified for military use by MIT Lincoln Laboratory beginning in 2013, demonstrated collective decision-making, adaptive formation flying, and self-healing swarm behavior. Strategic Capabilities Office Director William Roper described them as “a collective organism, sharing one distributed brain for decision-making and adapting to each other like swarms in nature.” Secretary of Defense Ash Carter called it “cutting-edge innovation that will keep us a step ahead of our adversaries.” The Perdix program was developed with a reported budget of $20 million, and the Pentagon subsequently sought companies capable of producing the drones in batches of 1,000.
Perdix itself carried no warheads. But it established the swarm coordination model that now forms the conceptual basis of programs like Replicator and the autonomous systems being developed under AUKUS Pillar II, the technology-sharing arrangement between Australia, the United Kingdom, and the United States that explicitly prioritizes autonomous underwater vehicles, advanced AI, and quantum capabilities. Australia’s Ghost Shark autonomous submarine, developed by Anduril under a contract with the Royal Australian Navy and the Defence Science and Technology Group signed in May 2022, is among the most prominent Pillar II-aligned projects. The program of record is valued at A$1.7 billion.
Swarms change the space services calculus in ways that single-platform analysis does not capture. A single loitering munition can use GPS to guide itself and send data via SATCOM. A swarm of hundreds encounters a different architecture challenge. Full GPS dependency for each individual drone creates a massive jamming surface. Full SATCOM dependency creates bandwidth constraints and latency problems when hundreds or thousands of units need to coordinate in real time. The solution increasingly employed is a hierarchical model. A small number of mothership or relay drones maintain satellite connectivity and share targeting data with swarm members through mesh networking, while individual sub-munitions execute using onboard AI that does not require a continuous link to orbit.
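The hierarchical pattern can be sketched in a few lines. The Python below is purely illustrative (no real system’s protocol is described): only the mothership holds a satellite link, and targeting data spreads member-to-member through mesh gossip, so members already holding the data keep operating even if the uplink is cut.

```python
from dataclasses import dataclass, field

@dataclass
class SwarmMember:
    member_id: int
    neighbors: list["SwarmMember"] = field(default_factory=list)  # mesh links only
    target_grid: tuple | None = None  # last targeting fix received

    def receive(self, grid: tuple) -> None:
        # Accept new data once, then gossip it onward; the guard
        # prevents infinite loops when the mesh contains cycles.
        if self.target_grid != grid:
            self.target_grid = grid
            for peer in self.neighbors:
                peer.receive(grid)

@dataclass
class Mothership:
    entry_node: SwarmMember  # the one member within direct radio range

    def relay_from_satellite(self, grid: tuple) -> None:
        # The only node with SATCOM; it injects data into the mesh once.
        self.entry_node.receive(grid)

# Three members in a chain; data injected at one end reaches all of them.
a, b, c = SwarmMember(1), SwarmMember(2), SwarmMember(3)
a.neighbors, b.neighbors, c.neighbors = [b], [a, c], [b]
Mothership(entry_node=a).relay_from_satellite((48.0, 37.8))
assert c.target_grid == (48.0, 37.8)
```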
China revealed the Jiu Tian mothership drone at the Zhuhai Airshow in November 2024. This 10-ton UAV, equipped with a modular payload bay, can deploy smaller drone swarms and is reported to fly at speeds of up to 900 kilometers per hour with a range of 2,000 kilometers. The mothership concept, whether American, Chinese, or Russian in design, reflects the same architectural logic: combine satellite connectivity at the platform level with autonomous edge intelligence at the swarm member level.
Russia’s reported use of the RD-8 mothership drone in Ukraine illustrates this architecture in active conflict. The RD-8 uses satellite connectivity to maintain communications between a distant operator and a collection of smaller loitering munitions it deploys. Russian forces did not acquire Starlink specifically for this purpose. They acquired it through illicit channels because it was militarily dominant in the theater and then found ways to embed it into drone operations. That improvised adaptation is itself a data point about how commercial space services function as dual-use infrastructure. Companies that build and operate satellite constellations for commercial customers cannot fully control who benefits from the capabilities they provide.
The Governance Void
On December 22, 2023, the United Nations General Assembly adopted Resolution 78/241, the first General Assembly resolution specifically addressing lethal autonomous weapons systems. It passed with 152 votes in favor, four against, and eleven abstentions. The resolution acknowledged the “serious challenges and concerns” raised by AI and autonomous weapon technologies and called on UN Secretary-General António Guterres to solicit member states’ views and prepare a report by September 2024.
The four votes against came from Belarus, Mali, Niger, and Russia. Among those abstaining were China, India, Iran, Israel, North Korea, Saudi Arabia, and Turkey. These are not peripheral actors in the autonomous weapons debate. China, India, Iran, Israel, and Turkey have all made substantial investments in military AI and autonomous systems. Israel’s use of the Gospel and Lavender targeting tools in Gaza was already underway. Turkey had deployed the Kargu-2. The votes reflected, with unusual transparency, who had the most to lose from binding constraints.
Discussions on lethal autonomous weapons have been conducted within the framework of the Convention on Certain Conventional Weapons (CCW) in Geneva since May 2014, more than eleven years without a binding outcome. The CCW operates by consensus, meaning a single member state can block any agreement. In November 2023, CCW states agreed to meet for up to twenty days across 2024 and 2025 to work toward “a set of elements of an instrument, without prejudging its nature.” The phrasing was crafted to avoid committing to negotiating, adopting, or enforcing anything. Major military powers had once again successfully drained urgency from the language before it could constrain their procurement.
The Campaign to Stop Killer Robots, a coalition of NGOs that launched in 2013 and includes Human Rights Watch as a founding member, has continued to push for a new international treaty prohibiting autonomous weapons capable of selecting and attacking human targets without meaningful human control. Human Rights Watch issued a statement in February 2025 reiterating the urgency of treaty negotiations. The International Committee of the Red Cross has separately called for a prohibition on autonomous systems that apply force directly against human beings and for regulations requiring meaningful human control over other types of autonomous weapons. Importantly, the ICRC’s position does not propose banning all military AI. Automated missile defense systems that protect against incoming rockets and missiles are not the target. The concern is specifically with systems that make individual life-and-death targeting decisions without a human making an individual judgment each time.
What constitutes “meaningful human control” is deeply contested. When an Israeli analyst approves a Lavender-generated target after twenty seconds of review, having examined a data summary produced by an algorithm trained on disputed datasets, is that meaningful human control? When a Kargu-2 is programmed before launch and then engages autonomously, has the human decision occurred at the programming stage? When an Anduril Lattice node allocates a loitering munition to a target through real-time sensor fusion, who made the decision? These are not rhetorical questions. They have concrete implications for accountability under international humanitarian law, for the application of proportionality and distinction requirements, and for criminal responsibility when something goes wrong.
The ICRC’s 2021 position statement expressed the core concern precisely: “The process of functioning risks effectively substituting human decisions about life and death with sensor, software and machine processes.”
The Case for a Ban, and Why It’s Proving Difficult
This article takes a clear position: a binding international prohibition on fully autonomous lethal weapons systems is necessary. The argument rests not on the claim that autonomous systems always perform worse than human soldiers, but on the simpler point that removing human judgment from individual life-and-death decisions is a categorical moral error that no efficiency argument can overcome.
Ronald Arkin, a roboticist at Georgia Tech, proposed in his book Governing Lethal Behavior in Autonomous Robots that properly designed autonomous systems could potentially comply with international humanitarian law more consistently than human soldiers under combat stress. Soldiers panic, commit atrocities out of rage, fire into crowds under ambiguous threat assessments. Machines follow code. Arkin’s “ethical governor” framework suggested it might eventually be possible to build systems that apply the laws of armed conflict more reliably than combatants in the field.
Whether that turns out to be true in actual combat conditions, with adversarial inputs, degraded sensors, edge cases the training data never covered, and opponents actively trying to confuse the system’s classification engine, is simply not known with the confidence that deployment decisions seem to assume. The history of machine learning systems producing confident wrong answers in high-stakes contexts, from facial recognition errors to misidentified radar tracks, suggests that a 90 percent accuracy rate in a training environment is not equivalent to reliable discrimination between a combatant and a civilian in a city street.
The Lavender system’s reported error rate makes this concrete. Even accepting Israel’s characterization of Lavender as a database rather than an autonomous weapons system, a machine learning tool making incorrect inferences about approximately 1 in 10 individuals it flags for potential lethal action, operating at the scale and pace described by intelligence officers, represents a consequence that no individual human decision-maker could have generated alone. One source told +972 Magazine that the number of civilians considered acceptable to kill alongside junior Hamas targets was set at up to twenty in the initial weeks of the war, and that operators moved quickly through targets because “the system, the targets never end.” The IDF has disputed the accuracy of this reporting. But the underlying structural question, what happens when AI-generated target recommendations scale faster than human verification capacity, does not depend on which account is more accurate. The question is present regardless.
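The scale problem is easy to make concrete using the article’s own reported figures, both of which are contested:

```python
# Back-of-envelope only; both inputs come from the +972 reporting
# and are disputed by the IDF.
flagged = 37_000     # individuals reportedly flagged by Lavender at peak
error_rate = 0.10    # roughly 1 in 10 inferences reportedly incorrect
print(f"Implied misidentifications: {flagged * error_rate:,.0f}")
# -> 3,700 people wrongly linked before any human review occurs
```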
There is a second argument for a ban that has nothing to do with the performance of individual systems: proliferation. The Kargu-2 is not expensive or technically exotic. Researchers writing in Lawfare estimated that a manufacturer with $10 million and access to off-the-shelf components could produce functional autonomous anti-personnel drones at a unit cost of roughly $200 each, with the base airframe and camera alone costing approximately $45. Machine learning libraries capable of object classification are publicly available. A functional autonomous loitering munition does not require the resources of a nation-state to produce or deploy.
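The arithmetic behind that estimate (the figures are Lawfare’s, not original calculations) is stark:

```python
budget = 10_000_000  # Lawfare scenario: total manufacturer budget, USD
unit_cost = 200      # estimated all-in cost per autonomous drone
base_parts = 45      # off-the-shelf airframe and camera alone
print(f"Units producible: {budget // unit_cost:,}")  # 50,000
print(f"Per-unit budget beyond airframe and camera: ${unit_cost - base_parts}")
```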
An international framework that legitimizes autonomous weapons, even with nominal meaningful human control requirements, provides cover for actors who will interpret that requirement as loosely as possible while building systems that are functionally indistinguishable from fully autonomous ones. A ban, like the Ottawa Treaty on anti-personnel landmines adopted in 1997 or the Chemical Weapons Convention, creates a clear line that, while not perfectly enforceable, shapes the behavior of adherent states and delegitimizes use by those who refuse to sign.
The counterargument, developed carefully by scholars including Paul Scharre, is that a ban would be ineffective because autonomous weapons are technically difficult to define, easy to disguise in dual-use hardware, and produced by state actors who would simply ignore any treaty. This is a serious objection. Russia’s sustained violations of the Intermediate-Range Nuclear Forces Treaty before its collapse in 2019 demonstrated that even formal treaty commitments from major nuclear powers do not guarantee compliance. The United States itself has declined to formally join the Ottawa Treaty, though it has significantly reduced its anti-personnel mine stockpile.
But the historical record of weapons prohibition also shows that normative pressure produces real effects. The Ottawa Treaty’s 164 state parties have collectively destroyed the vast majority of the world’s stockpiled anti-personnel mines, and states that continue to use such mines face significant diplomatic and reputational costs even when not treaty members. The humanitarian norm has outlasted the diplomatic ambiguity. The question for autonomous weapons is whether the international community will move toward that kind of framework before the technology’s proliferation advances to the point where norms become irrelevant.
Space Services as a Strategic Target
One dimension of the slaughterbots conversation that receives less attention than it warrants is the vulnerability of space services themselves. Autonomous weapons that depend on GPS, SATCOM, or satellite-derived targeting data have an attack surface that extends into orbit.
Russia demonstrated direct-ascent anti-satellite capability in November 2021 when it destroyed its own Cosmos 1408 satellite, generating a debris field of over 1,500 trackable objects in low Earth orbit and drawing international condemnation. China had conducted a similar demonstration in January 2007, destroying the Fengyun-1C weather satellite and creating a debris cloud that still poses collision risks to low Earth orbit operations. Both tests served as implicit signals about the fragility of satellite infrastructure in a high-intensity conflict, even if neither was framed explicitly as a threat.
GPS jamming and spoofing are lower-threshold options that state actors have been deploying routinely. The GPSJam.org monitoring service, which aggregates GPS anomaly data from commercial aircraft navigation systems, has documented extensive GPS interference across Eastern Europe, the Middle East, and parts of Asia throughout 2024 and 2025. Much of this interference is assessed to be deliberate military jamming. In a high-intensity conflict, an adversary capable of degrading GPS signals over a contested region can simultaneously degrade the navigation layer that loitering munitions and autonomous ground vehicles depend on.
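A rough sketch of how such interference is detected in the open: aggregate aircraft-reported navigation integrity values into geographic cells and flag cells where a large share of aircraft report degraded accuracy. The Python below is an assumption-laden simplification, not GPSJam’s actual pipeline; the field names and thresholds are invented.

```python
from collections import defaultdict

def flag_jammed_cells(reports, bad_nic_threshold=6, min_reports=20, bad_fraction=0.5):
    """Flag grid cells where many aircraft report degraded GPS.

    reports: iterable of (lat, lon, nic) tuples, where nic is the ADS-B
    Navigation Integrity Category (low values imply poor position integrity).
    Thresholds here are illustrative assumptions, not calibrated values.
    """
    cells = defaultdict(lambda: [0, 0])  # cell -> [degraded_count, total_count]
    for lat, lon, nic in reports:
        cell = (round(lat), round(lon))  # bin into ~1-degree grid cells
        cells[cell][1] += 1
        if nic < bad_nic_threshold:
            cells[cell][0] += 1
    return [cell for cell, (bad, total) in cells.items()
            if total >= min_reports and bad / total >= bad_fraction]
```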
This is why design specifications like the Kargu-2’s machine vision-based terminal guidance are not simply marketing details. They reflect the practical battlefield expectation that GPS will be contested. The hierarchical swarm architecture, with motherships maintaining satellite connectivity while sub-munitions operate through mesh networking, is another response to the same reality. Autonomous systems are being designed from the outset to function in environments where the orbital infrastructure they prefer may be partially or fully unavailable.
Anduril’s acquisition of ExoAnalytic Solutions in March 2026 reflects the logical conclusion of this dynamic. A company that builds autonomous weapons systems also needs to track and defend the satellites those systems depend on. The US Space Force has expanded its Space Surveillance Network and invested in new ground-based and orbital sensors. As autonomous weapons become more numerous and more satellite-dependent, protecting space services becomes a military operational objective tied directly to the performance of the weapons themselves.
There is a circular quality to this logic that deserves attention. Autonomous weapons depend on space services to perform at their rated capability. Protecting space services becomes a military objective because autonomous weapons depend on them. The militarization of space services deepens, and with it the risk that a future conflict sees direct attacks on commercial and military satellite infrastructure. Commercial operators like Planet Labs, Maxar, ICEYE, and SpaceX are drawn further into that circle whether they intend to be or not.
The Private Sector’s Uncomfortable Position
Commercial space companies have not fully reckoned with the role they play in enabling autonomous warfare. Starlink’s entanglement in the Ukraine conflict, on both sides of the frontline, is the most visible example. But Planet Labs’ daily global imaging, Maxar’s high-resolution targeting support, ICEYE and Capella’s all-weather radar coverage, and BlackSky’s near-real-time revisit capability are now routine parts of military intelligence pipelines across multiple conflicts.
These companies operate under US export control regulations and arms trafficking laws, which set limits on what they can provide to which parties. But the rules were not written with autonomous AI targeting systems in mind, and the pace at which commercial imagery flows into military AI pipelines, often through data brokering arrangements several steps removed from any direct government contract, makes regulatory oversight difficult to apply consistently.
Palantir Technologies, the data analytics firm co-founded by Peter Thiel, told TIME magazine in 2023 that its software was “responsible for most of the targeting” against Russian forces in Ukraine, describing a system that presented commanders with targeting options compiled from satellites, drones, open-source intelligence, and battlefield reports. This placed a commercial software company at the operational center of one of the most consequential military conflicts of the post-Cold War era. Palantir’s integration with the Pentagon’s Project Maven and the Maven Smart System gave it an institutionalized role in US military AI targeting that has expanded significantly since 2017, when the project began.
Project Maven’s early history is instructive. In 2018, thousands of Google employees signed a letter protesting the company’s provision of AI tools to the US Department of Defense for analyzing drone surveillance footage. The protest led Google to decline renewal of the Maven contract. The backlash demonstrated that at least some parts of the commercial technology sector were unwilling to participate in autonomous targeting systems without public debate. It also demonstrated the limits of that resistance: Maven continued, supported by other contractors, and by 2025 had become an operational capability deployed at scale.
The movement of talent, capital, and government contracts through the commercial technology sector into autonomous weapons applications is so far advanced that calling it a blurring of civilian and military technology understates the degree of integration. For the commercial space industry specifically, the question is whether companies that built their business models on open-access imagery and commercial connectivity can manage the accountability implications of being integral parts of systems that make lethal targeting decisions.
There is no clear regulatory answer to this question. US export control law has not been updated at pace with commercial space capability. International frameworks like the Wassenaar Arrangement, which governs dual-use technology exports, were designed for hardware transfers between states and do not map cleanly onto data services delivered via satellite subscription. The gap between the existing legal architecture and the actual operating environment is wide and is not narrowing.
Summary
The trajectory of autonomous weapons development since 2017 has moved faster than governance frameworks and faster than public understanding. What the Slaughterbots film depicted as a near-future scenario arrived, in partial and contested form, in Libya in 2020, in Ukraine from 2022 onward, and in Gaza from late 2023. The specific forms were messier, more legally ambiguous, and more dependent on commercial infrastructure than the film suggested. But the core dynamic, machines making or substantially shaping lethal targeting decisions without adequate human oversight, was present in all three cases.
Space services form the enabling layer of this transformation in ways that have received insufficient attention. GPS precision, SATCOM connectivity, and commercial satellite imagery do not appear in most public discussions of autonomous weapons. They are infrastructure, taken for granted until they become contested, at which point the weapons systems that depend on them are revealed to have an orbital dimension that is militarily significant and vulnerable. The actors developing the most capable autonomous systems, particularly the United States, China, and Israel, are simultaneously the actors most invested in protecting and exploiting space services, and least interested in binding international constraints on the systems that make use of them.
Something that has not received enough emphasis in this debate: the commercial space industry’s role as an autonomous warfare enabler is both unintentional and accelerating. Planet Labs launched with an explicit mission of using satellite imagery for peaceful applications. Starlink was built for consumer and enterprise internet access. These companies did not set out to supply infrastructure to AI targeting systems. They are doing so anyway, partly because their technology is inherently dual-use, partly because government defense contracts are financially attractive, and partly because the regulatory environment has not drawn clear lines about what is and is not permissible. Addressing autonomous weapons seriously means addressing the space services layer that sustains them, and that conversation has barely begun.
Appendix: Top 10 Questions Answered in This Article
What is a slaughterbot?
A slaughterbot is an autonomous weapons system that uses artificial intelligence, typically including computer vision and machine learning, to identify, select, and engage human targets without a human operator making individual targeting decisions. The term was popularized by a 2017 short film produced by the Future of Life Institute. Such systems are more formally described as lethal autonomous weapons systems, or LAWS.
What was the significance of the STM Kargu-2 incident in Libya?
In March 2021, a UN Panel of Experts on Libya reported that Turkish-made STM Kargu-2 drones had “hunted down and remotely engaged” retreating forces in a March 2020 engagement without requiring data connectivity between the operator and the munition. The incident is widely cited as the first documented potential use of a fully autonomous lethal weapons system in active combat, though whether it resulted in autonomous kills without human authorization remains disputed. STM, the manufacturer, has denied that the drone operated fully autonomously.
How does GPS support autonomous weapons systems?
GPS provides navigation precision that allows autonomous platforms to operate in unfamiliar terrain without real-time human guidance. Loitering munitions, reconnaissance drones, and naval unmanned vehicles all use GPS signals to determine position and guide flight paths. In GPS-denied or electronically contested environments, some autonomous systems switch to machine vision or inertial navigation for terminal guidance.
How did Starlink become involved in the Ukraine conflict?
SpaceX provided Starlink satellite internet terminals to Ukraine following Russia’s full-scale invasion in February 2022, enabling Ukrainian forces to maintain communications despite attacks on ground infrastructure. By early 2025, Ukraine operated at least 47,000 Starlink terminals. Russian forces subsequently acquired terminals illicitly through smuggling networks and used them to extend the range and accuracy of drone strikes, a practice documented through at least late 2025 despite Pentagon and SpaceX efforts to disable unauthorized access.
What are the Gospel and Lavender AI systems used by the Israeli military?
The Gospel is an AI system developed by Unit 8200 of the Israeli Intelligence Corps that automatically reviews surveillance data to recommend buildings and structures as potential strike targets. Lavender is a companion AI system that assigns individual Palestinian men a score linked to suspected affiliation with Hamas or Palestinian Islamic Jihad, reportedly flagging up to 37,000 individuals for possible targeting. Israel describes both as decision-support tools that assist human analysts rather than as autonomous weapons.
What is Anduril’s Lattice platform?
Lattice is an AI integration platform developed by Anduril Industries that connects sensors, weapons, drones, and surveillance systems into a networked decision environment capable of real-time autonomous coordination. It has been selected for use by the US Space Force for surveillance networks, the US Army for counter-drone operations under the Integrated Battle Command System Maneuver program, and the Pentagon’s Replicator initiative for managing swarms of autonomous systems across multiple domains.
What has the United Nations done to regulate autonomous weapons?
Discussions at the Convention on Certain Conventional Weapons in Geneva have been ongoing since May 2014 without producing a binding instrument. In December 2023, the UN General Assembly adopted Resolution 78/241, the first General Assembly resolution specifically addressing lethal autonomous weapons, which passed 152 to 4 with 11 abstentions. The resolution called for member states to report their views and tasked the Secretary-General with preparing a report, but it did not mandate negotiations toward a binding treaty.
What is the Pentagon’s Replicator initiative?
Replicator is a US Department of Defense initiative announced in 2023 by then-Deputy Secretary Kathleen Hicks, designed to field low-cost, attritable autonomous systems, including loitering munitions and counter-drone platforms, at scale as a counterweight to China’s mass production of military systems. The initiative requested approximately $1 billion across fiscal years 2024 and 2025. A Congressional Research Service review noted that only hundreds rather than the projected thousands of systems were delivered by the August 2025 target date, and oversight was subsequently transferred to a new Defense Autonomous Warfare Group.
How does the Campaign to Stop Killer Robots differ from the ICRC’s position?
The Campaign to Stop Killer Robots, launched in 2013 as a coalition of NGOs including Human Rights Watch, calls for a comprehensive international treaty prohibiting autonomous weapons that select and engage human targets without meaningful human control. The International Committee of the Red Cross advocates a narrower but still binding set of prohibitions specifically targeting autonomous systems designed to apply force directly against humans and systems with highly unpredictable behavior, while supporting regulations rather than a ban on other autonomous military technologies.
Why are commercial satellite companies relevant to autonomous weapons governance?
Commercial satellite operators supply data services, including GPS-comparable navigation, broadband SATCOM, and daily Earth observation imagery, that form the infrastructure layer of autonomous weapons operations. Planet Labs, Maxar Technologies, ICEYE, BlackSky, and SpaceX’s Starlink have all supplied data or connectivity that has fed military targeting and autonomous systems in active conflicts. Because these services are governed by commercial contracts and export control regulations not designed with autonomous AI targeting in mind, significant regulatory gaps exist between what these companies provide and what international humanitarian law frameworks address.