
Orbital Data Centers: Real Business Opportunity or Speculative Fantasy?

Key Takeaways

  • Small in-orbit compute services are real, but hyperscale orbital cloud plans are not.
  • The best current use case is edge processing for spacecraft, not replacing Earth’s cloud.
  • Power in orbit is attractive, yet launch, heat rejection, servicing, and debris still dominate.

The idea is being sold in two very different sizes

The phrase “orbital data center” is being used for at least two different businesses, and the confusion between them is doing much of the selling. One version is small, narrow, and already real: computers in orbit that process data for satellites, space stations, remote sensing payloads, and government users before information is sent to the ground. The other version is far larger and far noisier: solar-powered constellations in low Earth orbit that would one day compete with or supplement terrestrial hyperscale data centers for artificial intelligence workloads. Those are not the same market. They do not require the same capital, the same launch rate, the same thermal design, or the same customer base. Treating them as one industry makes the whole segment look more mature than it is.

That distinction leads to the clearest judgment available in March 2026. Orbital data centers are a real business opportunity in the narrow sense of on-orbit edge computing, secure storage, relay processing, and mission-specific services for spacecraft. They are still speculative fantasy in the larger sense that matters to most headlines: giant orbital AI clusters taking meaningful load away from terrestrial cloud infrastructure any time soon. The hardware path has become believable. The business path for hyperscale orbital computing has not.

Earth’s power problem is real, which is why the idea keeps attracting money

The pressure driving this market is not imaginary. The International Energy Agency says data centers used around 415 terawatt-hours of electricity in 2024, or about 1.5% of world electricity consumption, and projects that figure to rise to roughly 945 terawatt-hours by 2030. The IEA also says global investment in data centers nearly doubled since 2022 and reached half a trillion dollars in 2024. In the United States, the IEA expects data centers to account for nearly half of electricity demand growth through 2030. Grid queues, transformer lead times, local bottlenecks, cooling loads, water use, and permitting delays have turned computing capacity into an energy and infrastructure problem as much as an information-technology problem. That is why an idea that once sounded like science fiction now receives serious boardroom attention.
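A quick sketch makes those figures concrete. The only assumption beyond the quoted IEA numbers is a world consumption base of roughly 27,000 terawatt-hours, which is what the 1.5% share implies:

```python
# Arithmetic implied by the IEA figures quoted above: 415 TWh in 2024,
# ~945 TWh projected for 2030. World demand of ~27,000 TWh is assumed,
# consistent with the quoted 1.5% share.
twh_2024, twh_2030 = 415.0, 945.0
years = 2030 - 2024

cagr = (twh_2030 / twh_2024) ** (1 / years) - 1
print(f"Implied growth rate: {cagr:.1%} per year")          # ~14.7%
print(f"2024 share of world demand: {twh_2024 / 27_000:.1%}")  # ~1.5%
```

A sector compounding at nearly 15 percent a year, inside grids that grow far more slowly, is exactly the kind of mismatch that sends capital looking for unconventional answers.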

Space startups and large technology firms are reading those same constraints and asking whether solar power in orbit, near-continuous exposure to sunlight in selected orbits, and the absence of land and local-grid bottlenecks could create a new compute platform. Google Research wrote in November 2025 that in the right orbit a solar panel can be up to eight times more productive than on Earth and produce power nearly continuously. Starcloud markets similar advantages, arguing that orbital systems could grow to gigawatt scale while sidestepping terrestrial permitting constraints. The pitch sounds strong because it starts from a real strain on Earth. That part of the story should not be dismissed.

Yet a real terrestrial problem does not automatically make the proposed off-planet answer sensible. Plenty of infrastructure booms begin with a genuine bottleneck and still end with misplaced capital. Orbital data centers sit in that uncomfortable category. They are not solving an invented issue. They are proposing a response whose scale, timing, and customer economics remain unsettled. That is a much less dramatic claim than calling the whole sector fake, but it is much closer to the evidence.

Space computing did not begin with the AI boom

This market did not appear from nowhere in 2025. The more modest form of orbital computing has been under development for years. Hewlett Packard Enterprise and NASA flew the first Spaceborne Computer to the International Space Station in 2017 to test whether commercial off-the-shelf systems could survive in orbit. HPE’s timeline says the second generation launched in 2021 and completed 24 research experiments in 2022, and that a third mission went to the ISS on January 30, 2024. HPE presents the programme not as a replacement for terrestrial cloud, but as a way to process data and run AI workloads at the edge in the harsh environment of space. That is a narrower, less glamorous, and much more believable proposition.

AWS Snowcone followed a similar path. Amazon, NASA, and Axiom Space worked together to validate a Snowcone unit for flight to the ISS in 2022. The AWS account of that effort is revealing because it reads like aerospace engineering rather than cloud-marketing fantasy. The device needed a detailed thermal analysis, a safety review, vibration testing, and extra Kapton protection. A later AWS public-sector post said Japan Manned Space Systems used the Snowcone on the ISS to automate transmission of large data volumes back to Earth. That is orbital compute in practice. It is not an orbital data center in the sense that headlines suggest, though it is clearly part of the same technical lineage.

Europe produced its own early evidence. D-Orbit and Unibap flew the SpaceCloud platform on the Wild Ride mission in June 2021. Unibap says the system validated cloud applications and AI workloads in orbit, while D-Orbit later described its in-orbit cloud platform as providing distributed high-performance analytics, computing, and storage capabilities in space. This is an important part of the story because it shows that the current orbital-data-center wave did not emerge only after large language models made compute fashionable. The technology base had already been moving, just at a smaller scale and with less theatrical language.

That earlier history matters because it improves the skeptical reading. A decade of technical steps has made one thing clear: computing in space is feasible. A decade of technical steps has not shown that placing vast, terrestrial-scale cloud infrastructure in orbit will soon be the lowest-cost way to handle AI demand on Earth. The record supports one conclusion and does not yet support the other.

What exists right now is closer to edge compute than to cloud replacement

The most concrete current programmes look like edge computing in orbit. Axiom Space says it deployed Data Center Unit-1 to the ISS in 2025 and launched its first two dedicated orbital data center nodes to low Earth orbit on January 11, 2026. Those nodes flew with the first tranche of Kepler Communications’ optical-relay constellation. Axiom’s own language centers on processing, storage, cybersecurity, data fusion, and AI/ML closer to the source rather than shipping all raw data to Earth first. It is selling performance in orbit for space missions. That is a real service, and it makes economic sense for some customers now.

Kepler’s materials say much the same thing from the network side. The company says its first operational optical-relay tranche consists of ten satellites launched on January 11, 2026, each carrying optical terminals, multi-GPU compute modules, and terabytes of storage. By March 16, 2026, Kepler said all ten satellites were interconnected through real-time optical communications and operating as a space-based edge-compute fabric powered by NVIDIA acceleration. Kepler’s own description is not that of a giant orbital hyperscaler. It is a cloud-like layer in orbit for mission operations, hosted payloads, secure routing, and fast in-space processing. The distinction is important because it shows where the technology is already useful.

OroraTech gives the best example of why this smaller market exists. In January 2026, OroraTech and Kepler announced plans for what they called the world’s first thermal livestream of Earth, using Kepler’s optical-relay and on-orbit-compute capabilities to support real-time thermal Earth observation and wildfire intelligence. This is not about moving a public cloud into orbit. It is about filtering, routing, and accelerating mission data that is perishable and operationally valuable. A system that can make that job faster without waiting for downlink windows has a customer logic. That is why the niche is real.

The present business case is strongest where downlink is the bottleneck

A great deal of orbital data is valuable only if it is handled quickly. That applies to wildfire monitoring, missile warning, time-sensitive reconnaissance, disaster response, maritime domain awareness, and certain industrial or weather applications. Sending every raw file to the ground before any filtering, compression, prioritization, or inference takes place is often wasteful. Kepler says its network allows data to be processed and analyzed directly in space rather than waiting for downlink. Axiom Space makes the same argument, framing its nodes as a way to detect features, compress files, run AI models, and keep systems operating even if a link to Earth drops. This is the first market worth taking seriously. It is not hypothetical. It lines up with actual mission friction.
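A rough sketch shows the shape of that friction. Every number below is hypothetical, chosen only to illustrate the typical mismatch between what a sensor collects and what limited ground-station passes can carry; none of it comes from Kepler or Axiom:

```python
# Illustrative (hypothetical) figures for an imaging satellite, showing
# why downlink is often the bottleneck that on-orbit processing relieves.
raw_gbit_per_orbit = 800      # raw sensor data collected per orbit (assumed)
orbits_per_day = 15           # typical for low Earth orbit
passes_per_day = 6            # usable ground-station contacts (assumed)
pass_seconds = 600            # ~10 minutes of contact per pass (assumed)
link_mbps = 1200              # downlink rate during contact (assumed)

raw_per_day = raw_gbit_per_orbit * orbits_per_day                     # Gbit
downlink_per_day = passes_per_day * pass_seconds * link_mbps / 1000   # Gbit

print(f"Raw data generated:  {raw_per_day:,.0f} Gbit/day")     # 12,000
print(f"Downlink capacity:   {downlink_per_day:,.0f} Gbit/day")  # 4,320
print(f"Shortfall factor:    {raw_per_day / downlink_per_day:.1f}x")

# On-orbit triage that keeps only detections and compressed tiles can
# shrink the required downlink dramatically; 2% survival is assumed here.
kept_fraction = 0.02
print(f"After on-orbit triage: {raw_per_day * kept_fraction:,.0f} Gbit/day")
```

Under these assumptions the satellite collects nearly three times what it can ever send down, and on-orbit inference turns an impossible downlink problem into a trivial one. That is the customer logic of the current niche in one calculation.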

The defence and sovereign-data angle strengthens the case further. Kepler says its system is compatible with Space Development Agency optical standards and has already been used in work tied to Defence Research and Development Canada and the Canadian Space Agency. In October 2025, Kepler announced a multi-year contract from DRDC to demonstrate real-time data sharing and connectivity for continental defence, including the Canadian Arctic, with processing and compression of Earth observation imagery on orbit. In December 2025, it announced a CSA contract for a concept study related to Canada’s next-generation Earth observation system and sovereign access to satellite data. That is another real demand signal. Governments will pay for faster, more controlled, more resilient data handling in space before they pay to relocate generic enterprise compute there.

That customer profile matters because it helps separate a working early market from a speculative late one. A defence customer, an Earth-observation operator, or a satellite fleet manager does not need an orbital cluster to beat terrestrial cloud prices across the board. That customer only needs the orbital node to solve a specific timing, security, or bandwidth problem that terrestrial routing handles badly. Once the threshold is defined that way, the economics get much less absurd.

The new wave is being driven by AI rhetoric, not by proven cloud demand

The segment became much louder in late 2025 and early 2026 because AI turned compute itself into a prestige asset. Starcloud says its Starcloud-1 satellite launched in November 2025 carrying the first NVIDIA H100 in space. It says it ran a version of Gemini in orbit and trained a small language model there in December 2025, and it has since raised major funding and presented much larger ambitions. That is a remarkable fundraising story. It is not proof that orbital hyperscale compute has become a mature market.

Google Research then made the category look even more serious by publishing Project Suncatcher, a moonshot to explore constellations of solar-powered satellites carrying Tensor Processing Units connected by optical links. Google’s post is careful. It describes early research, lays out engineering hurdles, and says the next milestone is a learning mission with Planet slated for early 2027. It does not claim the market already exists at terrestrial scale. That caution is one reason the post is useful. It shows that the most technically serious version of this idea still treats it as an exploratory systems programme, not as a near-term cloud replacement.

SpaceX pushed the rhetoric further than anyone else. On February 4, 2026, the Federal Communications Commission released a public notice saying the Space Bureau had accepted for filing SpaceX’s application for a new non-geostationary “Orbital Data Center” system of up to one million satellites. The FCC notice says the proposed system would operate from 500 kilometers to 2,000 kilometers altitude and use optical intersatellite links. That filing made headlines because of its scale. It also made the problem easier to see. A million satellites is not a normal business expansion. It is a civilizational-scale wager on launch cost, spacecraft production, orbital traffic management, regulator tolerance, and future demand. That does not place the idea inside an ordinary market category. It places it near the edge of speculative infrastructure doctrine.

Calling all of this a data center market hides the useful truth

The same phrase now covers an ISS prototype, a relay-network edge node, a hosted payload service, a 2027 learning mission, an 88,000-satellite startup ambition, and a one-million-satellite FCC filing. That should make anyone cautious. A data center on Earth usually implies stable power, dense compute, repeatable maintenance, clear network economics, and customers who know what service they are buying. In orbit the term is being stretched from “compute hardware on a spacecraft” all the way to “future solar-powered AI cloud for Earth.” Those are not just different sizes of the same product. They are different businesses with different risk structures.

This is why the category keeps looking more advanced from far away than it does up close. An investor can point to the Axiom nodes, the Kepler network, the Starcloud demo, Google’s research, and SpaceX’s filing and describe one giant wave. A more careful read shows a ladder of escalating ambition. The lower rungs are real, useful, and already earning their keep as technical demonstrations or mission tools. The upper rungs still depend on assumptions that have not yet been tested in the open market. The label “orbital data centers” flattens that ladder into a single story, which is convenient for fundraising and misleading for analysis.

Power is an advantage in orbit, but only in a specific sense

The strongest argument for orbital data centers is still power. In a properly chosen sun-synchronous orbit or another geometry with prolonged sunlight, solar arrays can operate at a very high duty cycle. Google says a solar panel in the right orbit can be up to eight times more productive than on Earth and nearly continuous in output. That is an appealing prospect at a moment when grid connection is becoming a commercial choke point for terrestrial AI facilities. Launch a power-hungry workload into constant sun, remove it from local politics and land scarcity, and the story begins to sound simple.
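The “up to eight times” claim is easy to sanity-check from first principles. The sketch below assumes a terrestrial capacity factor of 17 percent for illustration; it is a plausibility check, not Google’s published derivation:

```python
# Rough decomposition of the "up to eight times" solar claim cited above.
# The terrestrial baseline figures are assumptions for illustration.
orbit_irradiance = 1361        # W/m^2, solar constant above the atmosphere
ground_peak = 1000             # W/m^2, typical clear-sky peak at the surface

orbit_duty_cycle = 0.99        # near-continuous sun in a dawn-dusk orbit
ground_capacity_factor = 0.17  # night, weather, and sun-angle losses combined

orbit_yield = orbit_irradiance * orbit_duty_cycle       # average W/m^2
ground_yield = ground_peak * ground_capacity_factor     # average W/m^2

print(f"Average in orbit:   {orbit_yield:.0f} W/m^2")
print(f"Average on ground:  {ground_yield:.0f} W/m^2")
print(f"Productivity ratio: {orbit_yield / ground_yield:.1f}x")  # ~7.9x
```

Most of the advantage comes from the duty cycle, not the extra irradiance, which is why the claim depends so heavily on choosing the right orbit.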

Yet power generation is only one half of the data-center problem. A usable server in orbit still needs structure, radiation management, networking, storage, autonomy, and a way to reject heat. It also needs launch mass devoted to solar arrays, radiators, power conditioning, attitude control, optical terminals, and propellant or deorbit systems. The “free solar” narrative can slide into a sleight of hand, because it highlights the cost of electricity on Earth while underplaying the mass and system-engineering penalties required to turn sunlight in orbit into dependable compute. The Sun helps. It does not erase aerospace math.

That is why current systems look so different from the giant marketing visions. Kepler’s first tranche uses ten satellites weighing about 300 kilograms each. Axiom’s current nodes are small enough to fly as hosted payloads on that network. The technology is climbing through modest, serviceable steps because each added watt in orbit carries system costs that terrestrial operators do not have to think about in the same way. The people building this hardware know that. Most public discussions do not.
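A crude comparison makes that gap concrete. Both power figures below are assumptions chosen for illustration, not published specifications for any of the spacecraft named above:

```python
# Why "compute-rich by satellite standards, sparse by data-center
# standards": illustrative power budgets only. The satellite figure is
# an assumed orbit-average for a ~300 kg class spacecraft.
smallsat_power_kw = 1.5    # assumed available payload power
ai_rack_power_kw = 100     # a modern liquid-cooled GPU rack (assumed)

sats_per_rack = ai_rack_power_kw / smallsat_power_kw
print(f"Satellites needed to match one terrestrial AI rack: ~{sats_per_rack:.0f}")
# ~67 satellites per rack, before thermal and radiation derating
```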

Heat rejection is where the cleaner story gets messy

Cooling is another place where the sales pitch sounds easier than the engineering. Orbital-data-center advocates often point out that space is cold and that heat can be radiated away. The second claim is true. The first is the kind of phrase that hides trouble. In vacuum, heat does not drift away through air. It must be rejected through thermal radiation, which means large radiators, careful orientation, and design choices that often fight against compactness and power density. Google Research explicitly lists thermal management as one of the major engineering challenges that still remain for space-based AI infrastructure. That is a far more grounded statement than the casual suggestion that space solves cooling by default.
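The underlying physics is easy to sketch. The Stefan-Boltzmann estimate below ignores absorbed solar and Earth flux and assumes an idealized two-sided panel, so it flatters the real engineering, but it shows the order of magnitude involved:

```python
# Radiator area needed to reject heat purely by radiation, via the
# Stefan-Boltzmann law. Idealized two-sided panel, no absorbed solar or
# Earth flux: illustrative physics, not any vendor's thermal design.
SIGMA = 5.670e-8        # W m^-2 K^-4, Stefan-Boltzmann constant
emissivity = 0.90       # typical radiator coating
t_radiator_k = 300.0    # ~27 C rejection temperature

flux_per_side = emissivity * SIGMA * t_radiator_k ** 4   # ~413 W/m^2
flux_two_sided = 2 * flux_per_side                       # ~827 W/m^2

for load_kw in (10, 100, 1000):
    area = load_kw * 1000 / flux_two_sided
    print(f"{load_kw:>5} kW load -> ~{area:,.0f} m^2 of radiator")
```

Even this optimistic estimate implies more than a thousand square meters of radiator per megawatt. That single number is a useful filter whenever gigawatt language appears in a pitch.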

Starcloud markets radiative cooling as a reason orbital systems can scale cheaply. That may prove true in some architectures, especially if compute, solar collection, and radiator geometry are tightly integrated from the start. Even so, the present record is not one of dense, high-power orbital server farms operating at terrestrial scale. It is one of early spacecraft proving that advanced chips can run in orbit at all. That is progress. It is not evidence that thermal design has already been solved for the kind of giant clusters implied by gigawatt language.

The difference is not academic. A spacecraft can be compute-rich by satellite standards and still be sparse by data-center standards. That is a sensible trade in the current market. It becomes a problem only when the same public narrative implies that thousands or millions of such spacecraft will soon deliver an economically normal cloud-computing platform from orbit. The thermal challenge is one reason that leap still looks far away.

Radiation and servicing are the less glamorous reasons to stay cautious

Any serious discussion of orbital data centers has to leave the investor deck and enter the maintenance room. Google Research devoted part of Project Suncatcher to radiation tolerance testing, including proton-beam work on Trillium TPUs. HPE and NASA built the entire Spaceborne programme around the question of whether commercial systems could survive in orbit. AWS, NASA, and Axiom put a small Snowcone unit through months of validation and thermal review before launch. All of that effort says the same thing. Space hardware does not receive a free pass just because the software market wants more compute.

Servicing makes the picture harder still. A failed server in a terrestrial data center gets swapped by a technician. A failed orbital compute node either keeps running with degraded capability, receives a software workaround, gets replaced on a later mission, or becomes another object whose value depends on launch cadence and replenishment economics. Axiom Space says future maintenance and upgrades can happen through resupply missions or replacement of modules for free-flying nodes. That is practical language, yet it also reveals the gap between this business and the one people imagine when they hear “cloud.” Real cloud infrastructure depends on serviceability, replacement velocity, spare parts, and human access. Orbital infrastructure depends on launches. Those are very different supply chains.

This is also where the big plans begin to wobble. The harder the hardware is to service, the more the business leans toward mass production plus replacement. That increases the importance of launch cost and orbital disposal. It also makes giant constellations more dependent on industrial tempo than on pure compute economics. The market may still reach that point someday. It has not reached it yet.

Launch economics still do not close for hyperscale use

The most direct current rebuttal came from inside the cloud industry. AWS leadership has publicly argued that orbital data centers remain far from economic reality, pointing to launch limits and payload cost. That assessment should carry weight because AWS is not a casual observer of data-center economics. A company that operates one of the world’s largest cloud platforms has every reason to notice a credible competitor. Its leadership did not sound alarmed. It sounded unconvinced.

The pro-orbital camp does have models on its side, just not short-term proof. Google’s research says that, extrapolating historical launch-price trends, pricing might fall below $200 per kilogram by the mid-2030s, and that at that point launching and operating a space-based data center could become roughly comparable, on a per-kilowatt-per-year basis, to the reported energy cost of an equivalent terrestrial facility. Wider deployment still depends on technical success and launch-cost compression. That is the most defensible version of the bullish case. It is explicitly forward-looking.
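That forward-looking case can be restated as a back-of-envelope model. Every input below is an assumption chosen for illustration, not a figure from Google’s paper, but it shows why $200 per kilogram is roughly the threshold where the comparison starts to work:

```python
# Back-of-envelope version of the launch-economics comparison described
# above. All inputs are assumptions for illustration.
launch_cost_per_kg = 200.0       # the mid-2030s target cited above
specific_mass_kg_per_kw = 10.0   # arrays + radiators + compute + bus (assumed)
lifetime_years = 5.0             # assumed spacecraft service life

launch_per_kw_year = launch_cost_per_kg * specific_mass_kg_per_kw / lifetime_years

terrestrial_usd_per_kwh = 0.07   # assumed industrial electricity price
terrestrial_per_kw_year = terrestrial_usd_per_kwh * 8760  # hours per year

print(f"Orbital launch cost:  ${launch_per_kw_year:,.0f} per kW-year")   # ~$400
print(f"Terrestrial energy:   ${terrestrial_per_kw_year:,.0f} per kW-year")  # ~$613
```

Under these assumptions the two columns land in the same range, which is the claim. Change the specific mass or the spacecraft lifetime and the comparison moves quickly, which is the risk.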

That is also why the giant near-term rhetoric feels overstretched. Even the favorable models that serious researchers are willing to publish point toward the 2030s for larger systems. The present market behaves as if that long runway did not exist. Valuations, filings, and public talk about gigawatt-scale clusters create an impression of imminent disruption. The underlying schedules being admitted by researchers and cautious executives do not read that way.

The hardest part to call is timing

The hardest part to call is not the physics. The physics do not block orbital computing. The hardest part to call is when a useful in-orbit service becomes a large, repeatable commercial market for customers who do not already operate spacecraft. That line still looks distant. A satellite operator buying edge compute for an imaging mission is easy to picture. A mainstream enterprise shifting material workloads from terrestrial cloud contracts to an orbital AI cluster is much harder to picture with confidence today.

That gap between technical plausibility and customer migration is where most speculative booms get into trouble. Demos can be real. Pilot customers can be real. Grant-backed or defence-backed use cases can be real. None of those automatically adds up to the kind of market that can justify language about moving hyperscale cloud off-planet in the near term. The sector may get there. The buyer behavior that would prove it has not yet shown itself in public.

Orbital congestion turns giant compute constellations into a public problem

Once plans move from tens of satellites to tens of thousands or a million, the issue stops being just commercial. It becomes a shared-orbit question. The European Space Agency says low Earth orbit is already crowded enough that evasive action is unavoidable in certain bands and that, without changes in behavior, the risk will grow beyond what the environment can sustain. ESA’s environment material says roughly a quarter of active constellation payloads are now flying below 500 kilometers and that fragmentation events keep adding debris faster than natural re-entry can clean it away. Orbital compute systems would not enter a pristine environment. They would enter a stressed one.

That makes the FCC details on the SpaceX filing especially striking. The February 4, 2026 public notice says SpaceX is seeking waivers from milestone requirements, surety-bond obligations, and other standard processes for its proposed orbital-data-center system of up to one million satellites. The filing may be a design-flexibility move rather than a literal near-term deployment plan, and giant applicants often ask for more room than they will use. Even so, the fact pattern matters. It shows that scaling orbital data centers is not just a matter of better chips and cheaper launches. It also depends on how much congestion, waiver-seeking, and regulatory latitude the public sphere is willing to accept.

This is one reason giant orbital-AI visions should be treated more skeptically than small in-space compute services. A small hosted payload or relay node mainly has to prove its usefulness to a customer. A giant compute constellation has to prove usefulness and survive policy questions about traffic, debris, re-entry, spectrum, and externalized risk. Those are not minor add-ons to the business model. They are part of the business model.

The funding boom is not proof of a settled market

Money is flowing into the category, and that matters. Starcloud has reached unicorn status, Axiom Space continues to expand its orbital-data-center roadmap, Google Research has put its reputation behind an exploratory moonshot, and SpaceX has filed its giant plan. This is not fringe talk anymore. It is backed by serious actors.

Yet investor attention should not be mistaken for market resolution. Capital often runs toward a genuine bottleneck and then overshoots badly on the first proposed cure. The history of communications constellations, launch services, satellite imaging, and broadband is full of that pattern. Orbital data centers now have the ingredients investors love: a real macro bottleneck, a giant addressable market, a romantic technological leap, and a handful of demos that make the impossible sound suddenly practical. Those ingredients are enough to create billion-dollar narratives before there is anything like a normal operating business underneath them.

The current signal from financing is best read as conviction that the category deserves to be explored, not confirmation that it will become a giant independent market soon. That sounds less exciting, though it is probably closer to the truth.

The strongest customers are in space already

A useful way to test the market is to ask who the first durable customers are likely to be. The answer is not retail consumers. It is not ordinary enterprise software buyers. It is not most public-cloud users. The most believable first customers are already operating in orbit or buying data from orbit: Earth-observation companies, defence agencies, civil space agencies, communications networks, in-space manufacturing platforms, and operators that need autonomy during interrupted links. That is why Kepler talks about hosted payloads, optical relay, secure routing, and always-available coverage in low Earth orbit. That is why Axiom Space talks about data fusion, mission data, cybersecurity, and sovereign use cases. That is why OroraTech is using the architecture for wildfire intelligence.

This customer profile is not a weakness. It is the real market hiding inside the hype. An orbital data center that serves satellites may become a valuable product line without ever becoming a broad replacement for terrestrial cloud. The pressure to sell it as more than that comes from valuation logic, not from present demand. If the sector were described more carefully, it might look smaller but healthier. Instead, it is being asked to carry both a genuine niche market and a gigantic future-cloud dream at the same time.

The label becomes more believable again when the ambition gets smaller

The case improves once the ambition is reduced. An orbital node that preprocesses synthetic aperture radar imagery, flags anomalies, trims bandwidth, or keeps a mission running during a ground outage is easy to justify. A station-based compute platform that lets researchers run AI or storage jobs close to where experiments occur is easy to justify. A sovereign node that gives a government more control over orbital data flows is easy to justify. A relay constellation with onboard compute that supports optical links and hosted payloads is easy to justify. Every one of those businesses already has visible technical grounding.

By contrast, a giant orbital cluster intended to underprice or outscale terrestrial AI infrastructure faces a much harsher standard. It has to beat or complement power purchase agreements, new nuclear deals, grid expansion, terrestrial chip efficiency gains, liquid cooling, modular microgrids, colocated generation, and ordinary improvements in server utilization. The off-planet option is not competing against a frozen Earth. It is competing against an industry that is also changing quickly. That is one reason the fantasy can survive so easily in public discussion. It gets framed as a race against today’s terrestrial constraints, not against the terrestrial solutions already under construction.

Real opportunity or speculative fantasy?

The right answer is both, but not in equal measure. Orbital data centers are a real business opportunity when the term means compute and storage in orbit for spacecraft, sensing networks, stations, and mission operators who benefit from processing data before it reaches Earth. That market has working ancestry, current prototypes, current contracts, and a clear customer problem. HPE, AWS, Axiom Space, Kepler, D-Orbit, and Unibap have already shown the category’s lower rungs are real.

It is speculative fantasy when the same term is used to imply that orbit is close to becoming a routine home for vast AI training clusters serving mainstream terrestrial demand. Not because the idea violates physics. Not because smart people are not working on it. Not because funding is absent. It remains speculative because the launch economics are not yet closed, thermal design is not yet proven at that scale, servicing is still ugly, regulation is unsettled, debris costs are socialized, and the customer migration from terrestrial cloud to orbital compute has not yet appeared in public in a serious way. The most defensible 2026 view is that orbit will expand computing at the edge of space long before it relocates a meaningful share of Earth’s cloud into the sky.

Summary

The market for orbital data centers should be divided before it is valued. The first segment is already here: in-orbit compute and storage for satellites, stations, sovereign users, and time-sensitive data services. It is a modest market with genuine utility, visible technical progress, and a customer base that already lives inside the space economy. That segment deserves to be taken seriously.

The second segment is the one dominating the headlines: giant orbital AI clouds that promise to escape grid limits, land scarcity, and cooling strain on Earth. That segment may become real over a longer arc. As of March 2026, it still belongs closer to speculative infrastructure than to an operating market. The danger is not that no one will build it. The danger is that investors and policymakers will talk about both segments as though they were already the same business. They are not, and the gap between them is where most of the hype is hiding.

Appendix: Top 10 Questions Answered in This Article

What is an orbital data center?

An orbital data center is computing and storage infrastructure placed in space to process or hold data before it is sent to Earth or passed to other spacecraft. In practice, the term now covers everything from small in-orbit edge-compute nodes to proposed giant AI constellations.

Are orbital data centers already real?

Yes, in a limited sense. Spaceborne computers on the ISS, AWS Snowcone experiments, Axiom prototypes, Kepler compute nodes, and earlier D-Orbit and Unibap missions show that useful compute workloads can already run in orbit.

What is the most credible current business use?

The strongest present use case is edge processing for space missions. That includes filtering imagery, compressing data, running inference close to sensors, and keeping missions functional when links to Earth are slow or interrupted.

Why is the idea suddenly attracting so much interest?

AI is driving a surge in power demand, capital spending, and grid stress for terrestrial data centers. That has made solar-powered compute in orbit look more attractive to investors and large technology firms, even though the business case is still unsettled.

What makes orbital power attractive?

In selected orbits, solar panels can operate with very high duty cycles and near-continuous sunlight. That offers a tempting alternative to grid bottlenecks, land constraints, and local permitting problems that affect large terrestrial data centers.

Why is cooling still a problem in space?

Heat can only be rejected through radiation in vacuum, which demands radiator area, careful design, and lower effective power density than many people imagine. Space does not remove the cooling problem. It changes the way the problem must be solved.

What did Google’s Project Suncatcher actually prove?

It proved that a serious research team sees the concept as physically and economically worth studying. It did not prove that hyperscale orbital AI infrastructure is commercially ready. Google’s own materials still describe major engineering hurdles and an early prototype timeline.

What is the strongest argument against giant orbital AI clouds right now?

The biggest argument is still economics. Launch mass, servicing, thermal hardware, regulation, debris exposure, and customer adoption all remain unresolved at scale, and cloud-industry leadership has said the concept is still far from economic reality.

Could orbital data centers still become a major market later?

Yes. If launch costs fall sharply, if hardware survives long enough in orbit, if thermal systems scale well, and if customers accept the service, the market could expand a great deal in the 2030s. That path is possible, but it has not yet been proven.

What is the best overall verdict in 2026?

Small orbital compute is a real business opportunity. Giant off-planet cloud replacement for mainstream AI demand is still speculative fantasy. Treating both as the same market is the main source of confusion.
