
In the vast, silent expanse of low Earth orbit, some 250 miles above the planet, the International Space Station (ISS) glides at over 17,000 miles per hour. It is a beacon of human ingenuity and our most complex technological achievement in space. Inside this orbiting laboratory, amid the tangle of life-support systems and science racks, a quiet revolution has been taking place. It’s not a new rocket engine or a telescope hunting for signs of life. It’s a series of experiments that look, on the surface, like unassuming server boxes.
This is the Hewlett Packard Enterprise (HPE) Spaceborne Computer experiment, a multi-year project that is fundamentally rewriting the rules for how humanity will compute, process, and analyze data as it ventures back to the Moon and on to Mars.
For decades, space exploration has operated on a simple, one-way data model: a satellite or rover gathers enormous amounts of raw information, and then, at great effort and expense, it beams that data back to Earth. On the ground, massive supercomputers sift through this data, searching for insight. The HPE experiment, in partnership with NASA, asked a disruptive question: What if, instead of bringing the data to the supercomputer, we bring the supercomputer to the data?
The answer has proven to be a pivotal development, unlocking capabilities for real-time artificial intelligence, on-orbit data analysis, and a new level of autonomy that will be essential for the next generation of space exploration.
The Problem: Data, Delay, and Danger
To understand why sending a high-performance computer to space is so significant, one must first appreciate the formidable challenges of working beyond Earth’s protective atmosphere. Space is not just empty; it’s an environment of extremes, actively hostile to the delicate electronics that power our modern world.
The Great Data Bottleneck
Modern science is built on data. A single Earth observation satellite with a hyperspectral imager can generate terabytes of information in a single day – far more than it can possibly send home. Astronauts on the ISS conduct complex experiments, from DNA sequencing to advanced medical imaging, all of which produce massive files.
The pipeline to get this information back to Earth is incredibly small. Communications rely on a limited number of ground stations and a network of relay satellites, like NASA’s Tracking and Data Relay Satellite System (TDRS). Bandwidth is at a premium, and the ISS must share its connection with countless other missions.
This creates a frustrating lag. An astronaut might perform an ultrasound, but the analysis by a team of radiologists on Earth could be days away. A scientist might wait months for the complete dataset from their on-orbit experiment to be downloaded.
This latency problem becomes an existential one as we look toward deep space. The one-way light-time delay to Mars ranges from about four to 24 minutes, which can mean a round trip of 40 minutes or more for a simple “stop” command. Tele-operating a rover in real time is impossible. A future crewed mission to Mars would be profoundly isolated, unable to rely on Earth for immediate help. If a medical emergency occurs, the crew can’t simply send a high-resolution MRI scan and wait for a diagnosis. They need the diagnostic power with them.
The Hostile Environment
The second, and more immediate, barrier is the environment itself. The components inside a laptop or data center server are designed to operate within a very narrow, very protected range of conditions. Space offers none of that protection.
The primary enemy is radiation. Earth’s magnetic field and atmosphere shield us from a constant bombardment of high-energy particles. In low Earth orbit, and even more so in deep space, electronics are exposed to two main sources of this radiation: galactic cosmic rays (GCRs), which are high-energy particles from distant supernovae, and solar particle events (SPEs) from our own Sun.
When one of these particles strikes a microchip, it can cause a single-event effect (SEE). This can manifest as a “bit flip,” where a 0 in the computer’s memory becomes a 1, or vice versa. This might be harmless, or it could corrupt a piece of data, crash the software, or change a critical command. Worse, a particle can trigger a “latch-up,” a short-circuit that can permanently fry the component. The ISS is particularly vulnerable when it passes through the South Atlantic Anomaly, a region where Earth’s magnetic field is weaker and a higher concentration of charged particles dips closer to the planet.
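To make the bit-flip failure mode concrete, here is a toy Python illustration. It is purely illustrative, implies no real hardware interfaces, and simply shows how a single particle-induced flip changes a stored byte and how a basic parity check, the same idea behind the error-correcting memory discussed later, can at least detect that something went wrong:

```python
# Toy illustration of a single-event upset; purely illustrative, not flight code.
value = 0b01000001                      # a byte held in memory
parity = bin(value).count("1") % 2      # even-parity bit stored alongside it

corrupted = value ^ (1 << 5)            # a particle strike flips bit 5 (0 becomes 1)

# On the next read the parity no longer matches, so the corruption is detectable.
# Error-correcting codes take this a step further and can repair single-bit flips.
assert bin(corrupted).count("1") % 2 != parity
print(f"stored {value:#010b}, read back {corrupted:#010b}: corruption detected")
```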
Beyond radiation, there are the physical stresses. First, the violent vibration and acoustic shock of a rocket launch. Then, the constant thermal cycling in orbit, swinging from extreme cold to extreme heat as the station passes in and out of sunlight. Even microgravity poses a problem, as it eliminates the “hot air rises” principle of convection, forcing all cooling to rely on fans or liquid pumps.
The Traditional Solution: Slow and Expensive
For 60 years, aerospace engineers have solved this problem with one blunt instrument: radiation hardening.
“Rad-hardened” components are built from the ground up to be resilient. They use different materials, like silicon-on-insulator, and are shielded in heavy metals like titanium or tantalum. They are designed with redundant circuits and larger transistors that are less susceptible to particle strikes.
This approach works, but it comes at a staggering cost.
- Expense: A rad-hardened processor can cost over 100 times more than its commercial equivalent.
- Power: These systems are power-hungry, a critical flaw on a spacecraft where every watt is generated by solar panels.
- Performance: The design and manufacturing process is so long – often taking a decade – that by the time a rad-hardened computer flies, its processing power is generations behind the curve. The primary computer on the Perseverance rover, which landed on Mars in 2021, is based on a PowerPC 750 processor. It’s a reliable chip, famous for powering the 1998 Apple iMac G3.
This “performance gap” is the central problem. NASA wants to run 21st-century artificial intelligence models, but its space-rated hardware has the processing power of a 1990s desktop. It’s impossible to explore the solar system efficiently if we’re forced to use technology from a museum.
Spaceborne Computer-1: The Hypothesis
HPE and NASA proposed a radical new idea. What if we stop trying to build impenetrable hardware? What if, instead, we use the one thing that has kept pace with Moore’s Law: software?
The hypothesis was this: We can fly Commercial Off-The-Shelf (COTS) computer systems, the same high-performance HPE Apollo 40-class servers you’d find in an earthly data center, and protect them using intelligent software. Instead of a physical shield, it would have a digital one.
This experiment became Spaceborne Computer-1 (SBC-1).
The system was launched in August 2017 on the SpaceX CRS-12 mission. It was a standard 1U server, running Linux, with no physical modifications. It was slotted into a standard ISS experiment rack, its only concession to space being a water-cooling system that was required by the station.
The aerospace community was, to put it mildly, skeptical. The consensus was that the unmodified hardware would fail quickly; some experts publicly predicted it would last just four days.
They were wrong.
The Software Shield
The genius of SBC-1 was its “software-hardening.” The system ran constant checks on its own health, monitoring voltage, temperature, and internal error logs. Most importantly, it tracked the external radiation environment in real time.
When the system’s sensors detected that the ISS was entering a high-radiation zone, like the South Atlantic Anomaly, the software would dynamically react. It didn’t just shut down; it throttled its performance. By lowering the processor’s clock speed and power consumption, it made the microchips a “smaller target” for particle strikes, drastically reducing the chance of a bit flip or a latch-up.
When the station passed back into a safer region, the software would automatically throttle the system back up to full power. It was an elegant, adaptive shield that balanced performance with protection.
If an error did occur, the software-hardening was designed to catch it, correct it (using standard server-grade ECC memory), and, if necessary, isolate the faulty component and reboot, just as an IT administrator on Earth would.
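HPE has not published the exact control logic, but the behavior described above can be sketched in a few lines of Python. Everything below is a hypothetical illustration: the threshold, the clock speeds, and the helper functions (read_radiation_level, set_cpu_frequency, health_check_ok) are placeholders standing in for the real sensors and system interfaces, not SBC-1’s actual code.

```python
import time

# Hypothetical values and helpers; a sketch of the idea, not HPE's implementation.
HIGH_RADIATION_THRESHOLD = 100.0   # arbitrary units from an onboard monitor
NOMINAL_CLOCK_MHZ = 2400
SAFE_MODE_CLOCK_MHZ = 800

def read_radiation_level() -> float:
    """Placeholder: in reality this could come from a dosimeter or an orbital position model."""
    return 0.0

def health_check_ok() -> bool:
    """Placeholder for checking voltages, temperatures, and internal error logs."""
    return True

def set_cpu_frequency(mhz: int) -> None:
    """Placeholder for adjusting clock speed (on Linux, e.g. via the cpufreq interface)."""
    print(f"clock set to {mhz} MHz")

def monitor_loop() -> None:
    """Throttle down in high-radiation zones; throttle back up in safer regions."""
    throttled = False
    while True:
        if not health_check_ok():
            # Here the real system would correct the error, or isolate the faulty
            # component and reboot, just as an IT administrator would on Earth.
            pass
        if read_radiation_level() > HIGH_RADIATION_THRESHOLD:
            if not throttled:
                set_cpu_frequency(SAFE_MODE_CLOCK_MHZ)  # become a "smaller target" for strikes
                throttled = True
        elif throttled:
            set_cpu_frequency(NOMINAL_CLOCK_MHZ)        # back to full performance
            throttled = False
        time.sleep(10)  # poll interval is arbitrary for this sketch
```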
The Groundbreaking Results
The original mission was planned for one year. Spaceborne Computer-1 operated flawlessly for 615 days.
During its 1.7-year mission, it orbited the Earth over 8,900 times and traveled through the South Atlantic Anomaly more than 6,800 times. It endured numerous solar particle events. It didn’t just survive; it worked.
HPE and NASA ran the industry-standard High-Performance LINPACK benchmark, a test used to rank the world’s fastest supercomputers. SBC-1 successfully achieved over one teraflop – one trillion calculations per second. It was, by an enormous margin, the most powerful computer ever to have operated in space.
As a practical test, NASA’s Langley Research Center used SBC-1 to run complex code simulating a spacecraft’s entry, descent, and landing on Mars. The experiment ran the code over 2,000 times without a single bit error.
In June 2019, SBC-1 was unplugged, packed up by astronauts, and returned to Earth on the SpaceX CRS-17 mission. Analysis back in the lab confirmed the mission’s stunning success: COTS hardware, protected by smart software, could not only survive in space but thrive. The paradigm had shifted.
Spaceborne Computer-2: The Edge Computing Revolution
The success of SBC-1 was a resounding proof of concept. It answered the question, “Can it survive?” The follow-up mission, Spaceborne Computer-2 (SBC-2), was designed to answer the next, more important question: “What can it do?”
Launched in February 2021 on a Northrop Grumman resupply mission, SBC-2 represented a major leap in capability. This wasn’t just a server; it was a complete edge computing system.
“Edge computing” is the practice of processing data where it is generated, rather than sending it to a distant cloud or data center. For the ISS, the “edge” is orbit. SBC-2 was designed to be the station’s first true on-orbit data center, a service that any researcher on the station could use.
A Leap in Hardware
The SBC-2 hardware is built around the HPE Edgeline EL4000 Converged Edge system, a rugged computer designed for harsh “edge” environments on Earth, like factory floors or oil rigs. It also includes the workhorse HPE ProLiant DL360 server for general-purpose computing.
The system more than doubled the performance of its predecessor, capable of over two teraflops.
The most significant upgrade was the inclusion of powerful NVIDIA T4 GPUs (Graphics Processing Units). While CPUs are the “brains” of a computer, good at general-purpose tasks, GPUs are specialized for parallel processing – doing thousands of simple calculations at the same time. This capability is the engine behind the modern artificial intelligence and machine learning boom.
By adding GPUs, HPE and NASA were giving the ISS the ability to run sophisticated AI models in space for the first time.
A later hardware refresh, launched in January 2024, added another critical component: storage. This upgrade equipped SBC-2 with over 130 terabytes of high-speed flash storage from KIOXIA, giving it the capacity to hold the massive datasets that it would be analyzing.
Real-World Applications: AI at the Edge
SBC-2 is not just an experiment; it’s a functioning, productive science platform. In partnership with the ISS National Laboratory, it has been used by dozens of researchers to run experiments that were previously impossible.
1. Checking Astronaut Gloves with AI
One of the first and most compelling experiments was a collaboration between NASA and Microsoft. Astronauts on spacewalks rely on their gloves for life support, but the gloves are susceptible to damage from micrometeoroids or sharp edges. After every spacewalk, astronauts must meticulously photograph their gloves so engineers on Earth can scan the images for any tears or damage – a process that can take days.
Using Microsoft’s Azure cloud, an AI model was trained on Earth to recognize glove damage. That model was then uploaded to SBC-2. Now, astronauts can photograph their gloves, and the AI on SBC-2 analyzes the images in minutes. It doesn’t send the high-resolution imagery to Earth; it simply runs its analysis and, if it finds a potential problem, generates a small, annotated image highlighting the area of concern. This tiny file is all that needs to be sent to the ground, giving engineers near-instant awareness of a potential safety issue.
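A rough Python sketch of that triage pattern might look like the following. The detect_damage function stands in for the trained model and the file layout is invented for illustration; this is not Microsoft’s or NASA’s actual code, just the “send only the insight” idea in miniature.

```python
from pathlib import Path
from PIL import Image  # Pillow, used here to crop and re-encode only the region of concern

def detect_damage(image: Image.Image) -> list[tuple[int, int, int, int]]:
    """Placeholder for the trained damage-detection model.
    Returns bounding boxes (left, upper, right, lower) of suspected tears."""
    return []  # no damage found in this stub

def triage_glove_photo(photo_path: Path, downlink_dir: Path) -> None:
    """Analyze a glove photo on orbit and queue only small annotated crops for downlink."""
    image = Image.open(photo_path)
    boxes = detect_damage(image)
    if not boxes:
        return  # nothing suspicious: nothing needs to use up precious bandwidth
    downlink_dir.mkdir(parents=True, exist_ok=True)
    for i, box in enumerate(boxes):
        # Send a small crop highlighting the area of concern, not the full-resolution photo.
        crop = image.crop(box).convert("RGB")
        crop.save(downlink_dir / f"{photo_path.stem}_concern_{i}.jpg", quality=80)
```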
2. On-Orbit DNA Sequencing
Astronauts are exposed to significant radiation, which can increase their risk of health problems, including cancer. One way to monitor this is by sequencing their DNA in space to look for new genetic mutations. The Genes in Space program has pioneered on-orbit sequencing, but the analysis of the raw genomic data is incredibly compute-intensive.
In one experiment, an analysis that had taken 12.2 hours when the data had to be downloaded to Earth was instead run directly on SBC-2. The result? The on-orbit supercomputer completed the entire analysis in just six minutes, a speed-up of roughly 120 times. This transforms genomic sequencing from a research project into a viable, real-time diagnostic tool. The same principle applies to identifying unknown microbes (like a fungus) growing on the station, or to processing medical images, like ultrasounds, to give an astronaut a rapid diagnosis.
3. Simulating 3D Printing in Space
For long-duration missions to the Moon or Mars, astronauts can’t bring every spare part they might need. The solution is 3D printing, but parts printed in microgravity may have hidden flaws. Researchers from Cornell University developed complex software to simulate the 3D printing of metal parts, predicting stresses and potential failure points before a print even begins.
This software is far too large to run on a normal laptop. By running it on SBC-2, they successfully validated that they can digitally simulate and “test” a part in space, saving precious time and materials and ensuring a printed tool is safe to use.
4. Instant Earth Observation
The ISS is a premier platform for observing the Earth, but it generates a torrent of high-resolution imagery. NASA’s Jet Propulsion Laboratory (JPL) has used SBC-2 to run deep-learning models that automatically scan this imagery.
Imagine a wildfire breaking out in a remote area. Instead of downloading terabytes of satellite data and having an analyst on Earth find the smoke, the AI on SBC-2 detects it in real-time. It can then send an immediate, tiny alert packet (with the exact coordinates) directly to first responders. This “data triage” – sifting the signal from the noise at the source – can be used to track floods, hurricanes, and the effects of climate change, providing insights in seconds, not days.
These experiments all highlight a central metric of success. In one typical test, a raw dataset would have taken over 12 hours to download to Earth. By processing it on SBC-2, the researchers gathered the key insights, compressed the results into a 92-kilobyte file, and sent it to Earth in two seconds. This represents a speed-up of more than 20,000 times.
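That arithmetic checks out: twelve hours is 43,200 seconds, and 43,200 divided by two is 21,600, comfortably “more than 20,000 times.” The short Python sketch below, with an invented alert format, shows both the calculation and the kind of kilobyte-scale packet that replaces a terabyte-scale download.

```python
import json

# Sanity-check the quoted speed-up: ~12 hours to downlink raw data vs. ~2 seconds for the result.
raw_downlink_seconds = 12 * 3600          # 43,200 seconds
insight_downlink_seconds = 2
print(raw_downlink_seconds / insight_downlink_seconds)   # 21600.0, i.e. "more than 20,000 times"

# Illustrative alert packet for the wildfire scenario; the field names are hypothetical,
# but the point stands: kilobytes of insight go to the ground instead of terabytes of imagery.
alert = {
    "event": "wildfire_candidate",
    "latitude": 37.8651,                  # example coordinates only
    "longitude": -119.5383,
    "confidence": 0.97,
    "detected_utc": "2024-01-01T00:00:00Z",
}
packet = json.dumps(alert).encode("utf-8")
print(f"alert packet size: {len(packet)} bytes")
```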
The Future: Computing for the Moon and Mars
The Spaceborne Computer experiment series is more than just an upgrade for the ISS. It is a critical technology pathfinder for the entire future of human space exploration. NASA’s Artemis program, which aims to establish a sustainable human presence on the Moon, relies on this technology.
A key component of the Artemis architecture is the Lunar Gateway, a small space station that will orbit the Moon. Unlike the ISS, the Gateway will not be permanently crewed. It will be empty for long stretches, requiring a high degree of autonomy.
The Gateway must be able to manage its own systems, operate robotic arms, maintain life support, and run science experiments, all without real-time human control from Earth. It needs an “onboard brain.” The lessons learned from SBC-2 – its software-hardening, AI capabilities, and edge-computing architecture – are the direct precursors to the computers that will power the Gateway. HPE is, in fact, already contracted to provide this core computing capability for the station.
This concept extends to Mars. A communications round trip that can stretch to 40 minutes or more makes real-time Earth-based control impossible. A future Mars base will need its own data center. Astronauts will need to analyze geological samples, run medical diagnostics, and use AI to navigate rovers, all locally. The Spaceborne Computer has proven that powerful, reliable, and cost-effective COTS hardware is finally ready for the journey.
Summary
The HPE Spaceborne Computer began as a daring experiment to challenge a 60-year-old paradigm. It set out to prove that the massive performance gap between Earth-bound computing and space-rated systems could be closed, not with expensive, heavy shielding, but with intelligent, adaptive software.
Spaceborne Computer-1 proved that COTS hardware can survive the harsh environment of space.
Spaceborne Computer-2 proved that it can do valuable work, becoming the ISS’s first artificial intelligence and edge computing platform. It has given astronauts and scientists on-demand supercomputing power, shrinking time-to-insight for everything from medical diagnostics to disaster response from days or months to mere minutes.
This experiment has successfully moved the “data center” off the planet. In doing so, it has provided the blueprint for the autonomous, intelligent systems that will enable humanity to live and work on the Moon, Mars, and beyond.

