The Case for Orbital Compute
Why People Are Serious About Putting Data Centers in Space
This is the first article in a series exploring orbital data centers: what’s real, what’s hype, and what it might mean for the future of AI infrastructure. I’m learning this as I go, and I’m inviting you to learn with me.
I first started seeing the headlines last fall. Google announcing something called Project Suncatcher. A startup called Starcloud claiming to have trained an AI model in orbit using an NVIDIA H100 GPU. Scientific American and IBM running features about space-based computing.
My first reaction was somewhere between “that’s wild” and “that’s insane.” But I’ll confess: this stuff hits a soft spot. I grew up watching Star Trek reruns in the ‘70s, became a famously obsessive Star Wars fan (ask anyone who has been on a video meeting with me), and as a kid I was lucky enough to attend one of the Space Shuttle Enterprise test landings at Edwards Air Force Base. Space has been woven into my imagination for most of my life. So when someone tells me we’re putting data centers in orbit, the kid in me says “finally” while the 30-year tech veteran says “show me the math.”
The more I read, the more the basics started to click.
Then, on February 2nd, Elon Musk merged SpaceX and xAI in a $1.25 trillion deal and said the quiet part loud: orbital data centers are the plan. Not a side project. The plan.
So here I am, trying to figure out whether this is the future of compute or the most expensive hype cycle in history. I don’t have the answer. But I’ve been doing my homework, and I want to share what I’ve found.
If you know more about this than I do (and many of you do), I want to hear from you. Seriously. Comment, email, send a carrier pigeon. This series is my attempt to learn in public, and I need help.
The basics actually make sense (and that surprised me)
Let’s start with the part that clicked first.
Data centers have two fundamental problems that are getting worse every year: power and heat. AI training has made both dramatically more urgent. The International Energy Agency projects that “data centers will account for nearly half of US electricity demand growth between now and 2030.” Every new GPU cluster needs megawatts of power and sophisticated cooling systems to keep from melting itself.
Space, it turns out, addresses both problems.
Power. Solar panels in orbit receive sunlight that hasn’t been filtered through an atmosphere. No clouds, no night cycle (in the right orbit), no seasonal variation. Google’s Project Suncatcher team calculated that solar panels in space are up to 8x more productive than identical panels on Earth. That’s not a marginal improvement. That’s a fundamentally different energy equation.
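I wanted to sanity-check that 8x figure, so here’s a back-of-envelope sketch. Every input is my own assumption, not Google’s published math: a dawn-dusk sun-synchronous orbit that sees sunlight nearly all the time, and a reasonably good terrestrial solar site with a ~20% capacity factor once night, weather, and seasons are averaged in.

```python
# Back-of-envelope check of the "up to 8x" solar claim.
# All inputs below are my assumptions, not Google's published numbers.

SOLAR_CONSTANT = 1361          # W/m^2 of sunlight above the atmosphere
GROUND_PEAK = 1000             # W/m^2 typical clear-sky peak at sea level
ORBIT_SUN_FRACTION = 0.99      # dawn-dusk sun-synchronous orbit: near-continuous sun
GROUND_CAPACITY_FACTOR = 0.20  # night, clouds, seasons at a good terrestrial site

orbit_avg = SOLAR_CONSTANT * ORBIT_SUN_FRACTION    # average W/m^2 in orbit
ground_avg = GROUND_PEAK * GROUND_CAPACITY_FACTOR  # average W/m^2 on the ground

print(f"Orbit average:  {orbit_avg:.0f} W/m^2")
print(f"Ground average: {ground_avg:.0f} W/m^2")
print(f"Ratio: {orbit_avg / ground_avg:.1f}x")
```

With these guesses I get roughly 6.7x, so "up to 8x" is in the right neighborhood if you assume a worse terrestrial site or better orbital geometry.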
Cooling. On Earth, data centers spend enormous energy and water removing heat from processors. In the vacuum of space, you can radiate heat away directly. No chillers, no cooling towers, no millions of gallons of water. Starcloud claims this translates to a 95% energy cost reduction compared to terrestrial equivalents. (I’d love to see the math behind that.)
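One nuance worth flagging: vacuum has no air to carry heat away, so everything must leave by radiation, and the Stefan-Boltzmann law sets how much radiator area that takes. Here’s a rough sketch with my own assumed numbers (emissivity, radiator temperature), deliberately ignoring sunlight and Earth’s infrared glow hitting the panels, which real designs cannot ignore.

```python
# How much radiator area does 1 MW of waste heat need in vacuum?
# Stefan-Boltzmann sketch; emissivity and temperatures are my assumptions.

SIGMA = 5.67e-8     # Stefan-Boltzmann constant, W/(m^2 K^4)
EMISSIVITY = 0.9    # a good radiator coating
T_RADIATOR = 300.0  # K, roughly chip-coolant temperature
T_SINK = 3.0        # K, deep space (ignores solar and Earth IR loading)

# Net radiated power per square meter of radiator surface
flux = EMISSIVITY * SIGMA * (T_RADIATOR**4 - T_SINK**4)
area_per_mw = 1e6 / flux  # m^2 of radiator per megawatt of waste heat

print(f"Radiated flux: {flux:.0f} W/m^2")
print(f"Radiator area for 1 MW: {area_per_mw:.0f} m^2")
```

That comes out to roughly 2,400 square meters of radiator per megawatt, about half a football field. So "free cooling" is real, but it isn’t small or light, and radiator mass is one of the costs the 95% claim would need to account for.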
Land. There’s no zoning board in orbit. No neighbors protesting. No competition with housing or agriculture for land. No need to negotiate with local utilities for grid connections that take years to build.
When you stack all of that up, you can see why smart people are taking this seriously. The value proposition isn’t “space is cool.” It’s “physics limits us on the ground.”
But I have so many questions
Here’s where my skepticism kicks in, not because I think the concept is wrong, but because the gap between concept and execution looks enormous.
Will the equipment actually work up there? Radiation in orbit destroys conventional electronics. Cosmic rays cause bit flips. There’s no gravity to help with convection cooling at the component level. Google says they’ve tested their TPUs (Tensor Processing Units, their custom AI chips) for radiation tolerance, with a prototype launch planned for early 2027. Starcloud says they’ve already run an H100 in orbit and trained a model called Gemma on it. That’s genuinely impressive, but it’s one GPU. One. The gap between “one GPU worked” and “millions of GPUs working reliably” is the gap this whole series is about.
How do you get the data up and the models down? This is the question that nags at me the most. AI training requires massive datasets. The trained models need to get back to Earth to be useful. Satellite internet today tops out in the hundreds of megabits per second for consumers. Training runs move petabytes. How does that work? Blue Origin seems to think it’s a solvable problem. In January, they announced TeraWave, a 5,408-satellite constellation promising terabit-per-second optical speeds, with deployment starting in late 2027. But “announced” and “deployed” are very different words.
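The arithmetic here is what makes me nervous, so let’s just run it. Time to move one petabyte at a few link speeds, ignoring protocol overhead, contention, and weather outages (the link-speed labels are my illustrative picks, not anyone’s spec sheet):

```python
# Time to move 1 PB at different link speeds.
# Ignores protocol overhead, contention, and outages; labels are illustrative.

PETABYTE_BITS = 8e15  # 1 PB = 8 * 10^15 bits

links = {
    "consumer satellite (300 Mbps)": 300e6,
    "optical crosslink (100 Gbps)": 100e9,
    "TeraWave-class (1 Tbps)": 1e12,
}

for name, bits_per_sec in links.items():
    seconds = PETABYTE_BITS / bits_per_sec
    print(f"{name}: {seconds / 3600:.1f} hours ({seconds / 86400:.1f} days)")
```

At consumer-satellite speeds, one petabyte takes the better part of a year. At a full terabit per second, it’s a couple of hours. That’s the whole difference between "orbital training is absurd" and "orbital training is plausible," which is why announcements like TeraWave matter so much.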
What do the economics actually look like? Launch costs are the elephant in every room this topic enters. Yes, SpaceX has driven costs down dramatically. But we’re talking about launching, at the extreme end, a million satellites. Each one loaded with expensive compute hardware that has (per SpaceX’s own FCC filing) a five-year operational life. What’s the cost per GPU-hour compared to a terrestrial data center? I genuinely don’t know, and I’m not sure anyone has published credible numbers yet. On an episode of Stripe co-founder John Collison’s “Cheeky Pint” podcast, co-hosted with Dwarkesh Patel, Musk predicted orbital data centers will be cost-competitive by 2028. I’d love to see the assumptions behind that timeline.
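Since nobody has published credible numbers, here’s my own extremely rough sketch of cost per GPU-hour, where literally every input is a guess I made up for illustration: an optimistic Starship-era launch price, an assumed satellite mass budget per GPU, and an H100-class hardware price.

```python
# Extremely rough orbital cost-per-GPU-hour sketch.
# Every input is my own guess, for illustration only.

LAUNCH_COST_PER_KG = 200.0    # $/kg, an optimistic Starship-era assumption
MASS_PER_GPU = 50.0           # kg of satellite (solar, radiators, structure) per GPU
GPU_HARDWARE_COST = 30_000.0  # $ per H100-class GPU
LIFETIME_YEARS = 5            # per SpaceX's FCC filing
UTILIZATION = 0.9             # fraction of lifetime hours actually sold

capex = GPU_HARDWARE_COST + MASS_PER_GPU * LAUNCH_COST_PER_KG
usable_hours = LIFETIME_YEARS * 365 * 24 * UTILIZATION
cost_per_gpu_hour = capex / usable_hours

print(f"Capex per GPU (hardware + launch): ${capex:,.0f}")
print(f"Cost per GPU-hour: ${cost_per_gpu_hour:.2f}")
```

With those guesses, the capex alone works out to around a dollar per GPU-hour, before ground stations, networking, insurance, operations, failures, or cost of capital. The interesting takeaway isn’t the number itself; it’s that the answer swings wildly with launch cost and mass per GPU, which is exactly where SpaceX’s vertical integration would matter.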
The SpaceX-xAI merger changes the conversation
Okay. Let’s talk about the thing that made me decide this series was worth writing.
On February 2, 2026, SpaceX formally acquired xAI (which already owned X, formerly Twitter) in an all-stock deal valuing the combined entity at $1.25 trillion. It’s the largest merger in history.
Musk’s stated rationale was blunt: “Global electricity demand for AI simply cannot be met with terrestrial solutions, even in the near term, without imposing hardship on communities and the environment.”
That’s a big claim. But what makes this merger structurally different from anything else happening in this space is the vertical integration. Think about what this single entity now controls:
Rockets (SpaceX, including the world’s most cost-effective launch vehicle)
Satellite manufacturing and deployment (Starlink’s existing infrastructure and supply chain)
AI models and compute demand (xAI and Grok)
A data and distribution platform (X/Twitter)
And soon, orbital data center hardware
Nobody else has anything close to that stack. Google has the AI and the chips, but not the rockets. Blue Origin has the rockets but not the AI demand. Starcloud has ambition but is still pre-scale.
Days before the merger announcement, SpaceX filed with the FCC for a constellation of up to 1 million orbital data center satellites, operating at altitudes of 500 to 2,000 kilometers, targeting 100 gigawatts of compute capacity, solar-powered, with a five-year operational life per satellite.
That’s almost the plot of Moonraker.
The FCC accepted the filing on February 5th and opened a public comment period running through March 6th. FCC Chairman Brendan Carr publicly endorsed the filing on X. Make of that what you will.
The thing I can’t stop thinking about
Here’s where I’ll be honest about my own uncertainty. The SpaceX-xAI merger was announced ahead of a planned mid-2026 IPO. Multiple analysts have noted that xAI was burning roughly $1 billion per month at the time of the merger.
I say this not to be dismissive. Musk has a track record that genuinely complicates the skeptic’s job. People laughed at reusable rockets, and now Falcon 9 lands itself routinely. People said Starlink would never work, and it has over 4 million subscribers. But he also promised full self-driving “next year” for about a decade running, and Hyperloop never happened.
So which pattern does orbital compute follow? The one where the audacious bet pays off, or the one where it doesn’t?
I don’t know. That’s what this series is about.
What’s coming next
Over the next four articles, I’m going to dig into the specific questions that I think determine whether orbital compute is real:
The Engineering Gauntlet: What does it actually take to run AI hardware in space? Radiation destroys electronics, “cold space” isn’t as simple as it sounds, connectivity between orbit and ground is a massive unsolved problem, and nobody can send a technician when a GPU fails. (Plus: what happens when you put a million new satellites in an already crowded orbit?)
The Economics and Timeline: What do the cost models actually look like, and when (if ever) does the math work? Musk says 2028. Google says mid-2030s. A European study says 2050. Someone is very wrong.
Geopolitics and Governance: Who owns the cloud when it orbits every 90 minutes? Jurisdiction, data sovereignty, export controls, defense entanglements, and the unprecedented concentration of power that the SpaceX-xAI merger represents.
A Possible Future: What does the orbital compute landscape look like by 2040 if this works? What if it doesn’t? And what should we be debating now while the infrastructure is still being designed?
I’m not pretending to be an expert here. I’m a technologist who has spent 30+ years watching computing paradigms shift, from mainframes to client-server to cloud, and I think I’m watching the early innings of another one. Or maybe I’m watching a very expensive bubble. That’s what I want to figure out.
If you have expertise in any of this (satellite engineering, orbital mechanics, data center economics, space law, FCC regulatory process), I want to hear from you. Drop a comment, send me a note. Help me get this right.
And if you’re like me and just trying to make sense of what you’re reading, welcome. Let’s figure this out together.
Next in the series: The Engineering Gauntlet — radiation, heat, and hardware in the void.