Industry · Synthesized from 1 source

One Million Orbital Data Centers Face the Thermodynamics Test

Key Points

  • SpaceX filed FCC application for up to one million orbital data centers
  • Sun-synchronous orbit keeps equipment above 80°C—no current cooling solution
  • Starcloud tested H100 GPU in orbit November 2025, proving small-scale concept
  • Google plans 80-satellite test constellation, targeting 2027 launch
  • US AI data centers consume roughly 10 billion gallons of water annually for cooling
  • Starship offers 150-ton payload capacity, improving orbital economics
References (1)
  1. SpaceX Plans One Million Orbital Data Centers for AI — MIT Technology Review AI

One million data centers. Zero proven thermal solutions. That is the scale of SpaceX's FCC filing—and the engineering gap at its core.

In January, SpaceX applied to the FCC to deploy up to one million data centers in orbit around Earth. The pitch: AI's hunger for power and water has become a planetary crisis, and space offers an escape hatch. Solar power without atmospheric interference. Heat dissipation into the cold vacuum. Launch costs dropping toward viability. It reads like a solution to the most pressing infrastructure problem of the AI era.

The resource argument is real. AI data centers already consume roughly 10 billion gallons of water annually for cooling in the United States alone. Power grids from Virginia to Iowa are straining under compute demand. Communities near hyperscale campuses face higher electricity prices and environmental consequences. The pressure is not abstract—it is measured in utility rate hikes and river temperature warnings.

On paper, the math works in space. A sun-synchronous orbit provides uninterrupted solar power with no atmospheric filtering. Waste heat can be radiated away without consuming water. With launch costs declining and Starship promising 150 tons of payload capacity per flight, the economics shift.
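The power side of that math can be sketched from well-known solar figures. A minimal back-of-envelope comparison; the panel efficiency and ground capacity factor below are illustrative assumptions, not numbers from the filing:

```python
# Rough comparison of solar energy capture in a continuously lit
# sun-synchronous orbit vs. a terrestrial solar farm.

SOLAR_CONSTANT_W_M2 = 1361   # solar irradiance above the atmosphere
PANEL_EFFICIENCY = 0.22      # assumed cell efficiency (illustrative)
HOURS_PER_YEAR = 8766

# Orbit: near-continuous illumination, no weather, no atmospheric losses.
orbital_kwh_per_m2 = SOLAR_CONSTANT_W_M2 * PANEL_EFFICIENCY * HOURS_PER_YEAR / 1000

# Ground: peak irradiance is lower, and night, weather, and sun angle
# reduce a good site to roughly a 20% capacity factor (assumed value).
GROUND_PEAK_W_M2 = 1000
GROUND_CAPACITY_FACTOR = 0.20
ground_kwh_per_m2 = (GROUND_PEAK_W_M2 * PANEL_EFFICIENCY
                     * GROUND_CAPACITY_FACTOR * HOURS_PER_YEAR / 1000)

print(f"orbital: {orbital_kwh_per_m2:,.0f} kWh per m2 per year")
print(f"ground:  {ground_kwh_per_m2:,.0f} kWh per m2 per year")
print(f"ratio:   {orbital_kwh_per_m2 / ground_kwh_per_m2:.1f}x")
```

Under these assumptions a square meter of panel in orbit collects roughly six to seven times the annual energy of the same panel on the ground, which is the core of the solar argument.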

But physics sets a hard ceiling. Vacuum is an insulator: with no air or water to carry heat away, radiation is the only rejection path. The same sun-synchronous orbit that provides constant illumination bakes equipment to 80°C or above, already at the thermal limit for sustained silicon operation. Add processing heat and thermal gradients, and temperatures climb further. Processors therefore require active cooling: pumps, fluid loops, and large radiators. Each adds mass, complexity, and failure points. In orbit, you cannot dispatch a technician; systems must run for years without maintenance, and at this scale redundancy becomes structurally impossible.
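The radiative ceiling can be quantified with the Stefan-Boltzmann law. A minimal sketch; the emissivity, radiator temperature, and per-server power are illustrative assumptions, and absorbed sunlight and Earth's infrared (which make the real problem worse) are ignored:

```python
# How much radiator area does one orbital server need?
# In vacuum a radiator at temperature T emits eps * sigma * T^4 per m^2.

SIGMA = 5.670e-8    # Stefan-Boltzmann constant, W/(m^2 K^4)
EMISSIVITY = 0.85   # assumed radiator surface emissivity

def radiator_area_m2(heat_w: float, radiator_temp_k: float, sides: int = 2) -> float:
    """Panel area needed to radiate `heat_w` watts, ignoring absorbed
    sunlight and Earth infrared (the real area would be larger)."""
    flux_w_m2 = sides * EMISSIVITY * SIGMA * radiator_temp_k ** 4
    return heat_w / flux_w_m2

# One H100-class server drawing ~1 kW, radiator held at 60 C (333 K):
area = radiator_area_m2(1_000, 333.0)
print(f"{area:.2f} m^2 per kW")  # roughly 0.8 m^2 even in this optimistic case
```

Scaled to a gigawatt of compute, that optimistic figure already implies close to a square kilometer of deployed radiator, before accounting for sunlight absorption, pumping losses, or micrometeoroid damage.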

The thermal problem is not theoretical—it is engineering no one has solved. Starcloud tested an H100 GPU in orbit last November, proving small-scale feasibility. Google's planned 80-satellite test constellation, targeting a 2027 launch, represents a more conservative near-term approach. But scaling to millions of units introduces challenges that may not be solvable on any practical timeline.

The fundamental question is whether orbital data centers can scale before terrestrial infrastructure becomes the binding constraint. That math does not yet exist. What exists is a filing that signals the industry's intent—and a thermodynamic problem that will determine whether that intent is visionary or structurally impossible.

The filing matters regardless of outcome. Whether SpaceX's servers launch in five years or never, the fact that such an application exists tells us something: the tech industry is treating space as a pressure valve for Earth's resource problems. The underlying demand for AI compute is growing faster than any solution—ground-based or orbital—can realistically address. These filings mark the moment that calculus began.
