

Orbital & Space Datacenters


Orbital datacenters relocate AI compute infrastructure from Earth's surface to low Earth orbit, replacing the terrestrial constraint set (grid interconnection, water rights, land acquisition, mechanical cooling) with an orbital one (radiation exposure, vacuum thermal management, launch mass budgets, optical networking across orbital planes). The architecture emerged from three converging conditions: AI compute demand grew past what terrestrial grids could deliver on reasonable timelines, the marginal cost of orbital power began to trend favorably against terrestrial power for large AI workloads, and the reusable heavy-lift launch capability required to close the economics reached operational scale.

The category is emerging rather than mature. As of April 2026, the largest filed constellation (SpaceX's Orbital Data Center system) has been accepted for FCC review but has no deployed hardware. Earlier pilot-scale demonstrations have flown rad-tolerant compute nodes on existing satellites, and Blue Origin's competing TerraWave LEO proposal was filed in early 2026. This page covers the architectural shape of the category and the primary deployment programs shaping it.


Why orbital compute is architecturally distinct

Every terrestrial datacenter type on DatacentersX shares a common constraint set: electrical power has to be delivered from somewhere (grid, onsite, microgrid), waste heat has to be rejected to atmosphere through air or water, and the building has to sit on permitted land near adequate utilities. Orbital compute breaks all three constraints simultaneously, which is why it constitutes a distinct TYPES entry rather than a specialty variant of an existing type.

Constraint | Terrestrial Datacenter | Orbital Datacenter
Power source | Grid interconnection, onsite generation, utility tariffs, queue delays | Near-continuous direct solar (roughly 5x surface irradiance; 99+% duty cycle in sun-synchronous orbit)
Cooling | Air, liquid, cooling towers, chillers, water rights and permits | Radiative heat rejection to deep space through dedicated radiator panels; no fluid cooling required
Siting and permitting | Land acquisition, utility permits, local community and water authority approval | FCC spectrum and orbital deployment license; orbital debris and coordination requirements
Hardware | Commodity accelerators; standard rack and chassis form factors | Rad-tolerant silicon; thermal architecture engineered for radiative-only cooling; launch-survivable mechanical design
Networking | Terrestrial fiber, cross-connects, carrier peering | Optical inter-satellite links; Ka-band ground downlinks; integration with existing LEO broadband mesh

The constraint-set substitution is not symmetric. Orbital compute trades stable terrestrial operations for a harder, more distributed failure mode: satellites age, radiation degrades silicon, component replacement is not possible, and the entire constellation has to be designed around a steady-state replenishment launch cadence. The calculus only closes when launch cost per kilogram to orbit is low enough that satellite replacement becomes a planned operational expense rather than a capital crisis.
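The replenishment arithmetic behind that calculus can be sketched directly. The numbers below are illustrative assumptions, not program figures:

```python
def replenishment_rate(constellation_size: int, mean_lifetime_years: float) -> float:
    """Satellites that must be launched per year just to hold the constellation
    at constant size, assuming retirements occur at a steady rate of
    size / lifetime (a simplification of real failure statistics)."""
    return constellation_size / mean_lifetime_years

# Illustrative only: a 1,000,000-satellite constellation with an assumed
# 5-year mean satellite life needs 200,000 replacements per year in steady
# state -- replacement as a planned operating expense, not an anomaly.
rate = replenishment_rate(1_000_000, 5.0)
print(rate)  # 200000.0
```

The point of the sketch is that the required launch cadence scales inversely with satellite lifetime, which is why radiation-driven aging (covered below) feeds directly into the economics.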


The enabling technology stack

Orbital datacenters depend on four concurrent technology deliveries. If any one is missing, the economics do not close.

Heavy-lift reusable launch. SpaceX Starship is the announced launch platform for the largest filed program. Its second-stage payload capacity and reusability profile are what drive per-kilogram launch cost below the threshold where orbital compute becomes competitive with grid-constrained terrestrial compute. A fully expendable launch architecture cannot deliver the mass flow required to populate a megaconstellation at reasonable cost.

Rad-tolerant high-performance silicon. Conventional AI accelerators are not designed for the cosmic ray and trapped-proton radiation environment outside Earth's magnetic shield. Bit flips, latch-up events, and cumulative dose damage all accumulate faster in orbit than terrestrial hardware is built to tolerate. Orbital compute therefore requires its own silicon, architected at the transistor level for radiation resilience. The Tesla D3 chip (covered below) is the first announced dedicated program for AI-scale orbital compute.

Radiative thermal architecture. In vacuum, heat cannot be rejected through convection (no fluid) or conduction (nothing to conduct to). All heat leaves the spacecraft as infrared radiation through dedicated radiator panels. Radiator area scales linearly with heat load, which makes thermal architecture a dominant design constraint: a 100 kW satellite needs roughly 100 square meters of radiator surface, and megawatt-class future variants scale proportionally.
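The radiator scaling follows from the Stefan-Boltzmann law. The sketch below uses assumed values (a 300 K radiator, emissivity 0.9, two radiating faces) and ignores absorbed solar and albedo heat, so it is an order-of-magnitude check rather than a spacecraft design:

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def radiator_area_m2(heat_load_w: float, temp_k: float = 300.0,
                     emissivity: float = 0.9, sides: int = 2) -> float:
    """Radiator area needed to reject heat_load_w by thermal radiation alone.
    Assumed values: radiator temperature, emissivity, and two radiating
    faces; absorbed environmental heat and view-factor losses are ignored."""
    flux_per_face = emissivity * SIGMA * temp_k ** 4  # W per m^2 of one face
    return heat_load_w / (sides * flux_per_face)

# A 100 kW heat load at these assumptions needs on the order of 120 m^2,
# consistent with the "roughly 100 square meters" figure for the satellite.
area = radiator_area_m2(100_000)
print(round(area, 1))
```

Because rejected flux scales with the fourth power of radiator temperature, running the payload hotter shrinks the radiator sharply, which is the thermal logic behind the D3 high-temperature design objective discussed below.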

Optical mesh networking. Orbital compute nodes need to exchange data with each other and with the ground at bandwidth levels appropriate for AI workloads. Optical inter-satellite links (already proven at scale in Starlink) provide the petabit-class mesh connectivity; Ka-band ground links carry telemetry, command, and a fraction of payload traffic. The integration with existing terrestrial broadband constellations (Starlink in SpaceX's case) provides the ground-side downlink fabric without building a separate ground station network.


The SpaceX Orbital Data Center constellation

SpaceX filed FCC application SAT-LOA-20260108-00016 on January 30, 2026, seeking authorization for a non-geostationary satellite system described in the filing as the SpaceX Orbital Data Center System. The FCC Space Bureau accepted the application for review on February 4, 2026, opening it for public comment. The filing proposes a constellation of up to one million satellites operating at altitudes between 500 and 2,000 kilometers, in 30-degree and sun-synchronous orbit inclinations, within narrow orbital shells up to 50 km thick to allow different clusters to serve different workload profiles.

The satellite design publicly referenced as AI Sat Mini is, despite the name, a large spacecraft. At roughly 170 meters deployed length, it exceeds Starship V3's 124-meter stack height and is dominated by large solar arrays. Each satellite provides approximately 100 kilowatts of onboard power dedicated to AI processing, supported by roughly 100 square meters of radiator surface for heat rejection. Future satellite iterations are targeted to reach megawatt-class per-satellite power.

Spectrum allocations in the filing are 18.8 to 19.3 GHz for space-to-Earth and 28.6 to 29.1 GHz for Earth-to-space, both in Ka-band designated non-geostationary fixed satellite service bands. Inter-satellite connectivity is proposed via high-capacity optical links integrating with the existing Starlink mesh. SpaceX requested FCC milestone waivers reflecting the unprecedented scale of the constellation and the non-interference basis of its spectrum requests.

The filing frames the constellation in Kardashev terms, explicitly positioning it as a step toward harnessing a larger fraction of solar output for compute. The aggregate capacity targets are correspondingly large: SpaceX projects that launching one million tons of orbital compute per year at 100 kW per ton would add 100 gigawatts of AI capacity annually, with the ultimate program target of one terawatt of annual compute capacity aligned with the Terafab silicon supply program.
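The aggregate-capacity arithmetic stated in the filing is straightforward to verify:

```python
def annual_capacity_gw(tons_per_year: float, kw_per_ton: float) -> float:
    """Compute power added per year in gigawatts (kW -> GW is a 1e6 factor)."""
    return tons_per_year * kw_per_ton / 1e6

# One million tons per year at 100 kW per ton adds 100 GW per year,
# matching the projection in the filing.
added = annual_capacity_gw(1_000_000, 100)
print(added)  # 100.0

# At that rate, the stated 1 TW (1,000 GW) annual-capacity target implies
# roughly a tenfold scale-up of either mass flow or power density per ton.
print(1000 / added)  # 10.0
```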


The Tesla D3 (Dojo 3) chip

D3, designated internally as Dojo 3 and publicly announced at the Terafab launch event on March 21, 2026, is the dedicated silicon designed to populate the SpaceX orbital compute constellation. D3 is the architectural successor to Tesla's earlier D1 and D2 Dojo chips, which had been aimed at terrestrial AI training for Full Self-Driving video model work. The D3 represents a strategic pivot: rather than competing directly with NVIDIA in terrestrial training, Tesla's custom silicon program is being redirected toward the orbital compute market where rad-tolerant high-performance AI silicon has no current commercial supplier.

The chip's announced design objectives reflect the orbital environment rather than the terrestrial one. D3 is architected to operate at higher package temperatures than terrestrial accelerators, a direct consequence of the radiator-limited thermal architecture of the spacecraft: a chip that tolerates higher junction temperatures permits a smaller radiator or a denser compute footprint at a given radiator area. D3 is also radiation-hardened at the architectural level, incorporating error detection and correction, redundant logic, and physical process choices aimed at reducing single-event upset and total ionizing dose sensitivity compared to commercial silicon.

D3 is scheduled to be produced at the Terafab Advanced Technology Fab on the north campus of Giga Texas in Austin, with Musk's stated allocation directing approximately 80 percent of Terafab's output to the orbital compute program and 20 percent to terrestrial applications. The Terafab program (SX coverage: forthcoming) is the supply-side counterpart to SpaceX's orbital demand. The two programs are architected as a tightly coupled silicon-and-deployment system: Terafab produces D3, SpaceX launches AI Sat Mini spacecraft populated with D3, and the resulting constellation feeds xAI's model training and inference workloads alongside commercial and sovereign customers over the medium term.

Whether D3 is a low-power design in the conventional terrestrial sense has not been publicly disclosed. The architectural priorities as described are high-temperature operation and radiation tolerance rather than performance-per-watt. The thermal envelope of the AI Sat Mini spacecraft (100 kW to a single satellite) suggests that per-chip power draw is not the dominant constraint; the constraints are per-chip thermal dissipation at orbital operating conditions and radiation tolerance over mission life.
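Under that framing, the derived quantity of interest is how many accelerators a fixed power envelope supports. The sketch below uses an entirely hypothetical per-chip power figure (no D3 power number is public) and an assumed overhead fraction for avionics, links, and conversion losses:

```python
def chips_per_satellite(bus_power_w: float, per_chip_power_w: float,
                        overhead_fraction: float = 0.2) -> int:
    """Accelerator count a fixed power envelope supports, reserving
    overhead_fraction of bus power for non-compute loads. Both the
    per-chip power and the overhead fraction are assumptions, not
    disclosed D3 or AI Sat Mini figures."""
    usable_w = bus_power_w * (1 - overhead_fraction)
    return int(usable_w // per_chip_power_w)

# Illustrative only: a 100 kW bus with an assumed 500 W package and 20%
# overhead supports 160 chips; halving package power doubles the count.
n = chips_per_satellite(100_000, 500)
print(n)  # 160
```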


Competing and adjacent programs

SpaceX is not the only entity pursuing orbital compute. Several programs of different scales and postures are now visible in the public record.

Program | Operator | Positioning | Status
SpaceX Orbital Data Center | SpaceX / Tesla / xAI (post-acquisition) | Megaconstellation; commercial AI compute at Kardashev-scale ambition | FCC application accepted for review February 2026
Blue Origin TerraWave | Blue Origin | Rad-hardened edge compute with government and sovereign customer focus | Proposal unveiled late 2025; in early filing stages
Starcloud | Starcloud (independent) | Demonstration-scale orbital compute nodes; commercial pilot deployments | Demonstration missions planned and in progress
OrbitsEdge | OrbitsEdge (independent) | Edge computing payloads on existing satellite buses | Demonstration missions planned and in progress

The structural split across these programs is between megaconstellation commercial players (SpaceX) and smaller-scale edge and sovereign-customer players (Blue Origin, Starcloud, OrbitsEdge). The smaller programs have near-term deployment paths that do not depend on Starship-class launch economics; the SpaceX program requires Starship cadence to close.


Engineering challenges the category has not yet solved

The public case for orbital compute emphasizes its structural advantages over terrestrial deployment. Several engineering challenges remain open and affect the timeline to competitive deployment.

Hardware aging under radiation. Commercial silicon has operating lifetimes of roughly a decade in datacenter use; rad-tolerant silicon in LEO faces cumulative dose and single-event-upset environments that reduce effective lifetime unless the architecture and process are tuned for that environment. Actual D3 mission-life numbers are not yet public and will only emerge once flight-hour data accumulates.

Latency for training vs inference. Ground-to-orbit round-trip latency at LEO altitudes is a few milliseconds and is not a blocker for model training or for most inference workloads. Latency across a distributed constellation with optical inter-satellite links varies with network topology; how well large-scale training (where gradient synchronization across thousands of nodes is critical) actually runs across an orbital fabric is an open engineering question.
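The ground-to-orbit figure follows from light travel time alone. The sketch below ignores processing and queuing delay, so real round trips are somewhat longer:

```python
C_KM_PER_S = 299_792.458  # speed of light in vacuum, km/s

def rtt_ms(path_km: float) -> float:
    """Round-trip light time over a one-way path of path_km, in
    milliseconds. Processing, queuing, and retransmission delays are
    excluded, so this is a lower bound."""
    return 2 * path_km / C_KM_PER_S * 1000

# An overhead pass at 550 km altitude: about 3.7 ms round trip to the ground.
print(round(rtt_ms(550), 1))

# Each additional 1,000 km of optical inter-satellite path adds roughly
# 3.3 ms one-way doubled, so multi-hop routes across the constellation
# dominate the latency budget for synchronized training traffic.
print(round(rtt_ms(1000), 1))
```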

Space debris and orbital congestion. A one-million-satellite constellation constitutes an unprecedented orbital population. Collision avoidance, deorbit planning, and coordination with existing LEO operators are active areas of regulatory scrutiny and may constrain the achievable constellation size regardless of launch capacity.

Cost parity timing. Musk's stated projection of terrestrial and orbital cost parity within two to three years is a maximum-optimism case. Independent analysis (including Deutsche Bank and multiple industry publications) projects the 2030s as the more realistic parity window. The difference depends almost entirely on Starship's operational cadence and reusability maturity, the variables that most affect the economics.


Where orbital fits in the TYPES taxonomy

Orbital datacenters are the only TYPES entry where the datacenter is not on Earth's surface, which justifies its standalone treatment rather than inclusion as a variant of edge or modular. It is also the category with the deepest cross-network integration on SiliconPlans: rad-tolerant silicon (Tesla D3) is an SX coverage area; launch economics and spacecraft engineering touch the Go-Astronomy network; the AI compute workloads running on the constellation feed directly into AI Training and AI Inference. Orbital is the farthest-out TYPES entry both literally and architecturally, and its maturation timeline is tied to launch and silicon programs that are themselves in early production stages.


Related coverage

Types | Edge DCs | AI Factory | Modular DCs | AI Inference | AI Training | Chips and Silicon | Starlink Ground Infrastructure | SemiconductorX (Terafab coverage)