Abstract
Starcloud have claimed that a single 100-ton Starship launch could suffice to create a 40 MW space data centre (SDC) for $8.2M. My analysis finds that a single launch is infeasible: up to 22 launches would be required. The SDC's solar arrays alone need 4 launches, a figure determined by benchmarking against the existing solar arrays on the ISS. Similarly, the ISS's radiator benchmarks indicate that 13 launches would be needed for the SDC's thermal management system, and the server racks would require an additional 5 launches. I have not analysed the effects of MMOD/radiation shielding or the impact of propellant use for in-orbit assembly on launch numbers—this requires specifications and mission architectures that have not been made public and might not yet be fully developed. On the note of launch costs, the whitepaper's (miscalculated) assumed launch cost is $30/kg, which makes their comparative economic analysis against terrestrial data centres unmoored from reality in the near term. Some experts speculate that $1,000/kg would be an optimistic launch cost, which means $100M per launch and a total cost of $103.2M. (For reference, in 2021 dollars a Falcon 9 launch costs $2,600/kg and a Falcon Heavy $1,500/kg, so even $500/kg is a fairly optimistic estimate.) Even if costs drop to $500/kg, a single launch results in an overall cost of $53.2M, not the purported $8.2M. If a second launch is needed, the worst case is $200M—more than their reported cost of running a terrestrial data centre (TDC).
On Earth, data centres run on the existing electricity grid, which, crudely put, uses a combination of fossil fuels and terrestrial solar. Recently, technologists and entrepreneurs have talked up placing data centres in space to resolve three issues with terrestrial data centres (TDCs):
Now, Sam Altman has also talked up nuclear energy as a solution, which I suspect may be more desirable from an energy and climate angle, but the regulatory barriers need to be resolved. So space, in theory, sounds like a speedier answer from a regulatory perspective—as a space person, I'd love nothing more than for there to be a strong economic case for space¹. But delivering a GW-scale SDC requires engineering solar arrays at the km scale, which will not be easy. Even the 40 MW system that Starcloud used to benchmark against TDCs needs a square of side 357 m. This would far exceed the span of the largest space structure ever built—the ISS is about 100 m in its longest dimension.
So, there's now at least one Y Combinator-backed company, Starcloud Inc., working on building SDCs—they released a whitepaper on this and I decided to dive in (with Claude to speedrun my analysis, of course). They begin by pointing us to some of the unique benefits of space solar, the main one being its 95%+ capacity factor versus a median capacity factor of just 24% for US terrestrial solar (under 10% in northern Europe). They continue to say that, combined with 40% higher peak power due to no atmospheric losses, you get over 5x the energy output from the same solar array. This is not exactly my forte so I am not fact-checking these claims—let's accept them as true.
If I can claim a bit of domain expertise, it's on the space side. Reading Starcloud's whitepaper, I felt I could use my (limited) experience designing missions for in-space-assembled large space telescopes to examine their techno-economic analysis.
Now, in-space assembly of large space structures, like large aperture telescopes, comes with its own challenges. For the sake of this analysis, I will classify them in the same three categories as I did at the start for TDCs but present them in reverse order:
Now, I will treat that last item as speculative mostly because it is out of my wheelhouse. However, if it is true, then we will need some alternative (either nuclear or space-based data centres) but by examining the first two aspects, I imagine we will know how well the business case of this company adds up.
While one could begin by asking how much compute workload should be moved to space to make a meaningful dent from a climate angle—a really good reason to do so—economic incentives that guarantee large returns on investment are what appeal to private investors, at the end of the day. This is why Starcloud exists but space agencies haven’t invested in the area. So, this analysis begins by examining Starcloud’s numbers to justify their business case for SDC.
The whitepaper presents a table where the total cost of running a 40 MW data centre cluster over ten years is determined to be $167M on Earth versus $8.2M in space; launch is the largest contributor to Starcloud's total costs and they presume that one launch will be enough, which I was skeptical about. As I show later, 300 of their benchmark Nvidia racks alone require 5 launches. That said, the whitepaper's breakdown of costs for TDCs and SDCs is as follows:
Terrestrial:
Space:
Now, this means their projected energy cost is $0.002/kWh in space versus $0.045-0.17/kWh terrestrially—between 22 and 85 times cheaper. This raises questions about feasibility.
Launch costs are calculated from launch numbers, whose estimation requires some design specifications of the SDC (mass and geometry). As I read the whitepaper, it was unclear how the SDC's total mass would be 100 tonnes (or 167 tonnes), as these figures aren't publicly shared—either proprietary information or yet to be defined. However, there are other ways to derive these design specs to verify the launch claims: using information in the whitepaper and filling in the gaps by examining state-of-the-art systems.
With these specs, a mass-based estimate of launches can be derived, but one can also determine launch numbers that account for how the SDC's elements fit into a rocket. Here, one essentially breaks the SDC into its subsystems and works out if/how their geometries fit into the volume of a launcher's fairing. So, even if the mass estimates indicate the SDC fits into a single launcher, its volume might not be as accommodating.
The remainder of this blog is dedicated to estimating these launch numbers for the three main parts of the SDC: solar arrays, radiators, and servers.
Starcloud’s long-term goal is to build a 5 GW system, for which they require solar arrays spanning an area of 4km × 4km. This is a power density of 312 W/m² from which we can determine that their smaller 40 MW SDC needs 128,000 m² of solar panels. To pack this into a single Starship with a fairing volume of 1000 m³, we can determine the desired areal packing density which is the area of these arrays divided by the Starship’s fairing volume. This works out to 128 m²/m³.
\[\begin{align} {(Packing \, density)}_{desired} &=\frac{128,000}{1000}\\ &= 128 \, m^2/m^3 \end{align}\]This means we would need to fit 128 m² of solar panels into each cubic metre of Starship, where we have assumed that all of the payload bay's fairing volume is usable. Such packing efficiency is impractical, but we will stick with this optimistic estimate for now. A more realistic estimate might permit about 80% of the available 1000 m³ to be used, in which case the desired areal packing density rises to 160 m²/m³.
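These numbers are easy to reproduce. A minimal sketch (figures from the whitepaper; the 80% usable-volume factor is my assumption):

```python
# Desired solar array area and areal packing density for the 40 MW SDC
P_SDC = 40e6                        # W, target electrical power
rho_power = 5e9 / (4000 * 4000)     # W/m^2, from the 5 GW / 4 km x 4 km figure

A_solar = P_SDC / rho_power         # m^2 of solar array needed
V_fairing = 1000.0                  # m^3, Starship fairing volume

print(rho_power)                    # 312.5 W/m^2
print(A_solar)                      # 128,000 m^2
print(A_solar / V_fairing)          # 128 m^2/m^3 (all volume usable)
print(A_solar / (0.8 * V_fairing))  # 160 m^2/m^3 (80% usable)
```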
Next, I examine the performance of two space-proven designs for deployable solar arrays (of the three options that Starcloud propose to use as per their whitepaper). The first is the Z-fold array, the legacy design used on the ISS's Solar Array Wings (SAW); the second, the roll-out solar array (ROSA), augments the SAWs and is set to become their next-generation replacement. The ISS variant is called iROSA.
The image below shows one ISS Solar Array Wing (SAW) and a smaller Roll-Out Solar Array (ROSA).
The ISS has 8 such wings (SAWs) attached to trusses, four each on its port and starboard sides—which explains why the truss names are prefixed with P's and S's (e.g., P6 and S6). Altogether, the eight solar array wings generate about 240 kilowatts in direct sunlight, or about 84 to 120 kilowatts average power (cycling between sunlight and shade).
Each wing generates nearly 31 kilowatts (kW) of direct-current power from two solar “blankets”. When fully extended, the pair spans 35 metres in length and 12 metres in width—the largest arrays ever deployed in space. The power density based on this wing span is 71.43 W/m², but a more appropriate estimate can be determined from the specs of the photovoltaic blanket. Each blanket comprises 16,400 cells of 8 cm by 8 cm; this gives the actual light-collecting area of each blanket, and multiplying by two gives that of a single SAW.
So the power density of a wing with two blankets works out to 147.7 W/m² from:
\[\begin{align} {(Power \, Density)}_{SAW} &= \frac{Power}{Area} \\ &= \frac{31000W}{32800 \times .08^2} \\ &= 147.7 \, W/m^2 \end{align}\]So, achieving Starcloud’s assumed power density of 312 W/m² requires SAW technology to be 2.1x more efficient.
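This is easy to check against the blanket specs quoted above; a quick sketch:

```python
# SAW power density from the photovoltaic blanket specs
cells_per_wing = 2 * 16_400        # two blankets per wing
cell_side = 0.08                   # m, 8 cm x 8 cm cells
P_wing = 31_000                    # W per wing

A_active = cells_per_wing * cell_side**2   # ~209.9 m^2 of light-collecting area
print(P_wing / A_active)                   # ~147.7 W/m^2
print(312 / (P_wing / A_active))           # ~2.1x gap to Starcloud's assumption
```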
The packing density of one SAW module (i.e., a pair of deployable blankets) can be determined from its stowed volume within a launch vehicle. The data suggest that the module packs into a cuboid with a square face of side 4.57 m and a thickness of 0.51 m—the result is a packing density of
\[\begin{align} {(Packing \, density)}_{SAW} &= \frac{35 \times 12}{4.57^2 \times 0.51}\\ &= 39.43 \, m^2/m^3 \end{align}\]This is far lower than the packing density desired by Starcloud to fit the solar arrays into a single Starship. The number of launches can thus be computed from the ratio of the Starcloud and SAW packing densities—a dimensionless number—which is 3.24.
\[N_{launches,volume} = \lceil \frac{128}{39.43} \rceil = 4 \text{ launches }\]This means we would need 4 launches using SAW technology. If we used the more realistic packing density (160 m²/m³), we would need 5 launches.
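The launch count is just the ceiling of the packing-density ratio; a minimal sketch covering both usable-volume assumptions:

```python
import math

# SAW stowed packing density: 35 m x 12 m deployed, 4.57 m square face, 0.51 m thick
rho_saw = (35 * 12) / (4.57**2 * 0.51)     # ~39.4 m^2/m^3

for rho_desired in (128, 160):             # 100% and 80% usable fairing volume
    print(rho_desired, math.ceil(rho_desired / rho_saw))
# 128 -> 4 launches; 160 -> 5 launches
```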
Each SAW wing has a documented mass of 1,100 kg and deploys over a 420 m² area (35 m × 12 m). This yields the mass density characteristic of the Z-fold SAW technology:
\[\begin{align} \rho_{mass,SAW} &= \frac{m_{SAW}}{A_{deployed,SAW}} \\ &= \frac{1100 \text{ kg}}{35 \times 12 \text{ m}^2} \\ &= \frac{1100 \text{ kg}}{420 \text{ m}^2} = 2.62 \text{ kg/m}^2 \end{align}\]This mass density reflects the integrated Z-fold system including photovoltaic cells, accordion deployment mechanisms, structural backing, electrical distribution networks, and mounting hardware designed for long-term space operations.
Applying this empirical mass density to Starcloud’s 128,000 m² solar array requirement:
\[\begin{align} m_{Starcloud,solar,SAW} &= A_{required} \times \rho_{mass,SAW} \\ &= 128,000 \text{ m}^2 \times 2.62 \text{ kg/m}^2 \\ &= 335,360 \text{ kg} = 335.4 \text{ tonnes} \end{align}\]Comparing mass-limited versus volume-limited launch requirements reveals a critical distinction from the iROSA case:
\[\begin{align} N_{launches,volume} &= 5 \text{ launches (realistic packing analysis)} \\ N_{launches,mass} &= \lceil \frac{335.4}{100} \rceil = 4 \text{ launches} \end{align}\]The ISS Roll-Out Solar Arrays (iROSA) were launched in two pairs in June 2021 and November 2022 to augment the first SAWs, which were launched in 2000 and 2006 and attached to the P6 and P4 trusses. These SAWs were noticeably degrading towards the end of their 15-year design life. Six of the intended 8 iROSAs have been added in the following sequence:
Each iROSA generates nearly 20 kilowatts (kW) of power from two rolled-up solar blankets. When fully extended, the pair spans 18.3 metres in length and 6 metres in width. The gap between the blankets does not appear to be in the public domain but appears negligible compared to that between the pair of SAW blankets; the specifications of the solar cells and their arrangement are also not known.
So, the power density here is based purely on the wing span, which works out to about 182.1 W/m² from:
\[\begin{align} {(Power \, Density)}_{iROSA} &= \frac{Power}{Area}\\ &= \frac{20000}{18.3 \times 6}\\ &= 182.1 \, W/m^2 \end{align}\]So, to achieve Starcloud’s assumed power density of 312 W/m², their solar technology would need to be 1.71x more efficient than iROSA.
iROSA canisters stowed in cargo Dragon’s trunk. Source
As done with the SAW module analysis (i.e., a pair of deployable blankets), we can use the stowed volume of an iROSA module to compute the number of launches. Sadly, this data is also not public, but estimates can be made by examining images of the modules stowed in a cargo Dragon, with humans alongside for scale. The iROSAs pack into a cargo Dragon trunk, with each blanket rolled into a canister; the length of this canister is assumed to be 3 m, a dimension that remains unchanged as the blanket rolls out. Each blanket's 18.3 m deployed span can be assumed to pack into a canister of diameter 0.3 m. Two such canisters per iROSA lead to a packing density of
\[\begin{align} {(Packing \, Density)}_{iROSA} &= \frac{18.3 \times 6}{2\pi \times 0.15^2 \times 3} \\&= 258.78 \, m^2/m^3 \end{align}\]Again, one can determine the number of launches for the SDC’s solar panels by computing the ratio of the desired and iROSA packing densities. At 0.49, this is well under a single Starship launch.
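The canister-geometry estimate can be sketched as follows; the 0.3 m diameter and 3 m length are my assumptions read off the imagery:

```python
import math

A_deployed = 18.3 * 6              # m^2 per iROSA (pair of blankets)
r, L = 0.15, 3.0                   # m, assumed canister radius and length
V_stowed = 2 * math.pi * r**2 * L  # two canisters per iROSA

rho_irosa = A_deployed / V_stowed  # ~258.8 m^2/m^3
print(rho_irosa, 128 / rho_irosa)  # ratio ~0.49: under one Starship by volume
```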
While the packing density analysis suggested favorable volumetric efficiency for iROSA technology, the mass constraint presents a secondary limitation that requires careful examination. Each iROSA unit has a documented mass of 340 kg and deploys 109.8 m² of active solar collection area yielding a mass density for modern roll-out solar technology:
\[\begin{align} \rho_{mass,iROSA} &= \frac{m_{iROSA}}{A_{deployed,iROSA}} \\ &= \frac{340 \text{ kg}}{18.3 \times 6 \text{ m}^2} \\ &= \frac{340 \text{ kg}}{109.8 \text{ m}^2} = 3.10 \text{ kg/m}^2 \end{align}\]This mass density reflects the integrated system including photovoltaic cells, deployment mechanisms, structural backing, electrical harnesses, and mounting hardware required for autonomous space deployment. Scaling this empirical mass density to Starcloud’s 128,000 m² solar array requirement reveals the magnitude of the mass challenge for their power generation system:
\[\begin{align} m_{Starcloud,solar} &= A_{required} \times \rho_{mass,iROSA} \\ &= 128,000 \text{ m}^2 \times 3.10 \text{ kg/m}^2 \\ &= 396,800 \text{ kg} = 396.8 \text{ tonnes} \end{align}\]Comparing the mass-limited and volume-limited launch requirements reveals the constraining factor for solar array deployment:
\[\begin{align} N_{launches,volume} &= \frac{V_{required}}{V_{Starship}} = \frac{494.6}{1000} \approx 1 \text{ launch} \\ N_{launches,mass} &= \lceil \frac{m_{Starcloud,solar}}{m_{Starship,payload}} \rceil \\ &= \lceil \frac{396.8}{100} \rceil = 4 \text{ launches} \end{align}\]The analysis reveals that mass emerges as the limiting constraint for solar array deployment, requiring 4 launches compared to the single launch suggested by volumetric analysis alone. This represents a 4× penalty, where mass considerations override the favorable packing density characteristics of roll-out solar technology.
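The binding constraint is simply the larger of the volume-based and mass-based launch counts; a sketch using the iROSA figures above:

```python
import math

A_req = 128_000                    # m^2 of array for 40 MW
rho_pack = 258.78                  # m^2/m^3, iROSA packing density
rho_mass = 340 / 109.8             # kg/m^2, iROSA areal mass density (~3.10)

n_volume = math.ceil((A_req / rho_pack) / 1000)   # 1000 m^3 fairing -> 1
n_mass = math.ceil(A_req * rho_mass / 100_000)    # 100 t payload   -> 4
print(n_volume, n_mass, max(n_volume, n_mass))    # mass binds: 4 launches
```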
The comparison between SAW and iROSA mass densities reveals important technological evolution patterns:
Despite being newer technology, iROSA exhibits higher mass density due to the robust deployment mechanisms required for roll-out architecture. However, iROSA’s superior volumetric packing efficiency (258.78 vs 39.43 m²/m³) more than compensates for this mass penalty, resulting in overall lower launch requirements.
Our calculations thus far are summarised below, where the pessimistic launch cost is based on a $100M Starship launch and an optimistic cost uses Starcloud’s $5M launch cost assumption:
Mass constraints dominate solar array deployment requirements
Array Design | Launches | Optimistic cost ($) | Pessimistic cost ($) |
---|---|---|---|
Z-fold | 5 | 25M | 500M |
Roll-out | 4 | 20M | 400M |
The iROSA analysis reveals that mass, not volume, constrains solar array deployment—requiring 4 launches despite a favorable packing density. Thermal management, as we will see next, flips this pattern: the radiators' poor packing efficiency makes volume the binding constraint. Either way, the launch requirements for large-scale space infrastructure far exceed a single Starship.
This begs the question: what do the launch estimates for the radiators look like?
Since space lacks convective and conductive heat transfer, all waste heat—the full 40 MW thermal load generated by the data centre—must be rejected by radiative cooling. This is typically done via deployable surfaces. Starcloud propose a radiator system operating near 20 °C. Its theoretical limit is governed by the Stefan–Boltzmann Law, which tells us that
\[P_{\text{body}} = \varepsilon \cdot \sigma \cdot T^4\]where the emissivity is Starcloud's assumed \(\varepsilon = 0.92\) (a perfect black body would have \(\varepsilon = 1\)); the Stefan-Boltzmann constant is \(\sigma=5.67 \times 10^{-8} \, \text{W}\text{m}^{−2}\text{K}^{−4}\); and the radiator temperature is \(T = 293.15\,\text{K}\). We can then determine that the heat radiated from both sides of a \(1 \, \text{m}^2\) plate is
\[P_{\text{radiator}} = 2 \, P_{\text{body}}= 770.48\,\text{W}\]So, with practical adjustments for real materials and environmental exposure, the net heat radiated by the plate also depends on the heat absorbed from the Sun \((P_{\text{Sun}})\) and Earth \((P_{\text{Earth}})\).
The net heat radiated is then:
\[P_{\text{net}} = P_{\text{radiator}} - (P_{\text{Sun}} + P_{\text{Earth}})\]One side of the radiator is in direct sunlight so the heat it absorbs is calculated as:
\[\begin{align} P_{\text{Sun}} &= \alpha \cdot S \\ &= 0.09 \cdot 1366 \\ &= 122.94 \, \text{W/m}^2 \end{align}\]where the plate's absorptivity is \(\alpha = 0.09\) and the solar irradiance in space is \(S = 1366 \,\text{W}/\text{m}^2\). The thermal energy absorbed by the plate from the Earth's albedo and blackbody radiation is determined from:
\[\begin{align} P_{\text{Earth}} &= \alpha \cdot F \cdot (Al \cdot S + \sigma \cdot T_{\text{Earth}}^4) \\ &= 0.09 \cdot 0.25 \cdot (0.3 \cdot 1366 + 5.67 \times 10^{-8} \cdot 253.15^4) \\ &= 14.46 \,\text{W/m}^2 \end{align}\]where the additional terms are the view factor \((F = 0.25)\), Earth's black-body temperature \((T_{\text{Earth}} = -20\,°\text{C} = 253.15\,\text{K})\), and Earth's albedo \((Al = 0.3)\).
The net radiative power per square metre of a passive radiator system operating near 20 °C is therefore:
\[\begin{align} P_{\text{rad, net}} &= \underbrace{770.48}_{\text{Radiated (both sides)}} - \underbrace{122.94}_{\text{Sun absorbed}} - \underbrace{14.46}_{\text{Earth absorbed}}\\ &= \boxed{633.08\,\text{W/m}^2} \end{align}\]This can be used to compute the area needed to radiate 40 MW of waste heat (assuming as much heat is generated as electricity is produced):
\[A_{\text{rad}} = \frac{40{,}000{,}000}{633.08} \approx \boxed{63{,}183.1\,\text{m}^2}\]This is roughly 0.063 km² of radiator surface, which the whitepaper claims also packs into the same Starship fairing volume of 1,000 m³. While we already know this is unlikely—because of the 4-5 launches already needed for the solar arrays—a single Starship launch would dictate that the radiator has an areal packing density of 63.18 m²/m³.
\[\begin{align} {(Packing \, density)}_{desired} &=\frac{63{,}183}{1000}\\ &= 63{.}18 \, m^2/m^3 \end{align}\]Again, as was the case with the solar arrays, a more realistic estimate would be based on 80% of the fairing volume being usable which would lead to 79 m²/m³ as the areal packing density.
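The whole heat-balance chain can be reproduced in a few lines; the emissivity, absorptivity, view factor, and temperatures are the assumed values quoted above:

```python
SIGMA = 5.67e-8   # W m^-2 K^-4, Stefan-Boltzmann constant

def net_flux(T_rad_C, eps=0.92, alpha=0.09, S=1366, F=0.25, albedo=0.3,
             T_earth_C=-20):
    """Net heat rejected per m^2 by a two-sided plate in LEO."""
    T = T_rad_C + 273.15
    p_rad = 2 * eps * SIGMA * T**4        # both faces radiate
    p_sun = alpha * S                     # one face in direct sunlight
    p_earth = alpha * F * (albedo * S + SIGMA * (T_earth_C + 273.15)**4)
    return p_rad - p_sun - p_earth

p = net_flux(20)          # ~633 W/m^2 at Starcloud's 20 °C
print(p, 40e6 / p)        # ~63,200 m^2 of radiator for 40 MW
print(net_flux(-40))      # ~171 W/m^2 at the ISS's -40 °C (used below)
```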
The ratio of solar to radiator areas is then:
\[\frac{A_{\text{solar}}}{A_{\text{rad}}} = \frac{128{,}000}{63{,}183} \approx 2.02\]which clarifies Starcloud's statement that the radiator area needed is indeed roughly half that of the solar array. However, the power-density ratio of the radiator to the solar arrays tells us that the radiator rejects only about twice as much power per square metre as the solar panels generate.
For an idealized two-sided blackbody plate, this ratio is 2.68, which is closer to the paper's statement of “roughly three times the electricity generated per square meter by solar panels”. It is thus important to clarify that, under Starcloud's assumed radiator and environmental parameters, a radiator's heat rejection per square metre is approximately twice, not three times, the power the solar array generates. Next, we estimate the sizing based on the ISS's benchmarks.
The ISS's systems and experiments consume a large amount of electrical power, almost all of which is converted to heat. So, ~120 kW of electrical power becomes waste heat that must be radiated away. Its radiators operate at -40 °C—much colder than Starcloud's assumed 20 °C. Keeping the same emissivity as the Starcloud system (\(\epsilon=0.92\)), the heat radiated per unit area is \(P_\text{ISS} = 308.7\,\text{W/m²}\), and the net heat radiated, after accounting for environmental effects, is \((P_{\text{ISS}})_\text{net} = 171.3\,\text{W/m²}\). To achieve thermal control and maintain components at acceptable temperatures, this heat requires radiators with an area of
\[A_{\text{required}} = \frac{120{,}000\,\text{W}}{171.3\,\text{W/m}^2} \approx 700\,\text{m}^2\]For this purpose the ISS makes use of two systems. The Active Thermal Control System (ATCS) handles heat rejection when the combination of the ISS external environment and the generated heat loads exceeds the capabilities of the Passive Thermal Control System (PTCS). The PTCS is made of external surface materials, insulation such as MLI, and heat pipes. The ATCS comprises equipment that provides thermal conditioning via fluid flow (e.g., ammonia and water) and includes pumps, radiators, heat exchangers, tanks, and cold plates.
The ATCS mechanically pumps fluid in closed-loop circuits to perform three tasks: collect heat, transport heat, and reject heat. Waste heat is removed via two structures—cold plates and heat exchangers—both cooled by circulating ammonia loops on the outside of the station. The heated ammonia circulates through large radiators located on the exterior of the Space Station, releasing the heat by radiation to space, which cools the ammonia as it flows through the radiators.
From a practical standpoint, the ATCS radiates heat generated by two sources—from the solar arrays and from inside the ISS modules. The Photovoltaic Thermal Control System (PVTCS) handles the former whereas the latter heat is radiated by the Internal Active Thermal Control System (IATCS) and External Active Thermal Control System (EATCS). They are discussed further below:
The station's heat-rejection chain starts with an internal, non-toxic, water coolant loop—the Internal Active Thermal Control System (IATCS)—used to cool and dehumidify the atmosphere. It transfers collected excess heat from electronic and experiment equipment to the Interface Heat Exchangers. From these heat exchangers, ammonia is pumped into external radiators—the External Active Thermal Control System (EATCS)—that emit heat as infrared radiation, and the ammonia cycles back into the station. In this way, the EATCS cools the US modules, Kibō, Columbus, and the main power distribution electronics of the S0, S1 and P1 trusses. It can reject up to 70 kW, far more than the 14 kW of the Early EATCS (or EEATCS).
The Photovoltaic Thermal Control System (PVTCS) consists of ammonia loops that collect excess heat from the Electrical Power System (EPS) components in the Integrated Equipment Assemblies (IEAs) on P4 and eventually S4, and transport this heat to the PV radiators (located on P4, P6, S4 and S6) where it is rejected to space. Each PVTCS loop consists of ammonia coolant, eleven coldplates, two Pump Flow Control Subassemblies (PFCS) and one Photovoltaic Radiator (PVR).
The ISS thermal control systems provide empirical mass data for space-qualified radiator technology. Each system exhibits distinct mass characteristics reflecting their different operational requirements and design constraints.
EATCS Radiator: The EATCS comprises 6 Orbital Replacement Units, each of which has a documented mass of 1,122 kg and deploys 79.2 m² of radiating surface. This yields a mass density of:
\[\begin{align} \rho_{mass,ORU} &= \frac{m_{ORU}}{A_{deployed,ORU}} \\ &= \frac{1122 \text{ kg}}{79.2 \text{ m}^2} = 14.16 \text{ kg/m}^2 \end{align}\]PVTCS Radiator (PVR): The Photovoltaic Thermal Control System comprises 4 radiator panels; being smaller and dedicated to solar array cooling, they exhibit a higher mass density. Each PVR unit masses 741 kg across 42.4 m² of radiator surface area:
\[\begin{align} \rho_{mass,PVR} &= \frac{m_{PVR}}{A_{deployed,PVR}} \\ &= \frac{741 \text{ kg}}{42.4 \text{ m}^2} = 17.48 \text{ kg/m}^2 \end{align}\]Combined Systems Mass Performance: The complete ISS thermal control system comprises the PVTCS units and ATCS, yielding a system-level mass density of:
\[\begin{align} m_{ISS,total} &= 4 \times 741 + 6 \times 1122 = 9696 \text{ kg} \\ \rho_{mass,ISS} &= \frac{9696}{645} = 15.03 \text{ kg/m}^2 \end{align}\]Scaling these empirical mass densities to Starcloud’s 63,190 m² radiator requirement reveals the magnitude of the mass challenge:
\[\begin{align} m_{Starcloud,ORU} &= 63190 \times 14.16 = 894.9 \text{ tonnes} \\ m_{Starcloud,PVR} &= 63190 \times 17.48 = 1103.4 \text{ tonnes} \\ m_{Starcloud,ISS} &= 63190 \times 15.03 = 949.8 \text{ tonnes} \end{align}\]With Starship’s 100-tonne payload capacity to LEO, the mass-limited launch requirements become:
\[\begin{align} N_{launches,ORU} &= \lceil \frac{894.9}{100} \rceil = 9 \text{ launches} \\ N_{launches,PVR} &= \lceil \frac{1103.4}{100} \rceil = 12 \text{ launches} \\ N_{launches,ISS} &= \lceil \frac{949.8}{100} \rceil = 10 \text{ launches} \end{align}\]Our radiator mass analysis reveals the fundamental constraint limiting Starcloud’s single-launch architecture. The calculations are summarised below, where the pessimistic launch cost is based on a $100M Starship launch and an optimistic cost uses Starcloud’s $5M launch cost assumption:
ISS radiators launch manifest based on mass estimates.
Radiator Technology | Launches | Optimistic Cost ($) | Pessimistic Cost ($) |
---|---|---|---|
EATCS (ORU) | 9 | 45M | 900M |
PVTCS (PVR) | 12 | 60M | 1.2B |
ISS Combined | 10 | 50M | 1B |
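As a cross-check of the mass scaling behind this table (unit masses and areas as quoted above):

```python
import math

A_rad = 63_190   # m^2 of radiator needed for 40 MW

for name, rho in [("EATCS (ORU)", 1122 / 79.2),
                  ("PVTCS (PVR)", 741 / 42.4),
                  ("ISS combined", 9696 / 645)]:
    mass_t = A_rad * rho / 1000                      # tonnes
    print(name, round(mass_t, 1), math.ceil(mass_t / 100))
# EATCS ~895 t -> 9 launches; PVTCS ~1104 t -> 12; combined ~950 t -> 10
```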
Launching the radiators for a 40 MW SDC requires at least 9 launches if we are building on ISS technology. This also shows that the solar array mass density of 3.10 kg/m² is remarkably efficient compared to radiator systems (14-17 kg/m²), reflecting the fundamental difference between power generation and thermal management technologies. Solar arrays primarily consist of thin photovoltaic films with minimal structural requirements, while radiators demand substantial mass for heat transfer fluids, thermal exchange surfaces, and robust mounting systems. The radiator mass suggests that revolutionary advances in thermal management technology—achieving a 90% mass reduction relative to ISS systems—would be necessary to approach single-launch viability. Such improvements go beyond incremental materials advances and would represent unprecedented engineering breakthroughs for space-qualified thermal control systems.
The radiator packing density calculation requires careful consideration of how panels fold and stack when stowed. For the ISS radiator systems, we can model the folding geometry as follows. Each radiator unit consists of multiple panels that fold accordion-style along their longest dimension. When stowed, these panels stack atop one another, creating a total thickness equal to the number of panels multiplied by the individual panel thickness. Unlike the solar panel analysis, where a fixed 0.51 m stowed thickness was inherited from empirical data, radiator panel thicknesses are not well documented publicly. We examine each of the radiator designs below:
EATCS Radiator (ORU) Analysis: The External Active Thermal Control System radiator deploys as a 23.3 m × 3.4 m array with 8 panels folding along the 23.3 m dimension:
\[\begin{align} W_{stowed} &= W_{deployed} = 3.4 \text{ m} \\ L_{stowed} &= \frac{L_{deployed}}{N_{panels}} = \frac{23.3}{8} = 2.91 \text{ m} \\ T_{stowed} &= N_{panels} \times t_{panel} = 8 \times 0.2 = 1.6 \text{ m} \end{align}\] \[\begin{align} V_{ORU} &= 3.4 \times 2.91 \times 1.6 = 15.84 \text{ m}^3 \\ \rho_{ORU} &= \frac{79.2}{15.84} = 5 \text{ m}^2/\text{m}^3 \end{align}\]PVTCS Radiator (PVR) Analysis: The Photovoltaic Thermal Control System radiator deploys as a 3.12 m × 13.6 m array consisting of 7 individual panels. When folded, each panel maintains its 3.12 m width but reduces its length to 13.6/7 = 1.94 m. Assuming each panel has a thickness of 0.2 m, the stowed configuration becomes:
\[\begin{align} W_{stowed} &= W_{deployed} = 3.12 \text{ m} \\ L_{stowed} &= \frac{L_{deployed}}{N_{panels}} = \frac{13.6}{7} = 1.94 \text{ m} \\ T_{stowed} &= N_{panels} \times t_{panel} = 7 \times 0.2 = 1.4 \text{ m} \end{align}\]The stowed volume and packing density for a single PVR unit are:
\[\begin{align} V_{PVR} &= W_{stowed} \times L_{stowed} \times T_{stowed} \\ &= 3.12 \times 1.94 \times 1.4 = 8.47 \text{ m}^3 \\ \rho_{PVR} &= \frac{A_{deployed}}{V_{stowed}} = \frac{42.4}{8.47} = 5 \text{ m}^2/\text{m}^3 \end{align}\]Combined ISS Performance: For their combined performance on the ISS, we account for the 6 ORU radiators of the EATCS and the 4 radiators of the PVTCS to yield:
\[\begin{align} V_{ISS,total} &= 4 \times V_{PVR} + 6 \times V_{ORU} \\ &= 4 \times 8.47 + 6 \times 15.84 = 128.9 \text{ m}^3 \\ \rho_{ISS,combined} &= \frac{645}{128.9} = 5.00 \text{ m}^2/\text{m}^3 \end{align}\]Despite variations in their deployed areas, these radiator systems have the same packing densities. The number of launches is then computed as 13 from the ratio of Starcloud’s desired packing density to the ISS benchmarks above
\[N_{launches} = \lceil \frac{63.18}{5} \rceil = 13 \text{ launches}\]The volume-based launch count (13) is close to the mass-based counts (9-12), but the math shows that volume emerges as the dominant constraint for large-scale radiator deployment. This reflects the fundamental physics of thermal management systems, which require substantial structural mass (for heat transfer fluids, manifolds, mounting hardware, and thermal exchange surfaces) and also pack less efficiently than roll-out solar arrays. Recall that 63.18 m²/m³ assumes all of the payload bay is accessible—more realistically the requirement could be 79 m²/m³, which means up to 16 launches.
While this analysis demonstrates that volume, not mass, is the critical limiting factor for large-scale radiator deployment, it should be noted that this depends on the assumed panel thickness of 0.2 m. Reducing panel thickness to 0.05 m cuts the volume-defined launches to 4, but then radiator mass becomes the critical limiting factor, so we will still need 9-12 launches using ISS-like technology. This transforms the claimed $5M single-launch deployment into a $1B+ multi-launch operation using realistic launch costs and flight-proven thermal management technology.
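The thickness sensitivity is easy to check with a small folding-geometry model (panel dimensions from the ORU specs above; the thickness values are assumptions):

```python
import math

def packing_density(L_dep, W_dep, n_panels, t_panel):
    """Deployed area over accordion-folded stowed volume."""
    V_stowed = W_dep * (L_dep / n_panels) * (n_panels * t_panel)
    return (L_dep * W_dep) / V_stowed    # algebraically reduces to 1 / t_panel

for t in (0.2, 0.05):
    rho = packing_density(23.3, 3.4, 8, t)   # ORU geometry; PVR gives the same
    print(t, rho, math.ceil(63.18 / rho))
# t=0.2 -> 5 m^2/m^3 -> 13 launches; t=0.05 -> 20 m^2/m^3 -> 4 launches
```

Note that the folded density collapses to 1/t_panel, which is why the ORU and PVR packing densities came out identical: the assumed 0.2 m thickness is doing all the work.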
Having established that radiators dominate the SDC launch manifest over power systems, it is also worth deriving the SDC's implicit server mass assumptions and comparing them to industry benchmarks. Their total compute deployment of ~40 MW is to be achieved using 300 Nvidia GB200 NVL72 racks, each needing 120 kW. This is claimed to take up 50% of Starship's payload bay volume. First, I verify that the calculations align with the actual specs of the rack:
\[\begin{align} P_{effective,per-rack} &= \frac{P_{total}}{N_{racks}} \\ &= \frac{40{,}000 \text{ kW}}{300} \\ &= 133.3 \text{ kW per rack} \end{align}\]This is closer to the stated power needs of 132 kW per rack. The rack apparently weighs 1.36 metric tonnes. So, for 300 racks we have:
\[\begin{align} m_{servers,Starcloud} &= N_{racks} \times m_{rack} \\ &= 300 \times 1{,}360 \text{ kg} \\ &= \boxed{408 \text{ tonnes}} \end{align}\]These will require 5 Starship launches: \(N_{launches,servers} = \lceil \frac{408}{100} \rceil = 5 \text{ launches}\)
The racks alone exceed the capacity of a single Starship launch several times over.
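The rack arithmetic as a sketch (rack count, power, and mass as quoted from the whitepaper):

```python
import math

n_racks, m_rack = 300, 1360        # racks, kg per rack
print(40_000 / n_racks)            # 133.3 kW effective per rack
print(n_racks * m_rack / 1000)     # 408 tonnes of servers
print(math.ceil(n_racks * m_rack / 100_000))   # 5 Starship launches
```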
So after this analysis, the launch profile looks like this:
Launch manifest for the complete 40 MW SDC.
Component | Mass (tonnes) | Launches | Optimistic Cost ($M) | Pessimistic Cost ($B) |
---|---|---|---|---|
Servers | 408.0 | 4-5 | 20-25 | .4-.5 |
Solar Arrays | 396.8 | 4-5 | 20-25 | .4-.5 |
Radiators | 894.9-1103.4 | 9-16 | 45-80 | .9-1.6 |
Total System | 1,699.7-1,908.2 | 17-22 | 85-110 | 1.7-2.2 |
Even using Starcloud's own optimistic specifications, the support infrastructure still outweighs the servers by nearly 4 times, requiring up to 22 total launches versus their claimed single launch—a 22× cost increase, from $5M to $110M at an optimistic launch cost ($5M/launch), or $2.2B at a pessimistic $100M/launch. While there is more one could analyse, this demonstrates that regardless of server mass assumptions—commercial rack deployment (408 tonnes) or optimized space hardware (maybe 150 tonnes?)—the fundamental constraint remains thermal management volume, which systematically dominates launch requirements for large-scale space-based computing systems. This work is not to say that SDCs have no value, but that the case for SDCs needs more realistic techno-economic analysis.
Space roboticist here.
As with a lot of things, it isn't the initial outlay, it's the maintenance costs. Terrestrial datacenters have parts fail and get replaced all the time. The mass analysis given here -- which appears quite good, at first glance -- doesn't include any mass, energy, or thermal system numbers for the infrastructure you would need to replace failed components.
As a first cut, this would require:
- an autonomous rendezvous and docking system
- a fully railed robotic system, e.g. some sort of robotic manipulator that can move along rails and reach every card in every server in the system, which usually means a system of relatively stiff rails running throughout the interior of the plant
- CPU, power, comms, and cooling to support the above
- importantly, the ability of the robotic servicing system to replace itself. In other words, it would need to be at least two-fault tolerant -- which usually means dual wound motors, redundant gears, redundant harness, redundant power, comms, and compute. Alternately, two or more independent robotic systems that are capable of not only replacing cards but also of replacing each other.
- regular launches containing replacement hardware
- ongoing ground support staff to deal with failures
The mass analysis also doesn't appear to include the massive number of heat pipes you would need to transfer the heat from the chips to the radiators. For an orbiting datacenter, that would probably be the single biggest mass allocation.
I've had actual, real-life deployments in datacentres where we just left dead hardware in the racks until we needed the space, and we rarely did. Typically we'd visit a couple of times a year, because it was cheap to do so, but it'd have been totally viable to let failures accumulate over a much longer time horizon.
Failure rates tend to follow a bathtub curve, so if you burn-in the hardware before launch, you'd expect low failure rates for a long period and it's quite likely it'd be cheaper to not replace components and just ensure enough redundancy for key systems (power, cooling, networking) that you could just shut down and disable any dead servers, and then replace the whole unit when enough parts have failed.
Exactly what I was thinking when the OP comment brought up "regular launches containing replacement hardware", this is easily solvable by actually "treating servers as cattle and not pets" whereby one would simply over-provision servers and then simply replace faulty servers around once per year.
Side: Thanks for sharing about the "bathtub curve"—TIL, and I'm surprised I haven't heard of this before, especially as it's related to reliability engineering (from searching HN (Algolia), no post about the bathtub curve has crossed 9 points).
https://accendoreliability.com/the-bath-tub-curve-explained/ is an interesting breakdown of bath tub curve dynamics for those curious!
Wonder if you could game that in theory by burning in the components on the surface before launch or if the launch would cause a big enough spike from the vibration damage that it's not worth it.
I suspect you'd absolutely want to burn in before launch, maybe even including simulating some mechanical stress to "shake out" more issues, but it is a valid question how much burn in is worth doing before and after launch.
Vibration testing is a completely standard part of space payload pre-flight testing. You would absolutely want to vibe-test (no, not that kind) at both a component level and fully integrated before launch.
PSA: do not vibe-code the hardware controller for your vibration testing rig. This does not pass the vibe test.
Maybe they are different types of failure modes. Solar panel semiconductors hate vibration.
And then, there is of course radiation trouble.
So those two kinds of burn-in require a launch to space anyway.
Ah, the good old BETA distribution.
Programming and CS people somehow rarely look at that.
The analysis has zero redundancy for either servers or support systems.
Redundancy is a small issue on Earth, but completely changes the calculations for space because you need more of everything, which makes the already-unfavourable space and mass requirements even less plausible.
Without backup cooling and power one small failure could take the entire facility offline.
And active cooling - which is a given at these power densities - requires complex pumps and plumbing which have to survive a launch.
The whole idea is bonkers.
IMO you'd be better off thinking about a swarm of cheaper, simpler, individual serversats or racksats connected by a radio or microwave comms mesh.
I have no idea if that's any more economic, but at least it solves the most obvious redundancy and deployment issues.
> The analysis has zero redundancy for either servers or support systems.
The analysis is a third party analysis that among other things presumes they'll launch unmodified Nvidia racks, which would make no sense. It might be this means Starcloud are bonkers, but it might also mean the analysis is based on flawed assumptions about what they're planning to do. Or a bit of both.
> IMO you'd be better off thinking about a swarm of cheaper, simpler, individual serversats or racksats connected by a radio or microwave comms mesh.
This would get you significantly less redundancy other than against physical strikes than having the same redundancy in a single unit and letting you control what feeds what, the same way we have smart, redundant power supplies and cooling in every data center (and in the racks they're talking about using as the basis).
If power and cooling die faster than the servers, you'd either need to overprovision or shut down servers to compensate, but it's certainly not all or nothing.
There is a neat solve for the thermal problem that York Space Systems has been advocating (based on Russian tech)… put everything in an enclosure.
https://www.yorkspacesystems.com/
Short version: make a giant pressure vessel and keep things at 1 atm. Circulate air like you would do on earth. Yes, there is still plenty of excess heat you need to radiate, but dramatically simplifies things.
Many small satellites also increases the surface area for cooling
Like a neo-fractal surface? There's no atmosphere to wear it down.
even a swarm of satellites has risk factors. we treat space as if it were empty (it's in the name) but there's debris left over from previous missions. this stuff orbits at a very high velocity, so if an object greater than 10cm is projected to get within a couple kilometers of the ISS, they move the ISS out of the way. they did this in April and it happens about once a year.
the more satellites you put up there, the more it happens, and the greater the risk that the immediate orbital zone around Earth devolves into an impenetrable whirlwind of space trash, aka Kessler Syndrome.
serious q: how much extra failure rate would you expect from the physical transition to space?
on one hand, I imagine you'd rack things up so the whole rack/etc moves as one into space, OTOH there's still movement and things "shaking loose" plus the vibration, acceleration of the flight and loss of gravity...
Yes, an orbital launch probably resets the bathtub to some degree.
I suspect the thermal system would look very different from a terrestrial component. Fans and connectors can shake loose - but do nothing in space.
Perhaps the server would be immersed in a thermally conductive resin to avoid parts shaking loose? If the thermals are taken care of by fixed heat pipes and external radiators - non thermally conductive resins could be used.
Connectors have to survive the extreme vibration of a rocket launch. Parts routinely shake off boards in testing even when using non-COTS space rated packaging designed for extreme environments. That amplifies the cost of everything.
The Russians are the only ones who package their unmanned platform electronics in pressure vessels. Everyone else operates in vacuum, so no fans.
>>immersed in a thermally conductive resin
sounds heavy
The original article even addresses this directly. Plus, hardware turns over fast enough that you'll simply be replacing modules with a smattering of dead servers with entirely new generations anyways.
Really? Even radiation hardened hardware? Aren’t there way higher size floors on the transistors?
It would be interesting to see if the failure rate across time holds true after a rocket launch and time spent in space. My guess is that it wouldn’t, but that’s just a guess.
I think it's likely the overall rate would be higher, and you might find you need more aggressive burn-in, but even then you'd need an extremely high failure rate before it's more efficient to replace components than writing them off.
The bathtub curve isn’t the same for all components of a server though. Writing off the entire server because a single ram chip or ssd or network card failed would limit the entire server to the lifetime of the weakest part. I think you would want redundant hot spares of certain components with lower mean time between failures.
We do often write off an entire server because a single component fails because the lifetime of the shortest-lifetime components is usually long enough that even on-earth with easy access it's often not worth the cost to try to repair. In an easy-to-access data centre, the component most likely to get replaced would be hot-swappable drives or power supplies, but it's been about 2 decades since the last time I worked anywhere where anyone bothered to check for failed RAM or failed CPUs to salvage a server. And lot of servers don't have network devices you can replace without soldering, and haven't for a long time outside of really high end networking.
And at sufficient scale, once you plan for that it means you can massively simplify the servers. The amount of waste a sever case suitable for hot-swapping drives adds if you're not actually going to use the capability is massive.
I'd naively assume that the stress of launch (vibration, G-forces) would trigger failures in hardware that had been working on the ground. So I'd expect to see a large-ish number of failures on initial bringup in space.
Electronics can be extremely resilient to vibration and g forces. Self guided artillery shells such as the M982 Excalibur include fairly normal electronics for GPS guidance. https://en.wikipedia.org/wiki/M982_Excalibur
On the ground vibration testing is a standard part of pre-launch spacecraft testing. This would trigger most (not all) vibration/G-force related failures on the ground rather than at the actual launch.
The big question mark is how many failures you cause and catch on the first cycle and how much you're just putting extra wear on the components that pass the test the first time and don't get replaced.
Yes. I think I read a blogpost from Backblaze about running their Red Pod rack mounted chassis some 10 years ago.
They would just keep the failed drives in the chassi. Maybe swap out the entire chassi if enough drives died.
A new meaning to the term "space junk"
Appreciate the insights, but I think failing hardware is the least of their problems. In that underwater pod trial, MS saw lower failure rates than expected (nitrogen atmosphere could be a key factor there).
> The company only lost six of the 855 submerged servers versus the eight servers that needed replacement (from the total of 135) on the parallel experiment Microsoft ran on land. It equates to a 0.7% loss in the sea versus 5.9% on land.
6/855 servers over 6 years is nothing. You'd simply re-launch the whole thing in 6 years (with advances in hardware anyways) and you'd call it a day. Just route around the bad servers. Add a bit more redundancy in your scheme. Plan for 10% to fail.
That being said, it's a complete bonkers proposal until they figure out the big problems, like cooling, power, and so on.
Indeed, MS had it easier with a huge, readily available cooling reservoir and a layer of water that additionally protects (a little) against cosmic rays, plus the whole thing had to be heavy enough to sink. An orbital datacenter would be in a opposite situation: all cooling is radiative, many more high-energy particles, and the weight should be as light as possible.
> In that underwater pod trial, MS saw lower failure rates than expected
Underwater pods are the polar opposite of space in terms of failure risks. They don't require a rocket launch to get there, and they further insulate the servers from radiation compared to operating on the surface of the Earth, rather than increasing exposure.
(Also, much easier to cool.)
The biggest difference is radiation. Even in LEO, you will get radiation-caused Single Events that will affect the hardware. That could be a small error or a destructive error, depending on what gets hit.
Power!? Isn't that just PV and batteries? LEO has like a 1.5h orbit.
As mentioned in the article, the Starcloud design requires solar arrays that are ~2x more efficient than those deployed on the ISS. Simply scaling them up introduces more drag and weight problems, as do the batteries needed to cover the ~45 minutes of darkness per orbit.
It's a Datacenter... I guess solar is what they're planning to use, but the array will be so large it'll have its own gravity well
Had they said "the array will be so large it'll have its own gravity." then you'd be making a valid point.
But they didn't say just "gravity", they said "gravity well".
> "First, let us simply define what a gravity well is. A gravity well is a term used metaphorically to describe the gravitational pull that a large body exerts in space."
- https://medium.com/intuition/what-are-gravity-wells-3c1fb6d6...
So they weren't suggesting that it will be big enough to get past some boundary below which things don't have gravity, just that smaller things don't have enough gravity to matter.
Given all mass has gravity, and gravity can be metaphorically described by a well, all mass has a gravity well. It is not necessary for mass to capture other mass in its gravity. A well is a pleasant and relative metaphor humans can visualize - not a threshold reached after certain mass.
"Large" is almost meaningless in this context. Douglas Adams put it best
> Space is big. Really big. You just won't believe how vastly, hugely, mind-bogglingly big it is. I mean, you may think it's a long way down the road to the chemist, but that's just peanuts to space.
From an education site:
> Everything with mass is able to bend space and the more massive an object is, the more it bends
They start with an explanation of a marble compared to a bowling ball. Both have a gravity well, but one exerts far more influence
https://www.howitworksdaily.com/the-solar-system-what-is-a-g...
Power is solar and cooling is radiators. They did the math on it, its feasible and mostly an engineering problem now.
Did Microsoft do any of that with their submersible tests?
My feeling is that, a bit like starlink, you would just deprecate failed hardware, rather than bother with all the moving parts to replace faulty ram.
Does mean your comms and OOB tools need to be better than the average american colo provider but I would hope that would be a given.
>The mass analysis also doesn't appear to include the massive number of heat pipes you would need to transfer the heat from the chips to the radiators. For an orbiting datacenter, that would probably be the single biggest mass allocation.
And once you remove all the moving parts, you just fill the whole thing with oil rather than air and let heat transfer more smoothly to the radiators.
Oil, like air, doesn't convect well in 0G; you'll need pretty hefty pumps and well designed layouts to ensure no hot spots form. Heat pipes are at least passive and don't depend on gravity.
Mineral oil density is around 900kg / cubic meter.
Not sure this is such a great idea.
I would wager that its lighter than:
Repair robots
Enough air between servers to allow robots to access and replace componentry.
Spare componentry.
An eject/return system.
Heatpipes from every server to the radiators.
A light oil has a density of 700kg per cubic meter. Most common oils are denser.
Then you'd need vanes, agitators, and pumps to keep the oil moving around without forming eddies. These would need to be fairly bulky compared to fans and fan motors.
I'd have to see what an engineering team came up with, but at first glance the liquid solution would be much heavier and likely more maintenance intensive.
I would wager it isn't.
First, oil is much heavier than air.
Second: you still need radiators to dissipate heat that is in oil somehow.
Why does it need to be robots?
On Earth we have skeleton crews maintain large datacenters. If the cost of mass to orbit is 100x cheaper, it’s not that absurd to have an on-call rotation of humans to maintain the space datacenter and install parts shipped on space FedEx or whatever we have in the future.
If you want to have people you need to add in a whole lot of life support and additional safety to keep people alive. Robots are easier, since they don't die so easily. If you can get them to work at all, that is.
Life support can be on the shuttle/transport. Or it can be its own hab… space office ? Space workshop ?
Presumably those needs are handled on the habitat where the orbital maintenance team lives when they aren’t visiting satellite data centers.
Treat each maintenance trip like an EVA (extra vehicular activity) and bring your life support with you.
Thats life support.
That isn't going to last for much longer with the way power density projections are looking.
Consider that we've been at the point where layers of monitoring & lockout systems are required to ensure no humans get caught in hot spots, which can surpass 100C, for quite some time now.
You might be thinking of 100F, a toasty summer day. 100C on the other hand (about 212F) is fatal even in zero humidity.
Well, after a while. A decently hot Finnish sauna...
No, I mean like you crumple to the ground and cook to death if there isn't someone close enough to grab you within a few minutes. 212F ambient air. Like the inside of a meat smoker, but big enough for humans.
DC's aren't quite there yet, but the hot spots that do occur are enough to cause arc flashes which claim hundreds of lives a year.
This sort of work is ideal for robots. We don't do it much on Earth because you can pay a tech $20/hr to swap hardware modules, not because it's hard for robots to do.
Bingo.
It's all contingent on a factor of 100-1000x reduction in launch costs, and a lot of the objections to the idea don't really engage with that concept. That's a cost comparable to air travel (both air freight and passenger travel).
(Especially irritating is the continued assertion that thermal radiation is really hard, and not like something that every satellite already seems to deal with just fine, with a radiator surface much smaller than the solar array.)
It is really hard, and it is something you need to take into careful consideration when designing a satellite.
It is really fucking hard when you have 40MW of heat being generated that you somehow have to get rid of.
It's all relative. Is it harder than getting 40MW of (stable!) power? Harder than packaging and launching the thing? Sure it's a bit of a problem, perhaps harder than other satellites if the temperature needs to be lower (assuming commodity server hardware) so the radiator system might need to be large. But large isn't the same as difficult.
Neither getting 40MW of power nor removing 40MW of heat are easy.
The ISS makes almost 250KW in full light, so you would need approximately 160 times the solar footprint of the ISS for that datacenter.
The ISS dissipates that heat using pumps to move ammonia in pipes out to a radiator that is a bit over 42m^2. Assuming the same level of efficiency, that's over 6km^2 of heat dissipation that needs empty space to dissipate to.
That's a lot.
Wait, so we need 40MW of electricity and have 40MW of thermal energy. Can't we reuse some of that?
Musk is already in the testing phase for this. His starship rockets should be reusable as soon as 2018!
And in the meantime, he has responsibly redistributed and recycled their mass. Avoiding any concern that Earth's mass could be negatively impacted.
Well sure. If you think fully reusable rockets won’t ever happen, then the datacenter in space thing isn’t viable. But THAT’S where the problem is, not innumerate bullcrap about size of radiators.
(And of course, the mostly reusable Falcon 9 is launching far more mass to orbit than the rest of the world combined, launching about 150 times per year. No one yet has managed to field a similarly highly reusable orbital rocket booster since Falcon 9 was first recovered about 10 years ago in 2015).
How will he overtake all the other reusable rockets at this rate?
Yeah, just attach a Haven module to the data center.
I used to build and operate data center infrastructure. There is very limited reason to do anything more than a warranty replacement on a GPU. With a high quality hardware vendor that properly engineers the physical machine, failure rates can be contained to less than .5% per year. Particularly if the network has redundancy to avoid critical mass failures.
In this case, I see no reason to perform any replacements of any kind. Proper networked serial port and power controls would allow maintenance for firmware/software issues.
I suspect they'd stop at automatic rendezvous & docking. Use some sort of cradle system that holds heat fins, power, etc that boxes of racks would slot into. Once they fail just pop em out and let em burn up. Someone else will figure out the landing bit
I won't say it's a good idea, but it's a fun way to get rid of e-waste (I envision this as a sort of old persons home for parted out supercomputers)
seems to be an industry standard
Seems like a bit of pointless whataboutism when we're still using leaded fuel in planes and helicopters
At least that's relatively local pollution and isn't raining down on me given it's banned in the entire EU.
Don’t you need to look at different failure scenarios or patterns in orbit due to exposure to cosmic rays as well?
It just seems funny, I recall when servers started getting more energy dense it was a revelation to many computer folks that safe operating temps in a datacenter should be quite high.
I’d imagine operating in space has lots of revelations in store. It’s a fascinating idea with big potential impact… but I wouldn’t expect this investment to pay out!
What, why would you fly out and replace it? It'd be much cheaper just to launch more.
What if we just integrate the hardware so it fails softly?
That is, as hardware fails, the system loses capacity.
That seems easier than replacing things on orbit, especially if StarShip becomes the cheapest way to launch to orbit because StarShip launches huge payloads, not a few rack mounted servers.
Depends what you want to use it for. Ping time to the moon and back is about 2.5 seconds best case.
you don't replace it, you just let it fail and over time the datacenter wears out.
I think what you actually do is let it gradually degrade over time and then launch a new one.
Seems prudent to achieve fully robotic datacenters on earth before doing it in space. I know, I’m a real wet blanket.
If mass is going to be as cheap as is needed for this to work anyway, there's no reason you can't just use people like in a normal datacenter.
Space is very bad for the human body; you wouldn't be able to leave humans there waiting for something to happen like you do on Earth. They'd need to be sent from Earth every time.
Also, making something suitable for humans means having lots of empty space where the human can walk around (or float around, rather, since we're talking about space).
Underwater welder, though being replaced by drone operator, is still a trade despite the health risks. Do you think nobody on this whole planet would take a space datacenter job on a 3 month rotation?
I agree that it may be best to avoid needing the space and facilities for a human being in the satellite. Fire and forget. Launch it further into space instead of back to earth for a decommission. People can salvage the materials later.
The problem isn't health "risk"; there are risks, but there are also health effects that will come with certainty. For instance, low gravity depletes your muscles pretty fast. Spend three months in space and you're not going to walk out of the reentry vehicle.
This effect can be somewhat overcome by exercising while in space, but it's not perfect even with the insane amount of medical monitoring the guys up there receive.
“just”
It's theoretically possible for sure, but we've never done that in practice and it's far from trivial.
Good points. Spin “gravity” is also quite challenging to acclimatize to because it’s not uniform like planetary gravity. Lots of nausea and unintuitive gyroscopic effects when moving. It’s definitely not a “just”
Yeah, “just.”
Every child on a merry-go-round experiences it. Every car driving on a curve. And Gemini tested it once as well. It's a basic feature of physics. Now, why NASA hasn't decided to implement it in decades is actually kind of a mystery.
Relevant Scott Manley video: https://youtu.be/nxeMoaxUpWk?si=QOO9KJCGS_Q8JeyR
Relevant tom Scott video: https://youtu.be/bJ_seXo-Enc?si=m_QjHpLaL8d8Cp8b
There is a lot of research, but it’s not as simple as operating under real gravity. Makes many movements harder and can result in getting sick.
If it’s that straightforward, why haven’t you done it?
1 g of acceleration is enormous compared to what a child experiences on a merry-go-round, actually.
> And Gemini tested it once as well.
From Wikipedia:
They were able to generate a small amount of artificial gravity, about 0.00015 g
So yes, you need an effect roughly 6,700 times stronger than this.
And you want that to be relatively uniform over the height of an astronaut, so you need a very big merry-go-round.
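To put numbers on "very big": spin gravity is just a = ω²r, and human-factors studies usually cap comfortable spin rates at a few rpm (the rpm values below are illustrative assumptions, not settled limits):

```python
import math

# Radius needed for 1 g of spin "gravity" at a given spin rate: r = g / omega^2.
G = 9.81  # m/s^2

for rpm in (2, 4, 6):
    omega = 2 * math.pi * rpm / 60   # rad/s
    radius = G / omega**2            # metres
    print(f"{rpm} rpm -> radius ~{radius:,.0f} m")
```

Even at a fairly aggressive 6 rpm you need a ~25 m radius; at gentler rates you're in stadium territory, which is part of why it has never been flown at crewed scale.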
Nuclear fission is also a basic feature of physics, that doesn't mean engineering a nuclear power plant is straightforward.
What makes the economics better in space?
Are there any unique use-cases waiting to be unleashed?
Regular maintenance methods are cheap on earth and infeasible in space.
Keep in mind economics is all about allocation of scarce resources with alternative uses.
No, they don’t work the same. They are much more difficult in every way in space.
I worked in aerospace for a couple of years at the beginning of my career. While my area of expertise was mechanical design, I shared my office with the guy who did the thermal design, and I learned two things:
1. Satellites are mostly run at room temperature. It doesn't have to be that way but it simplifies a lot of things.
2. Every satellite is a delicately balanced system where heat generation and actively radiating surfaces need to be in harmony during the whole mission.
Preventing the vehicle from getting too hot is usually a much bigger problem than preventing it from getting too cold. This might be surprising because laypeople usually associate space with cold. In reality you can always heat if you have energy but cooling is hard if all you have is radiation and you are operating at a fixed and relatively low temperature level.
The bottom line is that running a datacenter in space doesn't make much sense from a thermal standpoint, so there must be other compelling reasons for a decision to do so.
It's like a thermos flask where the spaceship is the contents and space is the insulating vacuum.
They address that issue in the link; they propose a 63 m^2 radiator for heat dissipation.
Sure, it is doable. My point is that at room temperature convection is so much more efficient a heat-transfer mechanism that I wonder why someone would even think about doing without it.
The comment at https://news.ycombinator.com/item?id=44399181 says the ISS radiator is 42 m^2. Radiating so much more with just 63 m^2 seems hype-based.
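A quick Stefan-Boltzmann sanity check supports the skepticism. A minimal sketch, assuming an idealized double-sided radiator with emissivity 0.9 at room temperature, and ignoring absorbed sunlight and Earth IR (both of which make things worse):

```python
# Ideal radiative heat rejection per face: P = eps * sigma * A * T^4.
SIGMA = 5.67e-8  # W/(m^2 K^4), Stefan-Boltzmann constant
EPS = 0.9        # assumed coating emissivity
T_K = 293.0      # "room temperature" operation, as described above

flux = EPS * SIGMA * T_K**4  # ~376 W/m^2 per radiating face
for area_m2 in (42, 63, 6720):
    print(f"{area_m2:>5} m^2 -> ~{2 * area_m2 * flux / 1e3:,.0f} kW rejected")
```

Under these generous assumptions, 63 m^2 rejects on the order of 47 kW, and even the ~6,700 m^2 scaled-ISS figure from upthread only sheds ~5 MW; rejecting 40 MW near 293 K wants something like 5×10^4 m^2 of radiator, unless it runs much hotter (rejection scales as T^4).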
The caveat is that this is also dependent on where in the satellite's lifecycle you are. For example, right after launch you might just have your survival heaters on, which will generally keep you within an industrial range (e.g., above -40°C), and you might not reach higher temps until you hit nominal operations. But a lot of the hardware temperature specs are often closer to standard "industrial" specs than to special mil or NASA specs.
What is room temperature in this context? The temp of the space it's sitting in or a typical room temp on Earth?
Room temperature on earth. In physics room temperature is used as a technical term and actually pretty universally defined as 20°C (293.15 K).
Traditionally in European papers it used to be 18°C, so if Einstein and Schrödinger talk about room temperature it is that.
I've heard in chemistry and stamp collecting they use 25°C but that is heresy.
Maybe you did mean heresy, which would be funny (but a perfectly valid opinion to have)...
But I suspect that's a typo, and you meant 'hearsay'? :D
Lay people associate space with cold because nearly every scifi movie has people freezing over in seconds when exposed to the vacuum of space (insert Picard face-palm gif).
Even The Expanse, even them! Although they are otherwise so realistic that I have to say I started doubting myself a bit. I wonder what really would happen, and how fast...
People even complained that Leia did not freeze over (instead of complaining about her sudden use of the Force where previously she had not shown any such talents).
Well, empty space has a temperature of roughly -270°C... so that's pretty cold.
But I think what people/movies don't understand is that there's almost no conductive thermal transfer going on, because there's not much matter to do it. It's all radiation, which is why heat is a much bigger problem, because you can only radiate heat away, you can't conduct it. And whatever you use to radiate heat away can also potentially receive radiation from things like the Sun, making your craft even hotter.
> Well, empty space has a temperature of roughly -270°C... so that's pretty cold.
What is this “empty space” you speak of? Genuinely empty space is empty and does not have a clearly defined temperature. If you are in space in our universe, very far from everything else, then the temperature of the cosmic microwave background is what matters, and that’s a few K. If you’re in our solar system in an orbit near Earth, the radiation field is wildly far from any sort of thermal equilibrium, and the steady state temperature of a passive black body will depend strongly on whether it’s in the Earth’s shadow, and it’s a lot hotter than a few K when exposed to sunlight.
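To put numbers on that, a minimal sketch of the steady-state temperature of a passive black body at 1 AU (assumptions: solar constant ~1361 W/m^2, perfect absorber and emitter, Earth's albedo and IR ignored; the ~3 K figure applies only in deep space, far from everything, as the comment says):

```python
# Equilibrium temperature of a passive black body near Earth's orbit (1 AU).
SIGMA = 5.67e-8      # W/(m^2 K^4)
SOLAR_FLUX = 1361.0  # W/m^2 at 1 AU

# Flat plate facing the Sun, radiating from both faces:
t_plate = (SOLAR_FLUX / (2 * SIGMA)) ** 0.25
# Sphere (absorbs over its cross-section, radiates over its full surface):
t_sphere = (SOLAR_FLUX / (4 * SIGMA)) ** 0.25

print(f"flat plate: ~{t_plate:.0f} K, sphere: ~{t_sphere:.0f} K, "
      f"deep-space shadow: ~3 K (CMB)")
```

So a sunlit plate sits near 330 K (hotter than room temperature) while the same object far from the Sun settles near 3 K, which is exactly the "wildly far from equilibrium" point.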
Wouldn't a body essentially freeze-dry when exposed to vacuum? I.e., the temperature of space is still irrelevant and the cooling comes from vaporization.
Any exposed fluids (mostly saliva) technically boils but you can think of it as evaporation to avoid layperson associations with heat -- it's all about low pressure, not about heat in layperson terms.
Whether you freeze or not depends on whether you're in the sun or not. Spacesuits are white to reflect as much light as feasible, mostly to keep the astronauts from cooking. For example, the surface of the Moon can heat to 120°C / 250°F / 400 K.
Over time I'm sure all the liquids will manage to escape. Here's what happens to blood not contained by blood vessels and skin: https://www.youtube.com/watch?v=jU3MOLqA3WA
I'm reading through the Expanse books and a passage that I read after seeing your comment jumped out at me. It's a quote from Havelock:
"As a boy living planetside, he had always thought of space as cold. And while that was technically true, mostly it was a vacuum. And so a ship, mostly, was a thermos. The heat from their bodies and systems would bleed into the void off over years and decades of it had the chance."
So at least the books got it right!
It’s not that dumb- if a human gets exposed to space the water in their exposed tissues will boil off, leading to evaporative cooling. In a vacuum, evaporative cooling can get you ~arbitrarily cold, as long as you’re giving up enough fluids. I don’t know whether you freeze over or dry out first, but I’m sure someone at NASA has done the math.
2001 did it pretty close to right, but watch it with normies and they'll laugh at it because it doesn't meet their expectations.
Why do they want to put a data center in space in the first place?
Free cooling?
Doesn't make much sense to me. As the article points out, the radiators need to be massive.
Access to solar energy?
Solar is more efficient in space, I'll give them that, but does that really outweigh the whole hassle to put the panels in space in the first place?
Physical isolation and security?
Against manipulation maybe, but not against denial of service. A willfully damaged satellite is something I expect to see in the news in the foreseeable future.
Low latency comms?
Latency is limited by distance and the speed of light. Everyone with a satellite internet connection knows that low latency is not a particular strength of it (a rough light-time comparison is sketched just after this comment).
Marketing and PR?
That, probably.
EDIT:
Thought of another one:
Environmental impact?
No land use, no thermal stress for rivers on one hand but the huge overhead of a space launch on the other.
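On the latency point above, a rough light-time comparison (the altitudes and the 1,000 km fiber route are illustrative choices; the fiber slowdown assumes a refractive index of ~1.47):

```python
# Back-of-envelope network latency: vacuum light time vs optical fiber.
C_KM_S = 299_792    # speed of light in vacuum, km/s
FIBER_INDEX = 1.47  # typical silica fiber -> ~47% slower than vacuum

def rtt_ms(distance_km, medium_index=1.0):
    return 2 * distance_km * medium_index / C_KM_S * 1e3

print(f"LEO sat overhead (550 km):  ~{rtt_ms(550):.1f} ms RTT")
print(f"GEO sat (35,786 km):        ~{rtt_ms(35_786):.0f} ms RTT")
print(f"1,000 km terrestrial fiber: ~{rtt_ms(1_000, FIBER_INDEX):.1f} ms RTT")
```

A satellite directly overhead in LEO is fine in principle; the "satellite internet is slow" reputation comes mostly from GEO links and non-overhead routing.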
I've talked to the founder of Starcloud about this. There is just going to be a lot of data-generative stuff in space in the future, further and further out into space. He thinks now is the right time to learn how to compute up there, because people will want to process, and maybe orchestrate processing between many devices, in space. He's fully aware of all of the objections in this HN comments section; he just doesn't believe they are insurmountable, and he believes interoperable compute hubs in space will be required over the next 20-30 years. He's in his mid-20s, so it seems like a reasonable mission to be on to me.
Seems far more likely that the "data generative stuff" will get smaller and cheaper to run (like cell phones with on-device models) much faster than "run a giant supercomputer in orbit" will become easy.
My headlights aren't good enough, so I'm unsure, but generally that maps. To me the interoperability part is what is interesting: your data and my data being consumed in real time by some understanding agent doing automated research? I could imagine putting something like a Stoffel MPC layer in there; then nation states can more easily work together? I presume space data/research will be highly competitive; even friendly nations may want to combine data without knowing what's underneath. We're so far out here that it's kinda silly, but I don't think we're out to lunch? Have a great weekend Chris! :)
There are certainly nation states that are looking for ways to 1) prevent their satellites colliding with one another (https://eprint.iacr.org/2013/850.pdf) and 2) do forms of computation that might be risky to do on Earth for national security reasons.
> forms of computation that might be risky to do on earth for national security reasons
Such as...?
Earth is the closest spot for most of near space, so it makes most sense for satellites to send data back to Earth. They would have to find a use that needs lots of compute where latency really matters.
For farther out, compute on ships, stations, or bases makes sense, but that is different from free-floating satellites. Those already have power, cooling, and maintenance.
It is like saying there should be compute in the air for all the airplanes flying around.
> there is just going to be a lot of data generative stuff in space in the future
Why?
Because nearly all analysts have it growing at somewhere between a 5% and 7% CAGR.
My initial thought was: ambiguous regulatory environment.
Not being physically located in the US, the EU, or any other sovereign territory, they could plausibly claim exemption from pretty much any national regulations.
This might be true, but unrealistic.
If you run amiss of US (or EU) regulators, they will never say, "well, it's in space, out of our jurisdiction!".
They will make your life hell on Earth.
I don't see it.
The US government does questionable things to people in places like Guantanamo Bay because the constitution gives those people rights if they set foot on US soil. Data doesn't have rights, and governments have the capability to waive their own laws for things like national security.
Corporations operating in space are bound to the laws of the country the spacecraft belongs to, so there's no difference between a data harbor in Whogivesastan vs. a data harbor on a spacecraft operated by Whogivesastan.
Space is terrible for that. There's only a handful of countries with launch vehicles and/or launch sites. You obviously need to be in their good graces for the launch to be approved.
If you want a permissive regulatory environment, just spend the money buying a Mercedes for some politician in a corrupt country; you'll get a lot further...
A bit like international waters. I wonder when we'll see the first space pirates.
> A bit like international waters.
Which is a good analogy; international waters are far from lawless.
You're still subject to the law of your flag state, just as if you were on their territory. In addition to that, you're subject to everyone's jurisdiction if you commit certain crimes - including piracy. https://en.wikipedia.org/wiki/Universal_jurisdiction
Quick, we need a new Cryptonomicon, in space!
Ability to raise money from gullible investors.
One of the answers in OP is
> A lot of waste heat is generated running TDCs, which contributes to climate change—so migrating to space would alleviate the toll on Earth’s thermal budget. This seems like a compelling environmental argument. TDCs already consume about 1-1.5% of global electricity and it’s safe to assume that this will only grow in the pursuit of AGI.
The comparison here is between solar powered TDCs in Space vs TDCs on Earth.
- A TDC in space contributes to global warming due to mining+manufacturing emissions and spaceflight emissions.
- A comparable TDC on Earth would be solar+battery run. You will likely need a larger solar panel array than in space. Note a solar panel in operation does not really contribute to global warming. So the question is whether the additional Earth solar panel+battery manufacturing emissions are greater than launching the smaller array + TDC into space.
I would guess launching into space has much higher emissions.
Low Earth orbits spend up to roughly 35-40% of each orbit in Earth's shadow (depending on altitude and beta angle; a dawn-dusk sun-synchronous orbit can stay in near-continuous sunlight), though they suffer no seasonal variability. Low Earth orbit is also very hot, and regular solar panels become less efficient the hotter they get.
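The shadow fraction follows from simple geometry. A minimal sketch for a circular orbit, worst case with the Sun in the orbital plane (cylindrical-shadow approximation):

```python
import math

# Max fraction of a circular orbit spent in Earth's shadow, Sun in the
# orbital plane: f = asin(R / (R + h)) / pi.
R_EARTH_KM = 6371.0

for h_km in (400, 550, 1000):
    f = math.asin(R_EARTH_KM / (R_EARTH_KM + h_km)) / math.pi
    print(f"{h_km:>4} km: up to {f:.0%} of each orbit in shadow")
```

At ISS-like altitudes this comes out near 39%, falling slowly with altitude; it only approaches 50% as altitude approaches zero.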
The only sensible way to count pollution from solar+battery power manufacturing & disposal is do it on a per kWh basis.
The size of solar panels and radiators needed for an actual data center would be crazy in LEO. LEO still touches the atmosphere. ISS needs to be pushed higher regularly because of atmospheric drag.
Apparently the only way to make renewable energy cool is to put it in space
It’s actually very hot in low Earth orbit.
Speed of light is actually quite an advantage, in theory at least. Light in optical fiber is quite a bit slower than in vacuum (signals take about 50% longer, since silica fiber has a refractive index around 1.47).
On the environmental front: at end of life, the entire data center is incinerated in the Earth's upper atmosphere.
> Why do they want to put a data center in space in the first place?
At https://news.ycombinator.com/item?id=44397026 I speculate that in particular militaries might be interested.
Not every datacenter use case is latency sensitive. Backup storage or GPU compute, for example.
But then why bother with the added expense of launching into space? It's definitely not for environmental reasons.
I think you’re a little too dismissive of the 24/7 always available solar power, and the free cooling.
There’s no free cooling in space.
In space there’s no ambient environment to speak of, so you’re limited to radiative cooling, which is massively inferior to refrigeration.
There's also no 24/7 solar in low Earth orbit, which is where you want to be for latency and serviceability.
That's actually something I never considered. In a true vacuum, since there are no particles, temperature is undefined.