Itay Gissis is vice president of research and development at nT-Tao.
When the next wave of artificial intelligence models fires up its data-center cluster, it may consume as much electricity as an entire city. That’s not science fiction — it’s the near future.
According to the International Energy Agency, data centers already use about 415 TWh of electricity annually — roughly 1.5% of global supply — and are on track to nearly double that by 2030, approaching Japan’s total national consumption.
For more than a century, the global energy system has revolved around a few massive baseload plants: coal, gas, nuclear fission and large hydro. They feed continental grids, optimized for economies of scale. But the digital economy is rewriting those equations.
As data centers cluster around major hubs, they are stressing local grids, prompting policy shifts like the U.S. Department of Energy’s recent directive to accelerate grid connections, and sparking questions about who gets power, when, and at what price.
These are no longer just engineering problems. They’re political, economic and even military issues. A single outage can silence AI services, industrial zones or defense systems.
This energy inflection point is one reason compact fusion power is suddenly looking strategic. Not as a far-off dream, but as a possible near-term element of distributed, resilient, high-density power systems. Instead of one colossal plant hundreds of miles away, imagine dozens or hundreds of 20–100 MW fusion modules sitting beside data hubs, industrial clusters, ports or remote bases. Distributed baseload, built for resilience.
The question now is not whether fusion will matter, but how we build it small, fast and local. To understand why that matters and why it is so difficult, we must first revisit the physics foundations.
Why fusion and why now
Fusion is the process that powers the stars: hydrogen nuclei merging into helium, a small loss of mass releasing enormous energy through E = mc².
To achieve it on Earth, we must heat hydrogen gas (the isotopes deuterium and tritium) into plasma, an electrically charged gas, to roughly a hundred million degrees Celsius, and hold it long enough for enough collisions to fuse.
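To make the E = mc² bookkeeping concrete, the energy released by a single deuterium–tritium reaction follows from the tiny mass lost when the nuclei fuse. A minimal sketch using standard atomic masses:

```python
# Energy released by one D-T fusion reaction: D + T -> He-4 + n.
# Masses in unified atomic mass units (standard tabulated values).
M_D, M_T = 2.014102, 3.016049      # deuterium, tritium
M_HE4, M_N = 4.002602, 1.008665    # helium-4, neutron
U_TO_MEV = 931.494                 # energy equivalent of 1 u, via E = mc^2

mass_defect = (M_D + M_T) - (M_HE4 + M_N)   # mass lost in the reaction
energy_mev = mass_defect * U_TO_MEV

print(f"mass defect: {mass_defect:.6f} u")
print(f"energy released: {energy_mev:.1f} MeV")  # ~17.6 MeV per reaction
```

Less than 0.4% of the fuel mass disappears, yet that sliver yields about 17.6 MeV per reaction, millions of times the energy of a chemical bond.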
The difficulty, of course, is that hot plasmas want to expand and cool. Fusion performance depends on the product of plasma density and confinement time: how many particles you can pack together and how long you can keep them hot.
Historically, the dominant approach to fusion has been magnetic confinement, which relies on achieving longer confinement time in a lower-density plasma.
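This density–time trade-off is usually expressed through the Lawson criterion: for deuterium–tritium fuel near its optimal temperature, the product n·τ must exceed roughly 1.5 × 10²⁰ s/m³. The two operating points below are hypothetical illustrations of the trade-off, not data from any real machine:

```python
# Lawson criterion for D-T fusion: n * tau must exceed ~1.5e20 s/m^3
# (at a plasma temperature near the optimum).
N_TAU_REQUIRED = 1.5e20  # s/m^3

# Two hypothetical operating points illustrating the trade-off:
regimes = {
    # large magnetic confinement: low density, long confinement time
    "magnetic (tokamak-like)": (1e20, 3.0),      # n [m^-3], tau [s]
    # compact pulsed system: high density, short confinement time
    "compact pulsed":          (1e24, 3e-4),
}

for name, (n, tau) in regimes.items():
    verdict = "meets" if n * tau >= N_TAU_REQUIRED else "below"
    print(f"{name}: n*tau = {n * tau:.1e} s/m^3 ({verdict} Lawson)")
```

Both routes can satisfy the same criterion; they simply trade one factor against the other, which is the fork in the road between the two fusion philosophies.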
The most famous magnetic confinement system is the tokamak. Its cousin, the stellarator, is rapidly gaining momentum as modern designs demonstrate excellent confinement and stability.
Both concepts rely on the same basic principle: powerful magnetic fields shaping and enclosing a hot plasma. Their performance improves strongly with machine size and magnetic-field strength — a scaling rooted in fundamental plasma physics and confirmed by thousands of experiments across many devices worldwide. This extensive evidence gave physicists confidence that magnetic confinement could reach power-plant conditions by building larger machines with stronger magnets. The challenge then shifted from physics to engineering.
But reality has been less forgiving. The ITER project in France illustrates the scale of the engineering burden: enormous superconducting magnets, a massive vacuum vessel, complex blankets and shielding, and megawatt-class heating systems. The result is a machine that takes decades to construct and costs tens of billions of dollars.
Magnetic confinement is still the likeliest near-term path to demonstrating sustained net energy. Had I been choosing purely for physics conservatism, I might have selected it too. But the next question — commercialization and deployment rate — looks very different when capital expenses reach this scale.
This leads to a fundamental question: How do we balance the triangular trade-off between robust physics, achievable engineering and the economic realities of commercialization?
Making the reactor compact relaxes many of the hardest engineering constraints, but it also changes the physics regime. As the core becomes smaller, confinement time falls, so plasma density and heating rate must rise to compensate. The result is a reactor that does not operate in steady state but instead produces short, intense pulses of fusion power.
The analogy is closer to an internal combustion engine than to a steady furnace. Fuel gas is injected, rapidly compressed and heated to fusion conditions; the resulting burst of neutrons deposits heat in the surrounding blanket, and the plasma is exhausted. This cycle repeats roughly once per second.
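The engine analogy also makes the power accounting simple: average output is just energy per pulse times repetition rate. The figures below are illustrative assumptions, chosen to land in the 20–100 MW module range discussed above, not a specific machine design:

```python
# Average power of a pulsed fusion reactor:
# energy per pulse x repetition rate. Illustrative numbers only.
ENERGY_PER_PULSE_MJ = 50.0   # fusion energy released per pulse [MJ]
REP_RATE_HZ = 1.0            # roughly one pulse per second

avg_power_mw = ENERGY_PER_PULSE_MJ * REP_RATE_HZ  # MJ/s = MW
print(f"average fusion power: {avg_power_mw:.0f} MW")
```

As with a car engine, smoothing the pulses into steady output is a solved problem: the blanket and thermal systems average the bursts into continuous heat.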
In practice, this approach requires:
- Delivering extreme bursts of energy via pulsed heating or magnetic compression rather than continuous input, using pulsed-power systems that generate high-voltage, high-current bursts.
- Stabilizing denser plasmas under much sharper gradients, ensuring that instabilities do not unravel the plasma.
- Engineering reactor walls and materials positioned closer to the plasma and exposed to higher instantaneous heat and neutron loads.
The hurdles are significant, but the rewards are equally large. A smaller core means smaller critical components, including magnets, vessels and blankets, which directly reduces capital expenses. A compact fusion system requires less structural mass, less specialized fabrication and fewer bespoke components, shrinking both the upfront cost and the economic barrier to deployment and making fusion more competitive with existing energy technologies.
A second advantage is that scaling power through pulsed power relies on electrical energy storage and delivery rather than on enlarging heavy infrastructure. High-voltage capacitors, fast switches and, soon, solid-state modulators can deliver hundreds of megawatts per pulse with high efficiency. These components belong to industrial supply chains that improve at the pace of modern semiconductor and power-electronics industries.
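The leverage of pulsed power comes from compressing modest stored energy into a very short delivery window. The capacitor-bank parameters below are illustrative assumptions, not a specific design:

```python
# Peak power from a capacitor bank: stored energy E = 1/2 * C * V^2,
# delivered over a short discharge window. Parameters are illustrative.
CAPACITANCE_F = 100e-6    # 100 microfarad bank
VOLTAGE_V = 20e3          # charged to 20 kV
DISCHARGE_S = 100e-6      # ~100 microsecond delivery window

stored_j = 0.5 * CAPACITANCE_F * VOLTAGE_V**2   # energy held by the bank
peak_power_w = stored_j / DISCHARGE_S           # average power over the pulse

print(f"stored energy: {stored_j/1e3:.0f} kJ")
print(f"delivered power: {peak_power_w/1e6:.0f} MW during the pulse")
```

A bank holding only tens of kilojoules, roughly the chemical energy in a spoonful of sugar, delivers hundreds of megawatts when discharged in a tenth of a millisecond.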
The third benefit is speed, which matters now that the private sector is pushing fusion and there is no time to spare: compact systems can be built, tested and iterated far more quickly, enabling rapid build-test-learn cycles. This agility is impossible with multi-billion-dollar, decade-scale fusion cores, and it accelerates both physics validation and engineering refinement.
Thus, although the scientific road to breakeven in compact fusion may be somewhat longer, the potential payoff is transformative: a fusion platform that scales through replication rather than size, built from components supported by mature industries, enabling rapid cost reduction and global deployment.
From challenge to momentum
The landscape of fusion is shifting. What was once the domain of billion-dollar megaprojects is now being populated by agile, commercially focused efforts pursuing compact, rapidly iterated designs.
Several forces are driving this transition. Advances in high-field magnets, precision materials and power electronics are enabling reactors that are smaller, denser and more controllable than their predecessors. At the same time, global demand for clean, reliable, local power is soaring. AI infrastructure, industrial clusters and defense installations all need energy sources that bypass grid bottlenecks and geopolitical constraints.
Compact fusion does not replace large projects like ITER, DEMO or commercial magnetic-confinement systems — it complements them. And when the first compact fusion unit achieves sustained net power, it will do more than flip a switch; it will bridge the worlds of big science and fast tech, turning fusion from a monumental experiment into an everyday utility.
The goal is not to build a bigger star, but to bring the power of the stars within reach.