Arushi Sharma Frank is senior adviser on power and utilities for Emerald AI, which is partnering with Nvidia on its first power-flexible artificial intelligence data center.
About seven months ago, I read an article written by David Mytton, a powerhouse veteran of the enterprise data center and compute industry.
He wrote, "Data center demand response is one of those topics that academics love.... This is a classic example of academic research being completely disconnected from commercial realities."
I found the article fitting and intellectually honest in describing the status quo at the time, with sharp critiques of how large pre-AI electronic loads had behaved. Then I decided to invest my energy in changing that status quo. If we get this right, we stop treating data centers as opaque loads and start treating them as grid assets that happen to compute. Utilities gain a tool to manage peaks without blindly overbuilding. Communities see affordability protected rather than eroded. And builders, working with partners that build the hardware, software and infrastructure, finally have a repeatable road map that lets AI growth strengthen the power system instead of straining it.
Part of pushing that change is advising strong founders who can enable a rapid shift in how AI workloads behave. We need to scale AI to solve some of the hardest human problems, and we need the operating parameters of those workloads to scale to match the needs and experience of the power grid.
Now, I can talk about that shift.
Emerald AI is partnering with the nonprofit Electric Power Research Institute, regional grid operator PJM Interconnection, data center real estate company Digital Realty and Nvidia on Nvidia’s first power-flexible 96-MW Aurora AI factory, a data center set to go live in mid-2026 in Manassas, Virginia.
The facility will be the first to implement a new industrywide reference design being created through the Aurora site’s partnerships, led by Nvidia.
Emerald AI is a startup I advise and have helped integrate into the power utility and ISO/RTO ecosystem. Just a few months back, I took the Emerald team to visit PJM, introduce the company, and collect information on the parameters Emerald needs to build its capabilities and meet its objectives in the toughest data center/consumer affordability market in the country.
Since that meeting, Emerald AI has steadily expanded its stack to be the brain in the middle, balancing grid needs, site limits and workload priorities against the customer’s mission.
My role is to help the company push the envelope — to take what batteries, performance-banded and telemetered virtual power plants, tightly integrated power-train solutions and synchronous machines can already do and make the AI electronic load deliver those capabilities.
Software solutions can protect service-level agreements for AI compute workloads while the data center still responds to grid needs by providing the capabilities below (sketched in simplified form after the list):
- Peak load demand response. It can hold targeted reductions (e.g., 20%-30%) for multi-hour windows on utility peaks, with no post-event rebound (snap-back), so the site behaves like a dispatchable resource with a customizable duration curve. (Rather than fixing how long power can be limited, workloads can be shifted in time and across the site to stretch or compress the duration of a load shift, cap or drop response signal.)
- Long-duration curtailments. It can sustain reductions for up to 10 hours.
- Multi-event days. It can respond to two or more discrete events in one day with different depths and durations.
- Ramp shaping. It can execute smooth, linear ramps down and up over 5, 15, or 30 minutes to match emergency vs. forecasted events.
- Variable notice. It can honor both rapid (about 10-minute) and planned (up to 2-hour) notices.
- Carbon-aware dispatch. It can track a 5-minute marginal CO2 signal and modulate site power accordingly.
- Tariff/market signals. It can map locational marginal price bands into power targets, form day-ahead dispatchable resource bid curves and deliver cleared reductions.
- Custom grid profiles. It can follow a utility-provided load shape hour by hour and replay historical grid events (e.g., the Polar Vortex, 2-hour capacity calls) as proof points.
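The peak-reduction and ramp-shaping behavior above can be pictured as a simple setpoint trajectory. The sketch below is a minimal illustration only, assuming a hypothetical GridEvent dispatch signal, a 96-MW baseline and a made-up SLA power floor; it is not Emerald AI's actual software, just the shape of the logic: ramp down, hold, then ramp back to baseline with no snap-back.

```python
# Hypothetical sketch (not Emerald AI's software): translating a utility
# dispatch signal into a minute-by-minute site power trajectory with a
# linear ramp down, a multi-hour hold and a controlled ramp back up that
# never rebounds above baseline. All names and numbers are illustrative.

from dataclasses import dataclass


@dataclass
class GridEvent:
    """A simplified demand-response dispatch from the utility or RTO."""
    notice_minutes: int        # e.g., ~10 for emergency, up to 120 for planned
    ramp_minutes: int          # 5-, 15- or 30-minute ramp shaping
    hold_minutes: int          # event duration, up to multi-hour calls
    reduction_fraction: float  # e.g., 0.20-0.30 of baseline load


def power_trajectory(baseline_mw: float, sla_floor_mw: float, event: GridEvent):
    """Return per-minute site power targets (MW) for one event.

    The target never drops below the floor needed to keep SLA-critical
    AI jobs whole, and it never rises above baseline after the event.
    """
    target_mw = max(baseline_mw * (1.0 - event.reduction_fraction), sla_floor_mw)
    targets = []

    # Linear ramp down over the requested ramp window.
    for minute in range(1, event.ramp_minutes + 1):
        frac = minute / event.ramp_minutes
        targets.append(baseline_mw + frac * (target_mw - baseline_mw))

    # Hold the reduced level for the event duration.
    targets.extend([target_mw] * event.hold_minutes)

    # Ramp back up, capped at baseline so there is no snap-back.
    for minute in range(1, event.ramp_minutes + 1):
        frac = minute / event.ramp_minutes
        targets.append(min(target_mw + frac * (baseline_mw - target_mw), baseline_mw))

    return targets


if __name__ == "__main__":
    # Example: 96-MW site, 25% reduction, 15-minute ramps, 4-hour hold.
    event = GridEvent(notice_minutes=120, ramp_minutes=15,
                      hold_minutes=240, reduction_fraction=0.25)
    trajectory = power_trajectory(baseline_mw=96.0, sla_floor_mw=60.0, event=event)
    print(f"{len(trajectory)} one-minute setpoints; "
          f"min {min(trajectory):.1f} MW, max {max(trajectory):.1f} MW")
```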
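The tariff/market and carbon-aware items lend themselves to the same kind of mapping. Here is a hedged sketch of turning locational marginal price bands into site power targets; the band boundaries, megawatt levels and floor are invented for illustration, and the same lookup shape could track a 5-minute marginal CO2 signal instead of price.

```python
# Hypothetical sketch of mapping locational marginal price (LMP) bands into
# site power targets, in the spirit of the tariff/market-signal item above.
# The bands and power levels are made up, not a published tariff or an
# actual Emerald AI bid curve.

BASELINE_MW = 96.0

# (upper LMP bound in $/MWh, site power target in MW); bands checked in order.
LMP_BANDS = [
    (50.0, BASELINE_MW),   # cheap power: run flat out
    (150.0, 85.0),         # moderate prices: trim deferrable training jobs
    (500.0, 72.0),         # scarcity pricing: hold a deep, multi-hour reduction
]
FLOOR_MW = 60.0            # below this, SLA-critical inference would be at risk


def power_target_for_lmp(lmp_dollars_per_mwh: float) -> float:
    """Pick the site power target for the current 5-minute LMP."""
    for upper_bound, target_mw in LMP_BANDS:
        if lmp_dollars_per_mwh <= upper_bound:
            return max(target_mw, FLOOR_MW)
    return FLOOR_MW  # extreme prices: drop to the SLA floor and no lower


if __name__ == "__main__":
    for lmp in (32.0, 120.0, 480.0, 2100.0):
        print(f"LMP ${lmp:>7.2f}/MWh -> target {power_target_for_lmp(lmp):.1f} MW")
```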
The beauty of software-enabled control stacks, scheduling, firming and power-capping is that they create a new governance profile for how utilities and grid operators can expect these sites to behave.
As an industry, we are still struggling to onboard large electronic computing loads and to operate these assets in grid-synchronized modes. All the parts need to work together from the perspective of the load, but the other side of the high-voltage system, the side managed by the utility and its grid controls, needs to see information that is informative, measurable and verifiable.
These announcements showcase the strength and speed of partnership as the essential path to the future grid and create the momentum for grid flexibility to truly go mainstream.