Sean Burri is an infrastructure engineer for Dominion Energy. This article reflects his personal views.
The narrative by now is all too familiar: Deploy artificial intelligence, stabilize the grid and eliminate outages. Executives repeat it. Journalists amplify it. Vendors sell it. And that simplification is terribly wrong.
AI has rapidly evolved from a laboratory innovation to a vital component of decision-making in the modern energy industry. Electric utilities worldwide are turning to AI-driven forecasting, outage response and optimization systems to improve reliability and sustainability.
But the notion that simply deploying AI automatically guarantees better reliability is a misunderstanding that is costing utilities real money and real credibility. Reliable performance depends on how utilities manage data governance, workforce training and infrastructure modernization, not on which algorithm they purchase.
I have spent years in IT operations at Dominion Energy watching this play out in real time. The projects that succeeded were never the ones with the most sophisticated models. They were the ones embedded within enterprise strategy, supported by continuous feedback loops, and evaluated against operational metrics rather than abstract innovation goals — the ones that brought in frontline employees to understand the process on an intimate level. Projects that struggled often treated AI as a plug-and-play solution rather than a transformation requiring organizational commitment.
The evidence is real, but so are the conditions.
The empirical case for AI in energy is genuine. Utilities adopting AI-based forecasting tools have achieved notable accuracy gains and operational resilience. One Turkish utility replaced its conventional time-series forecasting model with a generative adversarial network approach, yielding sharper demand predictions, reduced overloads and fewer cascading failures.
But those improvements were not due solely to mathematics. They required managers who integrated model outputs into planning meetings and adjusted maintenance schedules accordingly.
On the outage management side, machine learning models using real-world utility data have outperformed baseline statistical methods by meaningful margins in predicting outage durations. More accurate estimates allow dispatch centers to prioritize repairs efficiently and communicate credible restoration times to customers.
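The gain over a baseline can be sketched in a few lines. Below is a minimal, illustrative Python comparison: a naive baseline that predicts the global mean outage duration for every event versus a simple model that conditions on one feature (outage cause), standing in for a learned model with many features. The data here is invented for illustration, not real utility data.

```python
# Illustrative comparison: global-mean baseline vs. a feature-aware
# predictor for outage durations. Data is made up for the sketch.

outages = [
    # (cause, duration_minutes)
    ("vegetation", 90), ("vegetation", 110), ("vegetation", 100),
    ("equipment", 240), ("equipment", 260),
    ("weather", 45), ("weather", 55),
]

def mae(preds, actuals):
    """Mean absolute error between predictions and observed durations."""
    return sum(abs(p - a) for p, a in zip(preds, actuals)) / len(actuals)

actuals = [d for _, d in outages]

# Baseline: predict the global mean duration for every outage.
global_mean = sum(actuals) / len(actuals)
baseline_preds = [global_mean] * len(outages)

# "Model": predict the per-cause mean, a stand-in for a trained model
# that conditions on outage features.
by_cause = {}
for cause, d in outages:
    by_cause.setdefault(cause, []).append(d)
cause_mean = {c: sum(ds) / len(ds) for c, ds in by_cause.items()}
model_preds = [cause_mean[c] for c, _ in outages]

print(round(mae(baseline_preds, actuals), 1))  # baseline error, minutes
print(round(mae(model_preds, actuals), 1))     # feature-aware error, minutes
```

Even this toy conditioning cuts the error substantially; the operational point is that tighter duration estimates translate directly into better crew dispatch and more credible customer communication.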
In an era when reliability indices directly shape regulatory ratings, AI's predictive capability becomes a measurable operational asset, but only when people know how to use it.
Countries employing AI to coordinate renewable energy have achieved greater stability during peak fluctuations and extreme weather events. By learning from meteorological and consumption patterns, AI enables utilities to anticipate strain and reroute power before failure occurs.
Yet technical progress continues to outpace regulatory readiness. AI alone cannot resolve bottlenecks if policies, cybersecurity and human expertise lag behind. That lag is where most utilities currently find themselves.
Three obstacles between pilot and scale
Despite impressive results in controlled deployments, scaling AI remains difficult. Three persistent obstacles define the landscape: fragmented data governance, legacy hardware and cultural resistance to automation.
Many utilities still operate on supervisory control systems designed decades ago. These older platforms cannot handle the high-frequency data streams required for modern analytics. Upgrading them demands significant capital and cross-departmental coordination, neither of which moves quickly in a regulated industry built on long planning cycles.
The second obstacle is organizational, not technical. AI projects stall when utilities treat them as technology add-ons rather than as full organizational transformations. Executives may fund algorithms without restructuring workflows or retraining staff, producing isolated islands of automation that never connect to actual operations. The misunderstanding here lies in assuming that reliability stems from software alone. In truth, reliability emerges from integration.
The third obstacle is human. Employees accustomed to manual control sometimes distrust machine recommendations. Utilities that successfully implemented AI invested early in communication and change management, explaining what AI does, how it learns and when human intervention remains critical. And human intervention absolutely remains critical. This training reframes AI as a tool for empowerment rather than replacement, reducing internal resistance and increasing accountability. Without it, even technically sound deployments fail in the field.
Human oversight is not a limitation. It's the point.
While automation increases speed and efficiency, human oversight ensures accountability and ethical decision-making. Reliability depends on transparent human-AI collaboration. Supervisory engineers must interpret predictions, adjust thresholds and verify results. When operators understand model logic, they can intervene effectively during anomalies. My own experience in IT operations confirms this: Predictive analytics are only as reliable as the people maintaining them.
There is also a strong case for explainable AI frameworks that make algorithms interpretable. If engineers cannot see why a model forecasts a 45-minute restoration instead of 60, they may ignore the output, wasting AI's potential. Embedding transparency builds confidence and prevents blind reliance on opaque black boxes.
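One way to make that concrete is a transparent model whose estimate decomposes into named contributions. The sketch below uses an assumed linear restoration-time model with invented coefficients; the point is not the model itself but that a dispatcher can see exactly which factors produced the number.

```python
# Illustrative, transparent restoration-time estimate whose prediction
# decomposes into per-feature contributions. Coefficients are invented
# assumptions, not a trained utility model.

base_minutes = 30.0
coefficients = {           # minutes added per unit of each feature
    "crew_distance_km": 1.5,
    "customers_affected_k": 0.8,
    "severe_weather": 12.0,
}

def explain_estimate(features):
    """Return (total_minutes, per-feature breakdown) for an outage."""
    contributions = {"base": base_minutes}
    for name, value in features.items():
        contributions[name] = coefficients[name] * value
    return sum(contributions.values()), contributions

estimate, why = explain_estimate(
    {"crew_distance_km": 6, "customers_affected_k": 2.5, "severe_weather": 1}
)
print(estimate)  # total estimate in minutes
print(why)       # each term attributable to a named factor
```

An engineer who can see that severe weather contributed 12 of those minutes can sanity-check the estimate against field conditions instead of taking it, or ignoring it, on faith.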
Cybersecurity adds another layer of oversight that the industry has been slow to take seriously. Adversarial attacks could manipulate AI inputs or corrupt training data, causing false predictions and system instability. Reliability now includes data integrity audits and model-drift detection. Ethical supervision requires vigilance not only over human bias but also over algorithmic vulnerabilities, a lesson that will become harder to ignore as AI systems take on greater operational authority.
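Model-drift detection need not be exotic to be useful. As a minimal sketch, the check below flags inputs whose recent mean has shifted far from the training reference, measured in reference standard deviations; the threshold, window sizes and data are illustrative assumptions.

```python
# Minimal drift check: alert when the recent feature distribution
# shifts too far from the training reference. Threshold and data
# are illustrative assumptions.
import statistics

def drift_alert(reference, recent, z_threshold=3.0):
    """True when the recent mean sits more than z_threshold
    reference standard deviations from the reference mean."""
    ref_mean = statistics.mean(reference)
    ref_std = statistics.stdev(reference)
    z = abs(statistics.mean(recent) - ref_mean) / ref_std
    return z > z_threshold

training_load = [100, 102, 98, 101, 99, 100, 103, 97]  # MW, reference window
normal_week   = [99, 101, 100, 102, 98]
tampered_week = [140, 138, 142, 141, 139]              # e.g. corrupted inputs

print(drift_alert(training_load, normal_week))    # no alert
print(drift_alert(training_load, tampered_week))  # alert
```

Production systems use richer statistics than a mean shift, but even this simple audit would catch the crude input manipulation in the example.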
Strengthening AI through data governance
Data governance forms the foundation of trustworthy AI, and it is consistently the piece utilities underinvest in.
Data interoperability, meaning how information moves across departments and systems, directly determines model accuracy. Many utilities maintain separate databases for operations, maintenance and customer service, creating silos that hinder holistic analysis. Integrating these datasets allows AI to contextualize predictions and improve optimization. Without integration, models are predicting in the dark.
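What "integration" means in practice can be as simple as joining siloed records on a shared asset identifier so a model sees one contextualized record per asset. The sketch below uses invented field names and data purely for illustration.

```python
# Illustrative join of siloed operations and maintenance records on a
# shared asset identifier. Field names and values are invented.

operations = {
    "XFMR-001": {"load_pct": 87, "last_outage_min": 45},
    "XFMR-002": {"load_pct": 62, "last_outage_min": 0},
}
maintenance = {
    "XFMR-001": {"years_since_service": 4},
    "XFMR-002": {"years_since_service": 1},
}

def merged_features(asset_id):
    """One unified feature record per asset instead of two silos."""
    record = {"asset_id": asset_id}
    record.update(operations.get(asset_id, {}))
    record.update(maintenance.get(asset_id, {}))
    return record

print(merged_features("XFMR-001"))
```

A model that sees both high load and four years since service can flag a transformer that neither silo, viewed alone, would rank as urgent.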
Standardized architectures and common data taxonomies enhance both performance and accountability. Shared frameworks allow multiple utilities to train and validate algorithms collaboratively while protecting sensitive information through anonymization.
A common misunderstanding among policymakers is assuming that data sharing undermines competition. In reality, structured collaboration boosts collective resilience. By pooling non-proprietary operational data, utilities create an ecosystem where every participant benefits from improved reliability models — much like the electrical grid itself, interconnected and interdependent.
Policy is where technical optimism collides with bureaucracy.
The United States remains hindered by fragmented oversight among federal, state and local agencies. Without unified guidance, utilities face a patchwork of regional regulations that hinder adoption. Poorly regulated AI could create inequities in service reliability, favoring data-rich urban regions over rural areas.
The contrast with other countries is instructive. Denmark and Norway, the clearest examples of nations that aligned their energy, data and industrial strategies, outperform their peers in grid stability and renewable integration. Clarifying jurisdictional boundaries and data standards would allow AI projects in the U.S. to scale efficiently. That clarification requires political will that has been largely absent.
Reframing the sustainability narrative
Beyond short-term reliability, AI is reshaping how utilities meet long-term sustainability goals.
Predictive algorithms help integrate renewable resources by forecasting variability and adjusting generation accordingly. Through digital twins, utilities can simulate thousands of energy-transition scenarios before implementing them, revealing trade-offs between cost, reliability and emissions that allow data-driven decisions in support of climate targets.
But the public narrative consistently oversimplifies the relationship between AI and sustainability. Many assume that any AI adoption automatically promotes clean energy, ignoring the carbon cost of data centers and model training. Understanding AI's environmental footprint is essential to evaluating its true role in decarbonization. The industry owes its customers, its regulators and itself a more honest accounting.
Looking forward, AI's role in energy reliability will expand from prediction to autonomous coordination. Self-healing grids that can isolate faults and reroute power autonomously are no longer speculative. Decentralized AI agents managing microgrids through peer-to-peer communication, each node learning from its neighbors to maintain stability, are in active development.
These advances promise faster recovery times but introduce new challenges: interoperability, cybersecurity and the need for regulatory frameworks that define accountability in automated decisions.
The misunderstanding most likely to persist is the belief that AI is an end point rather than an evolving partnership between technology and people. The future grid will not be fully automated; it will be collaborative: a system where algorithms augment rather than replace human decision-making.
Genuine improvement arises from the synergy of data, infrastructure and human expertise. AI's true power lies not in autonomous prediction but in its integration into the social and organizational fabric of utility operations. Making AI work for utility companies requires more than powerful algorithms.
It requires a change in power itself, from technology as a master to technology as a partner.