Brandon N. Owens is the founder of AIxEnergy, an independent thought leadership platform.
I have spent the past two decades analyzing energy systems — across corridors, markets and control rooms. If there is one pattern that repeats, it’s this: technology moves faster than governance. Today, artificial intelligence is on the cusp of reshaping how we forecast demand, dispatch energy, manage outages and allocate investment across the grid. But while the technology races ahead, the frameworks to guide it have barely begun to form.
What we’re witnessing is not just digital transformation. It is cognitive infrastructure — a system that anticipates, learns and optimizes. In some control centers, AI is already being used to balance distributed energy, identify faults and forecast system stress. In others, it is guiding major capital decisions based on probabilistic scenario modeling. And soon, these systems may be making ethical decisions — without ever being trained in ethics.
The risk is not some science-fiction singularity. It is something far more practical: optimization without deliberation.
Imagine an AI model designed to restore power after an outage. If it is trained solely to maximize economic productivity, it may prioritize large warehouses over nursing homes — not out of malice, but because that’s what the objective function rewards. Or consider forecasting algorithms that perpetuate underinvestment in low-income neighborhoods because historical usage was low — not because demand is low, but because access has been limited.
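To make the mechanics concrete, here is a minimal Python sketch. Everything in it is hypothetical, including the feeder names, dollar values and weights, but it shows how the same restoration engine ranks a warehouse above a nursing home, or the reverse, depending entirely on what the objective function is told to reward.

```python
# Purely illustrative: feeder names, load values, and weights are hypothetical.
# The point is that the objective function, not any malfunction, decides
# which sites come back first after an outage.

feeders = [
    # (name, economic value of served load in $/hour, critical-care score 0-1)
    ("distribution_warehouse", 12_000, 0.0),
    ("nursing_home",            1_500, 1.0),
    ("apartment_block",         2_000, 0.4),
]

def restoration_score(feeder, criticality_weight=0.0):
    """Score a feeder for restoration order.

    With criticality_weight=0 the objective rewards economic value alone,
    so the warehouse outranks the nursing home. A nonzero weight is a
    public-value judgment that the optimizer will never make by itself.
    """
    name, economic_value, criticality = feeder
    return economic_value + criticality_weight * 100_000 * criticality

# Objective as "designed": economics only.
print([name for name, *_ in sorted(feeders, key=restoration_score, reverse=True)])
# ['distribution_warehouse', 'apartment_block', 'nursing_home']

# Same engine, objective amended to value critical care.
print([name for name, *_ in sorted(
    feeders,
    key=lambda f: restoration_score(f, criticality_weight=1.0),
    reverse=True)])
# ['nursing_home', 'apartment_block', 'distribution_warehouse']
```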
These are not hypothetical edge cases. They are quietly becoming part of real-world grid operations — often buried deep within optimization engines, procurement models and distributed energy resource (DER) orchestration platforms.
The problem isn’t that AI is malfunctioning. The problem is that it’s working exactly as designed — and we haven’t designed it to align with public values.
The industry has been here before. For much of the 20th century, infrastructure decisions — where to route highways, site power plants or deploy upgrades — often reinforced systemic inequities. The damage was not abstract: it was felt in asthma rates, disconnection notices and neighborhood disinvestment. What we now call “energy justice” began as a response to systems that operated with blinders on.
AI threatens to replicate that pattern — at scale, and at speed — if we do not act now to build in governance, explainability and accountability.
What does that look like?
- Certifiable AI: Just as hardware components require testing and approval, AI systems operating in critical infrastructure should undergo model validation, behavior audits and drift detection (a minimal drift-detection sketch follows this list).
- Explainability Protocols: Grid-facing AI should not be black boxes. Operators, regulators and the public need to know how decisions are made — and have mechanisms for challenge or override.
- Trust Frameworks: We need to define the rules of the road: who is responsible when AI decisions go wrong? What values are embedded in system objectives? And who gets to update them?
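To give a flavor of the first item, and drift detection in particular, here is a minimal sketch that compares a model's recent forecast errors against the errors recorded at validation time using a two-sample Kolmogorov-Smirnov test. The data, window sizes and alarm threshold are hypothetical; a real certification regime would set these through testing and regulatory review, and would pair the check with behavior audits and re-validation requirements.

```python
# Hypothetical sketch of one "certifiable AI" ingredient: drift detection.
# Compares recent forecast errors with the errors observed at validation time.
# Thresholds, window sizes, and data are illustrative, not a standard.

import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)

# Errors (MW) recorded when the model was validated and approved.
baseline_errors = rng.normal(loc=0.0, scale=5.0, size=1_000)

# Errors from the most recent operating window: a bias has crept in,
# for example because load patterns shifted after new data centers connected.
recent_errors = rng.normal(loc=3.0, scale=7.0, size=250)

def drift_alarm(baseline, recent, p_threshold=0.01):
    """Flag when recent errors no longer look like validation-time errors."""
    result = ks_2samp(baseline, recent)
    return result.pvalue < p_threshold, result.statistic, result.pvalue

drifted, stat, p = drift_alarm(baseline_errors, recent_errors)
if drifted:
    print(f"Drift detected (KS={stat:.3f}, p={p:.2e}): trigger re-validation.")
else:
    print("No significant drift in this window.")
```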
These guardrails are not barriers to innovation — they are enablers. They ensure that as AI becomes integral to grid operations, it operates with civic intention, not just technical efficiency.
With climate volatility, data-center-driven load growth and electrification accelerating rapidly, grid operators and utilities are under enormous pressure to modernize. AI will absolutely be part of the solution. But without governance, it could also amplify the very inequities the clean energy transition seeks to correct.
It is time to stop treating AI as just a tool — and start treating it as a decision-maker in need of direction.