Eric Swidey is the founder of Thirty Seven Inc., which offers AI decision-making frameworks for regulated businesses.
The electric utility industry is about to learn an expensive lesson about artificial intelligence and regulatory compliance.
Critical Infrastructure Protection Reliability Standard CIP-015-1 requires internal network security monitoring by October 2028, and utilities are racing to deploy AI tools that can process the thousands of daily alerts human analysts cannot. What most utilities haven't realized is that these tools create a documentation problem that traditional compliance auditing cannot solve.
When a North American Electric Reliability Corp. auditor asks why an AI system flagged one event but not another, "the model determined it" is not documentation. It is an audit finding waiting to happen.
Compliance with NERC Critical Infrastructure Protection standards has always been about evidence. Every access authorization, every configuration change, every vulnerability assessment requires documentation that can withstand auditor scrutiny. The industry spent two decades building these paper trails.
AI changes the equation. A machine learning model analyzing network traffic makes thousands of micro-decisions daily. Which packets warrant investigation? Which anomalies are false positives? Which alerts get escalated? Each determination has compliance implications. None of them generate the documentation auditors require.
The utilities deploying these tools today are creating a compliance gap that will become visible only when auditors start asking questions. By then, the liability has already accrued.
Consider the math. A typical security monitoring system might flag 10,000 events daily. An AI triage tool reduces that to 500 actionable alerts. What happened to the 9,500 events the AI dismissed? Can you reconstruct why? Can you demonstrate that dismissal was appropriate under CIP-015-1?
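To make the scale of that question concrete, here is a minimal sketch of what a per-event dismissal record would need to capture. Every name and field below is illustrative, not drawn from any specific monitoring tool; the point is that without a record like this for each of the roughly 9,500 dismissed events, there is nothing to show an auditor later.

```python
import json
from datetime import datetime, timezone

def record_dismissal(event_id, model_version, inputs, score, threshold):
    """Create a contemporaneous record for one event the triage model scored.

    Hypothetical schema: captures which model ran, what it actually saw,
    and why the event fell below (or above) the "actionable" threshold.
    """
    return {
        "event_id": event_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,  # which model made the call
        "inputs": inputs,                # features the model actually saw
        "score": score,                  # model output
        "threshold": threshold,          # the cutoff in force at the time
        "disposition": "dismissed" if score < threshold else "escalated",
    }

rec = record_dismissal("evt-0001", "triage-v3.2", {"bytes": 512}, 0.12, 0.65)
print(json.dumps(rec, indent=2))
```

At 10,000 events a day, this is a few megabytes of structured records daily, which is trivial to store and exactly what a reconstruction effort years later cannot produce.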
The honest answer for most utilities today is no.
Why traditional auditing cannot keep up
Post-hoc compliance auditing assumes that decisions happen slowly enough to document. A human analyst reviews an alert, makes a determination and logs the rationale. Auditors can trace that chain months or years later.
AI operates at a fundamentally different tempo. By the time an auditor examines a determination, the model that made it may have been retrained three times. The feature weights that drove the decision no longer exist. The training data has been archived or deleted. The decision is orphaned from any explainable context.
Some vendors claim their AI tools include "explainability" features. These typically generate post-hoc rationalizations, not real-time audit trails. An algorithm that says "this alert was dismissed because of factors X, Y and Z" after the fact is not the same as documenting the actual decision process in real time.
NERC auditors know the difference. They have spent years learning to distinguish genuine compliance documentation from reconstructed narratives. The same skepticism that catches backdated access logs will catch AI explanations that do not hold up under technical scrutiny.
What utilities should demand from AI vendors
The solution is not to avoid AI. The operational benefits are too significant, and the volume of security data is too large for human-only approaches. The solution is to demand AI tools that build compliance documentation into their architecture rather than bolting it on afterward.
Three capabilities matter:
- Real-time audit trails: Every AI determination should generate a contemporaneous record of the inputs, the decision logic and the output. Not a summary. Not an explanation generated later. The actual computational trace that an auditor can examine.
- Adversarial verification: Before an AI determination becomes final, a separate system should challenge it. Did the model consider the right inputs? Are there edge cases that would change the outcome? Is the determination consistent with similar cases? This challenge-and-response creates the documentation that demonstrates due diligence.
- Immutable logging: Audit trails are worthless if they can be modified. Every determination and every challenge should be cryptographically timestamped and stored in a manner that prevents tampering. When auditors examine records from 2027 in 2030, they need confidence that those records reflect what actually happened.
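The three capabilities fit together. As a sketch of the architecture, the following hash-chained log stores each determination alongside its adversarial challenge result, and any later modification breaks verification. This is an illustrative toy, not any vendor's implementation; a production system would add trusted timestamping and anchoring to external storage.

```python
import hashlib
import json
from datetime import datetime, timezone

class AuditChain:
    """Append-only, hash-chained log of AI determinations.

    Each entry embeds the hash of the previous entry, so altering any
    historical record invalidates every hash after it.
    """
    def __init__(self):
        self.entries = []
        self._last_hash = "0" * 64  # genesis value

    def append(self, determination, challenge_result):
        """Log a determination and its challenge result contemporaneously."""
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "determination": determination,   # inputs, decision logic, output
            "challenge": challenge_result,    # adversarial verification result
            "prev_hash": self._last_hash,
        }
        digest = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        entry["hash"] = digest
        self._last_hash = digest
        self.entries.append(entry)
        return digest

    def verify(self):
        """Recompute the chain; any tampered entry fails verification."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            if body["prev_hash"] != prev:
                return False
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if recomputed != e["hash"]:
                return False
            prev = e["hash"]
        return True

chain = AuditChain()
chain.append({"event": "evt-0001", "output": "dismissed"},
             {"consistent_with_peers": True})
print(chain.verify())  # True on an untampered chain
chain.entries[0]["determination"]["output"] = "escalated"  # simulate tampering
print(chain.verify())  # False once any record is modified
```

The design choice worth noting is that the challenge result is written into the same tamper-evident record as the determination itself, so the evidence of due diligence cannot be separated from the decision it vouches for.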
These are not exotic requirements. Financial services and healthcare have been building these capabilities for years. The electric utility sector is simply behind.
NERC penalties for CIP violations routinely reach seven figures. AI adds a new category of exposure that the industry has not yet priced.
The utilities that deploy AI monitoring tools without adequate documentation will face a choice when auditors arrive: admit they cannot explain their AI's determinations, or attempt to reconstruct explanations after the fact. The first option is a violation. The second is potentially worse.
Regulators are paying attention. FERC has signaled increased scrutiny of AI in critical infrastructure. State commissions are asking questions about algorithmic accountability. The window for deploying AI tools that cannot withstand regulatory examination is closing.
The path forward
Utilities planning AI deployments for CIP-015-1 compliance should add documentation requirements to their vendor evaluations now, not after contracts are signed.
The questions are straightforward:

- How does your system document each determination in real time?
- Can we produce a complete audit trail for any decision made in the past three years?
- How do you verify that AI determinations are appropriate before they become final?
- What evidence can we provide auditors that our AI is operating within its intended parameters?
Vendors that cannot answer these questions are selling tools that create compliance liability, not compliance capability.
The October 2028 deadline feels distant. It is not. Procurement cycles, implementation timelines and testing requirements mean that decisions made in 2026 will determine which utilities are audit-ready in 2028, and which are explaining to NERC why their AI cannot account for its own decisions.
The utilities that get this right will have AI tools that make their compliance programs stronger. The utilities that get this wrong will have the most sophisticated systems for generating audit findings their industry has ever seen.