Andrew Rynhard is chief technology officer for Sidero Labs.
Utilities are increasingly relying on edge computing to support fast, dynamic decision-making across distributed infrastructure. From automated grid balancing to substation control and remote fault detection, edge deployments are helping modernize aging critical infrastructure and drive efficiency.
But as utilities become more software-driven, they’re also introducing new cybersecurity challenges, and nowhere is that more true right now than at the edge.
The expanding edge footprint is a significant security shift. Edge nodes are being deployed at transformer stations, within distributed energy resources, at remote monitoring points and alongside smart meters. These systems often operate in locations with no IT staff nearby, let alone onsite. They may rely on cellular or intermittent connections, and they often run continuously for years without routine maintenance cycles.
For adversaries looking to disrupt utility operations or test the resilience of national infrastructure, edge systems have become an increasingly tempting target. However, the security conversation still tends to revolve around network segmentation, threat detection or endpoint access control. Important as those are, they miss one foundational layer: the operating system.
The overlooked surface: OS-level risk in utilities
Every edge deployment runs an operating system and, in many utility environments, that OS is the weakest link. Traditional Linux distributions (originally built for servers or desktops) still underpin many OT systems, grid controllers and IoT gateways. These OSes are powerful, flexible and familiar. But they weren’t designed for today’s threat environment, nor for the realities of edge deployments that now often run on containerized architectures.
Most conventional OSes are mutable by default. Their configuration can drift, their file systems can be written to by any number of services or processes, and their security settings can be altered over time (often unintentionally). In a centralized data center or enterprise network, these issues are manageable because systems are easy to audit and maintain. At the edge, where access is limited and conditions change, they become liabilities. A system that is secure on Day 1 may no longer be secure on Day 1,000, and you may not know what changed.
For utilities now operating thousands of edge systems, the risk compounds quickly. A small misconfiguration rolled out across 10,000 nodes isn’t just a technical error, but an exploitable pattern. Attackers don’t need zero-day vulnerabilities when they can exploit outdated packages, exposed services or poorly secured update mechanisms.
Why immutability matters at the edge
To meet modern security expectations, utilities’ edge infrastructure needs more than reactive patching or policy enforcement. It also needs to be designed from the ground up to resist tampering, misconfiguration and drift. This is where the concept of an immutable operating system becomes powerful.
An immutable OS is one that cannot be altered during runtime. The system boots into a known-good state (defined and verified ahead of time) and remains in that state throughout operation. No one can log in and manually tweak firewall settings, nor can a rogue process write to the disk. Configuration is declarative, meaning it’s defined through code and automatically enforced on every single node, every single time.
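To make the idea of declarative, automatically enforced configuration concrete, here is a minimal sketch. The desired-state keys, the node names and the reconcile step are illustrative assumptions, not any particular product's API; real immutable OSes express this through versioned, machine-readable configs.

```python
"""Minimal sketch of declarative configuration enforcement.
The desired-state schema below is a made-up illustration."""

# Desired state is defined once, in code, and applied identically to every node.
DESIRED_STATE = {
    "firewall": {"inbound": "deny-all", "allow": ["tcp/6443"]},
    "services": ["kubelet", "metrics-agent"],
    "ssh_enabled": False,
}

def reconcile(observed: dict, desired: dict) -> dict:
    """Return the drift between what a node reports and what was declared.
    An empty result means the node matches its declaration exactly."""
    drift = {}
    for key, want in desired.items():
        have = observed.get(key)
        if have != want:
            drift[key] = {"declared": want, "observed": have}
    return drift

# A node that was manually "tweaked" in the field shows up as drift:
tampered = {**DESIRED_STATE, "ssh_enabled": True}
print(reconcile(tampered, DESIRED_STATE))
# An untouched node reports no drift:
print(reconcile(dict(DESIRED_STATE), DESIRED_STATE))
```

The point of the sketch is the direction of authority: the declaration is the source of truth, and any observed deviation is treated as a defect to correct, not a local customization to preserve.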
This matters to utilities because the edge is largely inaccessible. If something goes wrong (whether it’s an outage, a breach, or just a silent misconfiguration), physical intervention is costly and slow. Immutable systems reduce the need for human touch. They also make it far easier to reason about security posture at scale. If every node is running the exact same image, with the exact same configuration, verified cryptographically, then audit becomes a matter of validating one system, not thousands.
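The "validate one system, not thousands" claim can be sketched in a few lines: if every node attests the cryptographic digest of the image it is running, fleet audit collapses to a set comparison against one audited golden digest. The node names and image strings below are invented for illustration.

```python
"""Sketch of fleet-wide audit by digest comparison. All names and
image contents here are illustrative assumptions."""
import hashlib

golden = hashlib.sha256(b"edge-os-v1.7").hexdigest()  # the one audited image

# Digests attested by field nodes (illustrative data):
fleet = {
    "substation-001": golden,
    "substation-002": golden,
    "der-site-117":   hashlib.sha256(b"edge-os-v1.7-modified").hexdigest(),
}

# Audit reduces to: anything not matching the golden digest needs attention.
outliers = sorted(n for n, digest in fleet.items() if digest != golden)
print(outliers)
```

Because any byte-level change to an image changes its digest, a tampered or out-of-date node cannot blend in with the rest of the fleet.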
Immutable systems also simplify updates. Rather than patching live systems in-place (which is a risky prospect in operational technology), you replace the running image with a new, verified version. The update is atomic, meaning it either succeeds completely or fails without altering the running system. That kind of rollback safety is critical when uptime and predictability matter more than raw agility.
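The atomic-swap pattern behind this kind of update can be sketched with two image "slots" on disk and a symlink as the boot pointer; a single rename either succeeds completely or leaves the old pointer in place. The slot layout and function names are assumptions for illustration, not any vendor's update mechanism.

```python
"""Sketch of an A/B-style atomic image update. The boot pointer is a
symlink, and os.replace (rename(2)) swaps it atomically on POSIX, so
the node always points at one complete, verified image."""
import hashlib
import os

def verify(path: str, expected_sha256: str) -> bool:
    """Check a staged image against its published digest before use."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest() == expected_sha256

def atomic_update(boot_link: str, new_image: str, expected_sha256: str) -> bool:
    """Stage the new image, verify it, then flip the boot pointer in one
    atomic rename. On any failure the current image stays active."""
    if not verify(new_image, expected_sha256):
        return False  # bad or tampered download: running system untouched
    tmp_link = boot_link + ".staged"
    os.symlink(new_image, tmp_link)
    os.replace(tmp_link, boot_link)  # atomic: old or new, never half-updated
    return True
```

Rolling back is the same operation in reverse: point the boot link back at the previous slot. There is no in-place patching step during which the system is partially one version and partially another.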
As utilities begin adopting containerized architectures at the edge, often orchestrated by lightweight Kubernetes distributions, security becomes even more dependent on a trusted foundation beneath the orchestration layer. If the operating system isn’t secure, then no matter how sophisticated your containers or orchestration tools may be, you’re building on sand.
A real-world shift is underway
Some energy providers are already adopting this model, deploying thousands of edge nodes that balance national grids or support DERs with responsive load adjustment. These projects depend on lightweight, secure operating environments that can be centrally managed, even across geographically scattered deployments.
The early results are promising. Organizations are finding they can gain operational flexibility without expanding their risk surface. They’re reducing overhead by eliminating complex patching cycles and increasing resilience through standardized, locked-down software stacks.
More importantly, they’re setting a new baseline for what “secure by design” means in utilities’ edge deployments. Instead of relying on bolt-on security tools or manual oversight, they’re embedding security into the OS itself.
As the edge becomes a permanent and essential part of utility operations, utilities should look for edge systems that are minimal, declarative and tamper-resistant. The operating system should include only the components required to run its specific workload, avoiding unnecessary packages or exposed services that widen the attack surface.
System behavior should be defined through code (declaratively) and enforced automatically across every deployment. Login interfaces such as SSH should be eliminated altogether, especially in field-deployed infrastructure that cannot be reliably maintained through manual intervention. Security should be verified, not assumed, with cryptographic validation of every system image and configuration at boot. Last but not least, when it comes time to update, the process should be both safe and capable of rolling back cleanly in the event of failure, without leaving the system in an unpredictable state.
For utilities facing expanding risk and rising complexity, these tenets form the basis of a more defensible, manageable edge.
It’s easy to focus security attention on the parts of the stack utilities can see: network traffic, APIs, dashboards, etc. But the OS layer is where everything begins. It’s where services run, where controls are enforced, and where mistakes often take root. As utilities modernize their infrastructure, they should demand the same rigor from their software foundations. Security isn’t a bolt-on service; it’s a system property, and it must start at the operating system.