AI Governance for Energy and Utilities: What NIS2, DORA, and the EU AI Act Mean for Your AI Deployments
TL;DR: How EU energy and utility SMEs should govern AI under NIS2, DORA, and the EU AI Act. Covers grid management, trading, and critical infrastructure.
Energy and utility companies deploying AI face a regulatory stack heavier than in most industries and evolving faster than most governance teams can track. The three-layer stack for energy is GDPR + EU AI Act + NIS2 (with DORA for energy trading operations and REMIT for market integrity), and the critical infrastructure designation changes everything about how quickly you must respond when something goes wrong. For a Spanish renewables operator, a Dutch grid balancer, or a Polish district heating utility, the practical answer to "how strict are the rules?" is "stricter than for ordinary IT".
The operational constraint that makes energy different: downtime is not merely a business inconvenience for the operations team or the CISO; it is a public safety issue. An AI system that manages grid load, predicts demand, or optimises energy trading operates in an environment where a wrong decision can cascade into supply disruption. Governance for these systems is not about compliance checklists; it is about operational resilience, and the clock is short. Most EU AI Act obligations apply from 2 August 2026, and DORA has applied since 17 January 2025.
The Energy-Specific Regulatory Stack
Layer 1: EU-Wide Base
- GDPR: applies to smart meter data, customer consumption patterns, employee data, and any personalised energy management services
- EU AI Act: AI systems used in the management and operation of critical infrastructure (including energy supply) are explicitly classified as high-risk in Annex III, point 2. Most rules apply from 2 August 2026
Layer 2: Sector-Specific Regulation
NIS2 Directive (Network and Information Security):
- Energy is a Sector of High Criticality under NIS2 Annex I
- Essential entities (electricity, gas, oil, hydrogen, district heating, district cooling) must implement comprehensive cybersecurity risk management measures
- AI systems connected to operational technology (OT) networks are in scope for NIS2 cybersecurity obligations
- Incident reporting: significant incidents must be reported within 24 hours (early warning), 72 hours (incident notification), and 1 month (final report)
- Supply chain security: AI vendors and model providers are part of your NIS2 supply chain risk assessment
- National variability caveat: NIS2 obligations are EU-level, but national implementation and supervisory detail vary by Member State. Confirm the exact transposition status, the competent authority, and the local reporting templates with your national CSIRT or sector regulator before assuming a uniform European baseline
DORA (Digital Operational Resilience Act):
- Applies if your energy company engages in financial activities (energy trading, derivatives, market operations). DORA has applied since 17 January 2025
- ICT risk management framework must cover AI systems used in trading decisions
- Third-party risk management: AI model providers are ICT third-party service providers under DORA
- Operational resilience testing: AI systems in trading operations must be included in your resilience testing programme
REMIT (Regulation on Energy Market Integrity and Transparency):
- AI used in energy trading must not create or amplify market manipulation
- Algorithmic trading in energy markets requires transparency and audit trail
Layer 3: Operational Controls
- ICS/SCADA security: AI connected to grid management systems inherits the full OT security posture, including air-gapping considerations, network segmentation, and safety-instrumented system (SIS) independence
- Real-time performance requirements: AI in grid balancing operates under millisecond-to-second latency requirements. Governance must ensure that model updates do not introduce latency that affects grid stability
- Redundancy and failover: AI systems in critical energy operations must have deterministic fallback modes. When the AI fails, the system must degrade to a safe, predictable state, not an unpredictable one
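The latency constraint above can be enforced as a release gate before any model update reaches the grid-balancing path. The sketch below is illustrative only: the budget value and function names are assumptions, not figures from any grid code or standard, and the real budget must come from your operator's stability requirements.

```python
import time

# Illustrative latency budget for a grid-balancing inference path.
# This is an assumed placeholder, not a value from any standard.
LATENCY_BUDGET_MS = 50.0

def passes_latency_gate(model, sample_inputs, budget_ms=LATENCY_BUDGET_MS,
                        percentile=0.99):
    """Return True only if the model's tail inference latency fits the budget.

    Gating on a high percentile rather than the mean matters here:
    grid stability is threatened by the slowest responses, not the average.
    """
    timings_ms = []
    for x in sample_inputs:
        start = time.perf_counter()
        model(x)
        timings_ms.append((time.perf_counter() - start) * 1000.0)
    timings_ms.sort()
    idx = min(int(len(timings_ms) * percentile), len(timings_ms) - 1)
    return timings_ms[idx] <= budget_ms
```

A gate like this belongs in the deployment pipeline, so that a model update which regresses tail latency is rejected before it ever touches the OT-facing path.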
Four AI Use Cases in Energy and Their Governance Requirements
| Use Case | Risk Level | Key Governance Requirement |
| --- | --- | --- |
| Demand forecasting | Medium | Model accuracy monitoring, bias detection across customer segments, transparent methodology for regulatory reporting |
| Grid load balancing | High | EU AI Act high-risk classification, real-time performance monitoring, deterministic failover, NIS2 incident reporting |
| Predictive maintenance (substations, turbines) | Medium | Similar to manufacturing governance: validation protocol, degradation monitoring, human override capability |
| Energy trading (algorithmic) | High | DORA ICT risk management, REMIT market integrity, audit trail for all trading decisions, explainability for regulators |
Building Energy-Specific Controls
NIS2-Compliant AI Risk Management
NIS2 requires "appropriate and proportionate technical, operational, and organisational measures to manage the risks posed to the security of network and information systems." For AI systems, this translates to:
- AI asset inventory: every AI system that touches OT networks or critical operations must be catalogued with its function, data inputs, decision authority, and failure modes
- Vulnerability management for AI: model vulnerabilities (adversarial inputs, data poisoning, prompt injection for LLM-based systems) must be included in your vulnerability management programme alongside traditional software vulnerabilities
- Access control: AI system administration, model updates, and training data management must follow the principle of least privilege with multi-factor authentication
- Backup and recovery: AI models, training data, and configuration must be included in your backup and disaster recovery plan. A corrupted model must be recoverable to a known-good state within your recovery time objective
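The inventory requirement above is concrete enough to sketch as a record type. The field and method names below are illustrative assumptions to be adapted to your own CMDB schema, not a NIS2-mandated format:

```python
from dataclasses import dataclass, field

@dataclass
class AIAssetRecord:
    """One entry in an NIS2-oriented AI asset inventory (illustrative schema)."""
    system_name: str
    function: str              # e.g. "grid load balancing"
    touches_ot_network: bool   # in scope for OT security obligations if True
    data_inputs: list = field(default_factory=list)
    decision_authority: str = "advisory"   # "advisory" or "autonomous"
    failure_modes: list = field(default_factory=list)
    vendor: str = ""           # feeds the NIS2 supply chain risk assessment

    def review_priority(self) -> str:
        """Crude triage: OT-connected autonomous systems come first."""
        if self.touches_ot_network and self.decision_authority == "autonomous":
            return "critical"
        if self.touches_ot_network:
            return "high"
        return "standard"
```

Even this minimal shape forces the catalogue to answer the four questions NIS2 assessors ask: what the system does, what feeds it, what it is allowed to decide, and how it fails.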
Critical Infrastructure Incident Response for AI
Energy companies face the strictest incident reporting timelines in the EU. When an AI system causes or contributes to an incident:
- 24 hours: submit early warning to the competent authority (national CSIRT or relevant authority). Include: what happened, whether the AI system was involved, and initial assessment of impact on energy supply
- 72 hours: submit incident notification with root cause analysis (if known), affected systems, and remediation actions taken
- 1 month: submit final report with complete root cause analysis, lessons learned, and governance improvements implemented
Your AI incident response plan must include NIS2-specific escalation paths and reporting templates for AI-related incidents.
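The three-stage cadence above is simple enough to encode directly in incident tooling. A minimal deadline calculator, assuming "1 month" is approximated as 30 days (confirm the exact interpretation with your competent authority):

```python
from datetime import datetime, timedelta

# NIS2 reporting cadence for significant incidents. Deadlines run from
# the moment the entity becomes aware of the incident.
NIS2_STAGES = {
    "early_warning": timedelta(hours=24),
    "incident_notification": timedelta(hours=72),
    "final_report": timedelta(days=30),  # "1 month" approximated as 30 days
}

def reporting_deadlines(aware_at: datetime) -> dict:
    """Return the submission deadline for each NIS2 reporting stage."""
    return {stage: aware_at + delta for stage, delta in NIS2_STAGES.items()}
```

Wiring this into the on-call pager means the 24-hour early-warning clock starts automatically instead of depending on someone remembering it during an outage.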
Deterministic Failover Design
The non-negotiable requirement for AI in energy: when the AI fails, the system must be safe.
- Failover mode: define the deterministic behaviour for every AI-controlled function when the model is unavailable, produces low-confidence outputs, or behaves unexpectedly
- Separation from SIS: AI systems must not be integrated into safety-instrumented systems (SIS) without meeting IEC 61511 functional safety requirements. Keep AI advisory and SIS deterministic
- Testing: failover behaviour must be tested regularly, including scenarios where the AI produces confidently wrong outputs (not just scenarios where it produces no output)
A worked example for an EU mid-market utility: a transmission operator runs an AI demand-forecast model that drives a load-balancing recommendation. When the model's confidence drops below a written threshold, the recommendation is suppressed and the system falls back to the previous deterministic forecast curve. The CISO's NIS2 incident playbook treats a sustained suppression event as a degradation incident requiring early-warning notification within 24 hours; the operations director and the regulatory affairs lead are co-named owners on the playbook.
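The suppression logic in the worked example can be sketched as a thin wrapper around the model output. The threshold value and names are illustrative assumptions, not taken from any real playbook:

```python
# Illustrative "written threshold" from the playbook in the worked example.
CONFIDENCE_THRESHOLD = 0.85

def balancing_recommendation(model_output, fallback_forecast):
    """Suppress the AI recommendation when model confidence is below threshold.

    model_output is an assumed (value, confidence) pair. On suppression the
    system falls back to the previous deterministic forecast curve, so it
    degrades to a safe, predictable state instead of acting on a
    low-confidence output.
    """
    value, confidence = model_output
    if confidence < CONFIDENCE_THRESHOLD:
        # Tag the source so sustained suppression can be detected and
        # escalated as a degradation incident per the playbook.
        return fallback_forecast, "suppressed"
    return value, "ai"
```

Returning the source tag alongside the value is the design choice that makes the playbook workable: the monitoring layer can count consecutive "suppressed" outputs and trigger the early-warning escalation without inspecting the model itself.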
Frequently Asked Questions
Is all AI in energy classified as high risk under the EU AI Act?
No. AI used in the management and operation of critical infrastructure is high risk. Demand forecasting for billing purposes is not. Grid load balancing is. The classification depends on the function, not the technology. AI that influences supply reliability or safety is high risk. AI that optimises back-office operations is not.
How does NIS2 affect our choice of AI vendors?
NIS2 requires supply chain security assessment. Your AI model provider is a third-party service provider whose security posture affects your compliance. You must evaluate: where the model is hosted, how training data is handled, what access the vendor has to your systems, and what happens if the vendor is compromised. This applies to cloud AI services, on-premises deployments with vendor support, and open-source models with commercial support contracts.
Do we need separate governance for IT-side AI and OT-side AI?
Yes. IT-side AI (customer analytics, billing optimisation, workforce management) follows standard IT governance. OT-side AI (grid management, SCADA-connected systems, safety monitoring) must follow OT governance including ICS security standards (IEC 62443), network segmentation, and safety system independence. The governance frameworks should share a common policy layer but have distinct operational controls.
What happens if our AI trading system causes a market anomaly?
REMIT requires that market participants do not engage in market manipulation or insider trading. If your AI trading system causes an anomalous price movement, you must be able to demonstrate that the system was operating within its designed parameters, that you had adequate controls in place, and that the anomaly was not the result of a design flaw or inadequate governance. Audit trail and explainability are your primary defences.
Further Reading
- How to Build an AI Governance Framework That Fits Your Industry
- AI Governance for Financial Services SMEs
- AI Incident Response for Engineering Leaders
- How to Build an AI Security Posture for Your Engineering Organisation
Get Energy-Sector AI Governance Right
If your energy or utility company is deploying AI without accounting for NIS2, DORA, and the critical infrastructure classification under the EU AI Act, your regulatory exposure is growing faster than your governance can track.
Our AI Readiness Assessment evaluates your AI governance against the three-layer regulatory stack for energy and utilities and identifies gaps in NIS2 compliance, OT security controls, and incident response readiness.
If you need help building the governance operating model for AI in critical infrastructure, our AI Consulting services design frameworks that satisfy both the sector regulator and your operational resilience requirements.

