
Interest in Artificial Intelligence (“AI”) has increased significantly in recent months, and many of our private equity and infrastructure clients are now looking to assess how AI will impact their investments, to understand AI business opportunities (whether in related infrastructure or otherwise), and to consider the risks AI presents, including recent regulatory developments, particularly in Europe with the new AI Act. Of particular relevance to infrastructure investors, the new AI Act classifies AI systems used in some critical infrastructure assets as “High-risk” and therefore potentially subject to stringent requirements. At Sidley, we have a team who specialise in understanding AI and its impact on our clients and their investments. Please do not hesitate to reach out to any of us should you wish to discuss further.
EU Artificial Intelligence Act (the “AI Act”)
The AI Act was recently adopted by the European Parliament and is expected to formally enter into force next month. The AI Act has been described as the world’s first AI law and governs the offering and use of AI systems throughout the EU. Importantly, the AI Act will apply both to companies in the EU and to companies outside the EU, where the output from an AI system is used in the EU or where the company is providing (selling) AI systems in the EU.
The AI Act has a broad scope of application across all sectors and industries, including companies in the energy and infrastructure sectors. The AI Act imposes regulatory requirements on AI system providers, importers, distributors and deployers (i.e. users) according to the level of risk posed by the respective AI system (being “unacceptable”, “high”, “limited” or “minimal” risk). The AI Act therefore affects anyone using AI in systems connected to physical infrastructure, for example to model ‘normal’ operations, spot anomalies, monitor traffic networks in real time or check physical networks for faults using image recognition. The energy sector, for example, increasingly deploys AI to support grid balancing.
High-risk AI Systems and Critical Infrastructure
AI systems are classified into four different risk categories: Unacceptable risk, High-risk, Limited risk and Minimal risk. Unacceptable risk AI systems are banned from being offered and used in the EU, while High-risk AI systems are subject to stringent regulatory requirements which require, among other things: (i) the establishment of quality, post-marketing monitoring and risk assessment systems; (ii) the training of AI systems using high-quality data sets; and (iii) the implementation of AI governance programs and human oversight controls.
Importantly, the AI Act classifies AI systems used as safety components in the management and operation of critical digital infrastructure, road traffic and the supply of water, gas, heating and electricity as High-risk. Therefore, companies in the energy and infrastructure sectors using AI could be subject to the stringent requirements for High-risk systems under the AI Act. In addition, other uses of AI can be High-risk, such as the use of AI for human resources purposes, including recruitment.
Non-compliance with the AI Act may lead to fines of up to 7% of annual worldwide turnover. The AI Act will be implemented on a phased basis, with transition periods ranging from six months (i.e. later this year) to 36 months following entry into force, depending on the regulatory obligation under the AI Act.