AI safety and liability regulation: EU AI regime
Despite the UK being one of the main players in the artificial intelligence (AI) arms race, the EU is one step ahead in regulating the safety of AI. In the previous article in this series, we discussed the changes to the EU product safety and liability regime in comparison to the UK’s proposed reforms. This article, the third in this five-part series, briefly looks at the EU Artificial Intelligence Act (the AI Act, which, despite its name, takes the form of an EU Regulation), the world’s first comprehensive AI law, and the complementary AI Liability Directive (ALD or AI Directive).
The AI Act defines an “AI system” as “a machine-based system that is designed to operate with varying levels of autonomy … and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments” (a definition wide enough to capture technology like Alexa or Google Assistant).
The Council of the EU and the European Parliament reached a provisional agreement on the AI Act on 9 December 2023, and the Parliament formally adopted it on 13 March 2024. The AI Act is expected to take full effect in 2026, with some provisions in force sooner.
New AI technology has created new risks not covered by the existing product safety and liability regime, so reform is necessary to protect individuals from being harmed by AI and provide them with a means of claiming damages for such loss.
What does the AI Act cover?
The AI Act covers businesses involved in placing AI systems on the market, putting them into service or using them. It will cover those:
- Located within the EU.
- Located outside the EU, but placing AI systems on the market or putting them into service in the EU.
- Providing or using AI systems in a third country, where the output produced by the AI system is used in the EU.
What are the rules?
In general, the AI Act will (amongst other things) create rules for placing AI systems on the market and putting them into service, including the prohibition of certain AI practices and specific obligations that vary with the level of risk an AI system presents, as follows:
- Unacceptable risk – AI systems in this category will be considered a threat to the safety, livelihood and rights of people and will consequently be banned.
- High-risk – AI systems in this category will be subject to strict obligations before they can be placed on the market (e.g. risk assessments and mitigation systems).
- Limited risk – AI systems in this category must meet specific transparency obligations.
- Minimal risk – AI systems in this category, such as AI-enabled video games, may be used freely.
Once an AI system has been placed on the market, providers must operate a monitoring system proportionate to the nature and risks of the AI system, and must take corrective action if the system does not conform to the requirements of the AI Act.
AI Liability Directive (ALD)
If an AI system provider fails to comply with the AI Act and an individual (or business) suffers harm as a result, the proposed ALD is intended to ensure that they can be adequately compensated for that loss.
In a similar way to the EU reforms on product liability, the ALD would (amongst other things) impose rules on the disclosure of evidence and on the burden of proof that give claimants a higher likelihood of making a successful claim. For example:
- A claimant may make a non-contractual fault-based claim for damages (meaning that a claimant does not need to have a direct contract with an AI provider and can bring a claim on the basis that the AI provider failed in its duty of care under the AI Act).
- A national court may, in some circumstances, order the disclosure of evidence about high-risk AI systems that are suspected to have caused damage.
- There will be a rebuttable presumption of non-compliance with a duty of care if a defendant fails to disclose evidence ordered by the court.
- There will be a rebuttable presumption of a causal link between the defendant’s fault and the output produced by the AI system, giving claimants a more manageable burden of proof and a greater chance of a successful claim.
EU machinery regulation
Similarly, the introduction of AI systems (as well as digital technology generally and more sophisticated robotics) into machinery led to the Machinery Products Regulation ((EU) 2023/1230) (MPR), which will apply from 14 January 2027.
The old EU Machinery Directive (2006/42/EC) will continue to apply until 14 January 2027, although some provisions of the new MPR will apply before then.
The MPR aims to address shortcomings in the old Machinery Directive, including inadequate cover for emerging technologies and practices (such as increased human/robot collaboration, machinery connected online and autonomous or driverless machines), inconsistencies with other EU product safety legislation, and gaps in cover (in relation to, for example, high-risk machines).
The MPR is an attempt to ensure machines are safe and to increase users’ trust in new technologies (such as AI) by (amongst other things):
- Introducing additional essential health and safety requirements, including those related to cybersecurity and AI.
- Establishing more effective market surveillance for unsafe machinery products.
- Introducing traceability requirements and requirements for digital instructions before placing a product on the market.
- Requiring risk assessments and immediate corrective action to bring non-conforming products into conformity (including withdrawing or recalling the product).
UK AI safety and liability
There is currently no universal body of law governing the provision and use of AI in the UK, and the UK government’s position on AI regulation is still in the proposal stage.
The government published an AI white paper in March 2023 setting out its proposed approach to regulating the use of AI in the UK. Rather than setting up an overarching cross-sector AI regulator, the government favours a principles-based approach, allowing existing regulators to adopt tailored guidance for AI use within their sectors. Regulation will need to ensure or allow for:
- Safety, security and robustness.
- Transparency and explainability.
- Fairness.
- Accountability and governance.
- Contestability and redress.
The white paper was followed by a House of Lords report in July 2023, which recommended that the UK government consider diverging from the EU’s AI regime, while allowing UK companies to align voluntarily with the AI Act.
The rapidly increasing use of AI in products (both business-to-business and business-to-consumer) has created legislative difficulties for the government when it comes to regulation. Indeed, the growth of AI in general has unsettled other areas of law too, including the complex question of who owns the intellectual property rights in content created by AI systems.
The government will need to strike a balance between staying at the forefront of AI innovation and addressing the important question of safety and liability regulation for users of AI systems and AI content. There will also be a need to address concerns over potential regulatory inconsistency across sectors.
In addition, its proposals so far involve voluntary adoption and application of the principles by regulators. In time, that may become a statutory obligation: the government intends to monitor the application and effectiveness of the guidelines and to put matters onto a statutory footing if it considers that necessary.
Practical steps
In preparation for the AI Act, ALD and MPR coming into force, businesses supplying AI systems/AI output and/or in-scope machinery in the EU (even if established in the UK) should:
- Identify which AI system(s) within the business fall within the scope of the AI Act and/or the AI Directive and conduct the appropriate risk assessments.
- Ascertain which risk category the AI system may fall into and comply with the relevant obligations (and remember that any AI systems that do not comply can be prohibited, restricted, withdrawn, or recalled).
- Be aware that non-compliance with the AI Act can lead to administrative fines of varying scales (up to 35 million euros for the most serious breaches, or 7% of total worldwide annual turnover, whichever is the greater); a short illustrative calculation follows this list.
- Implement a robust internal process to respond promptly to an order for the disclosure of evidence in any AI liability claim.
- Consider taking out adequate insurance cover in respect of AI risks.
- Familiarise themselves with the requirements of the new MPR and ensure products and processes are compliant.
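By way of illustration of the “whichever is the greater” fine mechanic mentioned in the list above, the minimal Python sketch below compares the two headline caps. It assumes only the top-tier figures for the most serious infringements (lower tiers apply to other breaches) and is an illustration, not a compliance tool:

```python
def max_ai_act_fine_eur(worldwide_annual_turnover_eur: float) -> float:
    """Illustrative headline cap for the most serious AI Act breaches:
    EUR 35 million or 7% of total worldwide annual turnover,
    whichever is the greater."""
    return max(35_000_000.0, 0.07 * worldwide_annual_turnover_eur)

# EUR 200 million turnover: 7% is EUR 14 million, so the EUR 35 million
# figure applies.
print(max_ai_act_fine_eur(200_000_000))    # 35000000.0

# EUR 1 billion turnover: 7% is EUR 70 million, which exceeds
# EUR 35 million, so the turnover-based cap applies.
print(max_ai_act_fine_eur(1_000_000_000))  # 70000000.0
```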
Businesses offering AI systems in products will need to monitor the introduction of, and in due course identify and comply with, relevant sector AI regulations in the UK.
Next time
In the next article in this series, we discuss recent reforms and proposals in relation to product sustainability and ESG matters.
In the meantime, for further information or guidance on whether any of the reforms or proposed reforms apply to your business, contact Robin Adams using [email protected] or 0191 211 7949.
This article is not legal advice and we accept no responsibility for action taken in reliance on it.