AI Regulation in Europe
It has been a busy summer for followers of the various proposals to introduce a regulatory framework for the use of artificial intelligence in Europe. The EU is trying to resolve internal differences in its approach to regulation, while the proposals published by the UK overtly take a more light-touch, pro‑innovation approach.
In April 2021, the EU Commission published a proposal for an EU Artificial Intelligence Act in the form of an AI Regulation which would be immediately enforceable throughout the EU. The proposal sparked a lively discussion among EU Member States, stakeholders and political parties in the EU Parliament, generating several thousand amendment proposals. On the basis of a joint draft report issued in April 2022 by parliamentary committees examining the proposals, the EU Parliament is currently attempting to work out a compromise text. And the current Presidency of the EU Council has made separate new proposals to try to broker a compromise.
The proposed AI Regulation would apply to providers and users of AI systems regardless of their country of establishment as long as the AI system is available in the EU market or its output is used in the EU (see our detailed analysis of the proposal). The proposal includes:
EU Member States (as well as political parties in the EU Parliament) have argued for diverging approaches to the regulation of AI – which are covered by the latest two compromise proposals of the Presidency of the EU Council issued in September 2022. Key points of those proposals include:
The EU Parliament is expected to vote on its compromise text in November 2022. Final coordination between the Parliament, the Council and the Commission could start in the beginning of 2023.
In September 2022, the EU Commission further published its draft for a revision of the Product Liability Directive (PLD). The PLD imposes no-fault, strict civil liability on manufacturers for damage arising from defective products. A revision was necessary to cover new categories of products emerging from digital technologies, such as AI. The PLD stipulates specific circumstances under which a product will be presumed “defective” for the purpose of a claim for damages, including a presumption of a causal link where the product is found defective and the damage is of a kind typically consistent with that defect.
With regard to AI systems, the revision of the PLD aims to clarify that:
Not specific to AI, the revised PLD further clarifies that:
Alongside the revised PLD, the EU Commission also published its draft of an AI Liability Directive that, in contrast to the AI Act, will have to be transposed into national law by the Member States within two years. The proposed AI Liability Directive is intended to facilitate the enforcement of civil law compensation for damage caused by AI systems. It complements the PLD – so, for example, while the PLD imposes strict liability for defective products regardless of any “fault” of the producer or manufacturer, the AI Liability Directive concerns cases in which damage is caused by wrongful behaviour (e.g., breaches of privacy or safety, or discrimination arising from AI applications).
The proposed Directive is deeply interwoven with the AI Regulation. For example:
Meanwhile, outside the EU, the UK government has published an AI Regulation Policy Paper and AI Action Plan confirming that it intends to diverge from the EU’s regulatory regime. And, in June 2022, the UK made proposals on one key aspect of AI – the treatment of intellectual property rights. In both cases, the UK appears to be taking an approach that favours innovation over regulation.
The UK plans to introduce a new copyright and database right exception that will permit text and data mining (TDM) for any purpose. IP rights-holders will not be able to opt out of the exception, but will still have safeguards to protect their content – primarily, a requirement that content subject to TDM must be lawfully accessed. So rights-holders will be able to choose the platforms where they make their works available, including charging for access. They will also be able to take appropriate steps to ensure the integrity and security of their systems.
It is intended that the exception will speed up the TDM process, which is often a precursor to the development of AI, and will help to make the UK more competitive as a location for AI developers. Previously, the TDM exception only applied to non-commercial purposes.
On the other hand, the UK government axed other proposals. The UK has no plans to change the law regarding IP in computer-generated works. This means that works which do not have a human author will retain their copyright protection – a unique position in Europe. The government will keep the law under review and could amend, replace or remove protection in the future if the evidence supports it.
There will also be no change to UK patent law protection for AI-devised inventions. In response to government consultation, most respondents agreed that changes to the law on inventorship should be harmonised internationally and not implemented piecemeal. The counter-view is that the patentability rules ought to change to take account of the increasing contribution of AI in the R&D process and that, when AI technology reaches a stage where it can genuinely “invent”, any inventions devised by AI should be patentable. Although there will be no imminent policy change, the UK Supreme Court will consider a test case on the matter within the next two years (see our previous reporting on the multi-jurisdiction test here and also in the United States).
Separately, the UK government AI Regulation Policy Paper and AI Action Plan confirm that the UK will aim to promote innovation first and foremost. Initially, the UK will not establish an AI-specific body or regulation, or even seek to define “AI”. Rather, this responsibility will be delegated to industry and already established regulators (e.g., the Information Commissioner’s Office). This is designed to cater to the different challenges that different sectors face. However, a coherent approach will be reinforced through a set of cross-sectoral principles. As previously indicated in the UK’s AI strategy, the principles will be non‑statutory in order to maintain flexibility.
The paper takes the position that responsibility for such regimes must always lie with an identifiable person. A light-touch approach will be encouraged – such as guidance and voluntary measures. Prominent issues that are driving centralised AI regulation in the EU, such as safety, security, transparency and fairness, will instead be interpreted by individual regulators in the context of their own industries. The policy paper identifies that bodies and groups such as the Digital Regulation Cooperation Forum will have to play a key role in enabling a more coordinated regime. Further details will be announced in a forthcoming White Paper.
The UK’s Competition and Markets Authority (CMA) sounded a more cautious note than the UK government itself. It noted that AI has the potential to create business opportunities and better, personalised services. But AI can also allow the strongest market-players to increase their market strength – so clear regulatory powers will be needed to prevent abuse.
Meanwhile, the Equality and Human Rights Commission has published guidance on the use of AI in public services, in line with its three-year strategic plan, which makes AI a key focus. Prompted by the risks of discrimination when using AI for decision-making, the guide contains advice on compliance with equality legislation (the Equality Act 2010) and a checklist for public bodies when utilising AI.
Susan Bischoff, a research assistant in the Technology Transactions Group in our Berlin office, helped with the preparation of this article.