Privacy and the EU’s Regulation on AI: What’s New and What’s Not?

22 Apr 2021
Client Alert

Republished in The Journal of Robotics, Artificial Intelligence & Law.

The draft EU Regulation on Artificial Intelligence (the “Regulation”) imposes a broad range of requirements on both the public and private sectors, which are summarized in this alert. Some of these requirements already apply (in a similar form) under the EU General Data Protection Regulation (GDPR). This raises the question: What is the impact of the Regulation from a privacy perspective, and which requirements already apply?

GDPR requirements on the use of AI

The GDPR applies to the processing of personal data in the context of an EU establishment, or when offering goods or services to, or monitoring the behavior of, individuals in the EU. The GDPR applies regardless of the means by which personal data are processed, and therefore applies when an AI system is used to process personal data (e.g., when using an AI system to filter applications for a job vacancy).

Under the GDPR, profiling is any form of automated processing of personal data used to evaluate certain personal aspects relating to a natural person, in particular to analyze or predict aspects concerning that person’s performance at work, economic situation, health, personal preferences, interests, reliability, behavior, location, or movements.

Automated decision-making is any decision, made without meaningful human involvement, that produces legal effects concerning a person or similarly significantly affects him or her. Automated decision-making may partially overlap with, or result from, profiling, but this is not always the case.

The GDPR imposes specific requirements on profiling and automated decision-making. The use of an AI system in relation to individuals often involves profiling, and sometimes automated decision-making. For example, when using an AI system to filter applications for a job vacancy, profiling is used to determine whether an applicant is a good fit for the vacancy. If the AI system retains only the applicants it considers a good fit, and the remaining applicants are not considered for the position, this constitutes automated decision-making with respect to the latter group: their applications were removed from consideration with no meaningful human involvement.

Legal requirements for users

The GDPR imposes legal requirements on any party that uses an AI system for profiling and/or automated decision-making, even if it acquired the system from a third party. These requirements include:

  • Fairness, which includes preventing individuals from being discriminated against;
  • Transparency towards individuals, including meaningful information about the logic involved in the AI system; and
  • The right to human intervention, enabling the individual to challenge the automated decision.

Contractual requirements for providers

If a company acquires an AI system from a vendor, the company is often not in a position to comply with the above-mentioned requirements on its own. For example, the company may not know whether the AI system was trained in a way that prevents discrimination, or what logic the AI system relies on. To comply with its obligations under the GDPR, the company therefore needs to rely on the vendor and will want to impose contractual obligations on the vendor to secure its cooperation and support.

What’s new under the Regulation?

Compared to the GDPR, the Regulation introduces new obligations for vendors of AI systems, prohibits certain very high-risk AI systems, and introduces more specific requirements for high-risk AI systems and users thereof. We highlight the key differences below.

Broad definition and extraterritorial scope

The Regulation defines an AI system as any software that, for a given set of human-defined objectives, can generate outputs such as content, predictions, recommendations, or decisions influencing the environments with which it interacts. Such software qualifies as an AI system if it is developed using one or more of the following approaches and techniques:

  • Machine learning approaches, including supervised, unsupervised, and reinforcement learning, using a wide variety of methods including deep learning;
  • Logic- and knowledge-based approaches, including knowledge representation, inductive (logic) programming, knowledge bases, inference and deductive engines, (symbolic) reasoning and expert systems; and/or
  • Statistical approaches, Bayesian estimation, and search and optimization methods.

The Regulation applies to vendors (“providers”) of AI systems and to users of AI systems. A provider is the party that develops an AI system and offers it on the market, whereas a “user” is any party that uses an AI system under its own authority. Specifically, the Regulation applies to:

  • EU and non-EU providers that place AI systems on the EU market;
  • EU users of AI systems; and
  • Providers and users located outside the EU, if the output produced by the AI system is used in the EU.

Prohibition on specific AI systems

Although the GDPR imposes stringent requirements on certain processing activities, it does not prohibit any activity outright. This is different under the Regulation, which prohibits a number of AI systems that are deemed too risky under any circumstances. Most of these prohibitions are limited to AI systems used by public authorities or law enforcement. The prohibited AI systems that are relevant to the private sector are those that cause physical or psychological harm to an individual by:

  • Deploying subliminal techniques to distort behavior; or
  • Exploiting vulnerabilities of a specific group of individuals due to their age or physical or mental disability.

High-risk AI systems

The majority of the requirements of the Regulation apply to high-risk AI systems only. The Regulation lists a number of AI systems that qualify as high-risk. The European Commission can add AI systems to this list, taking into account the criteria set out in the Regulation. The key high-risk AI systems for the private sector are AI systems used for:

  • “Real-time” and “after the fact” remote biometric identification of individuals (e.g., facial recognition);
  • Recruitment and selection, such as advertising job vacancies and screening or filtering applications, and evaluating candidates in the course of interviews or tests;
  • HR purposes, such as making decisions about promotions and terminations of work-related contractual relationships, for task allocation, and for monitoring and assessing performance and behavior; and
  • Evaluating creditworthiness of individuals or establishing a credit score.

General requirements for high-risk AI systems

The Regulation imposes the following general requirements on high-risk AI systems:

  • Establish a risk management system and maintain it continuously throughout the lifetime of the system to identify and analyze known and foreseeable risks, estimate and evaluate such risks, and adopt suitable risk management measures;
  • Use training, validation, and testing data that meet quality criteria, including relevance, representativeness, accuracy, and completeness, and monitor, detect, and correct bias, for which special categories of personal data may be processed on the basis of the substantial public interest exemption of Article 9(2)(g) GDPR;
  • Draw up technical documentation before the AI system is placed on the market that demonstrates that the AI system complies with the Regulation;
  • Create automatic logs to ensure a level of traceability of the system’s functioning;
  • Ensure transparency to enable the user of the system to interpret the AI system’s output and use it appropriately;
  • Enable human oversight of the AI system, aimed at minimizing the risks to health, safety, or fundamental rights, by an individual who fully understands the system’s capabilities and limitations and can decide not to use the system or its output in any particular situation; and
  • Ensure accuracy, robustness, and cybersecurity, so that the system is resilient against errors, faults, or inconsistencies, as well as unauthorized use or exploitation of vulnerabilities.

These general requirements for high-risk AI systems are more specific than the requirements under the GDPR. For example, the Regulation imposes specific requirements on training, validation, and testing data in order to prevent bias and discrimination, while the GDPR merely requires that any processing of personal data be fair (including not being discriminatory). Another example is the requirement of human oversight. Although the GDPR grants individuals the right to obtain human intervention in cases of automated decision-making (as set out above), this requirement applies only to the company that makes the automated decision, and not to the company that provided the (AI) system. This is different under the Regulation.

Specific requirements for providers of high-risk AI systems

The GDPR applies to the processing of personal data, not (directly) to the provider of the systems that enable this processing. This means that, under the GDPR, companies that use third-party systems to process personal data remain responsible for those systems. This is different under the Regulation, which imposes the following specific requirements on providers of high-risk AI systems:

  • Ensure compliance with the above-mentioned requirements for high-risk AI systems;
  • Implement a quality management system, including a strategy for regulatory compliance, and procedures for design, testing, validation, data management, and recordkeeping;
  • Address (suspected) non-conformity by immediately taking the necessary corrective actions to (i) bring the system into conformity, (ii) withdraw the system, or (iii) recall the system;
  • Notify the relevant authorities in the countries in which the AI system is made available about nonconformity of, or serious incidents pertaining to, the AI system, and about the corrective measures taken;
  • Conduct conformity assessments, which can be internal or external assessments, depending on the type of high-risk AI system in use;
  • Register in the AI database before offering a high-risk AI system on the market; and
  • Conduct post-market surveillance, by collecting and analyzing data about the performance of high-risk AI systems throughout the system’s lifetime.

Specific requirements for users of high-risk AI systems

The Regulation imposes fewer obligations on users of high-risk AI systems than on providers, and these obligations differ from the requirements that apply under the GDPR. The Regulation requires users of high-risk AI systems to:

  • Abide by the provider’s instructions on the use of the system, and take all technical and organizational measures indicated by the provider to address residual risks of using the high-risk AI system;
  • Ensure input data are relevant if the user exercises control over such data, for example, by ensuring that information on an applicant’s religion is not fed into a hiring AI system;
  • Monitor the operation of the system for anomalies or irregularities;
  • Maintain log files if the logs are under the control of the user; and
  • Notify the provider about serious incidents and malfunctioning, and, in such a case, suspend use of the AI system.

Specific requirements for certain AI systems

The Regulation imposes specific requirements on AI systems for remote biometric identification (such as facial recognition) in publicly accessible spaces, which are only allowed with prior authorization from the competent authority. In addition, the Regulation imposes transparency requirements on AI systems that interact with individuals, recognize emotions, or create or alter image, audio, or video content (e.g., “deepfakes”).

European Artificial Intelligence Board and national authorities

The Regulation provides for the establishment of a European Artificial Intelligence Board (the “Board”) tasked to issue guidance and opinions to ensure a consistent application of the Regulation, and to collect and share best practices and standards. In addition, each EU member state will designate a competent authority that is responsible for the implementation of the Regulation.

Penalties

The Regulation provides the following penalties for noncompliance in the private sector:

  • Up to EUR 30,000,000 or 6% of the total worldwide annual turnover (whichever is higher) for violations of the prohibitions on certain AI systems or of the data and data governance requirements;
  • Up to EUR 20,000,000 or 4% of the total worldwide annual turnover (whichever is higher) for noncompliance with any other obligations under the Regulation; and
  • Up to EUR 10,000,000 or 2% of the total worldwide annual turnover (whichever is higher) for providing incorrect, incomplete, or misleading information to competent authorities or conformity assessment entities.
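
Purely by way of illustration, the short Python sketch below shows how the “whichever is higher” mechanism works: the applicable cap is the greater of the fixed amount and the turnover-based percentage for the relevant tier. The function name and the example turnover figure are hypothetical; only the tier amounts come from the draft Regulation as summarized above.

    # Illustrative sketch only: tier amounts are from the draft Regulation as
    # summarized above; the function name and example turnover are hypothetical.
    def penalty_cap_eur(worldwide_annual_turnover_eur: float,
                        fixed_cap_eur: float,
                        turnover_fraction: float) -> float:
        """Return the higher of the fixed cap and the turnover-based cap."""
        return max(fixed_cap_eur, turnover_fraction * worldwide_annual_turnover_eur)

    # Example: a company with EUR 1 billion in worldwide annual turnover that
    # deploys a prohibited AI system (EUR 30 million or 6%, whichever is higher).
    print(penalty_cap_eur(1_000_000_000, 30_000_000, 0.06))  # 60000000.0

For a company with that turnover, the 6% figure (EUR 60 million) exceeds the EUR 30 million floor, so the turnover-based cap would apply.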

Implementation, transition period, and next steps

The Regulation provides for an implementation period of 24 months after entering into force, and a 12-month transition period for AI systems that are placed on the EU market before the application of the Regulation.

The Regulation is now under consideration by the European Parliament and the Council of the European Union, which will debate the proposal and can propose amendments. Together with the European Commission, these three legislative bodies will then work towards finalizing the Regulation, which is a time-consuming process.

We are Morrison Foerster — a global firm of exceptional credentials. Our clients include some of the largest financial institutions, investment banks, and Fortune 100, technology, and life sciences companies. Our lawyers are committed to achieving innovative and business-minded results for our clients, while preserving the differences that make us stronger.

Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations. Prior results do not guarantee a similar outcome.