Germany has enacted laws and/or issued guidance on Artificial Intelligence (AI). Companies subject to German laws and regulations should be familiar with all relevant AI-related laws, regulations, and guidance, including those listed below.
Laws and Regulations
General
The AI Act establishes harmonized rules for placing on the market, putting into service, and using artificial intelligence systems (“AI Systems”) in the European Union. It prohibits certain AI practices and establishes specific requirements for high-risk AI Systems and operators of such systems, rules on market monitoring, market surveillance governance and enforcement, and measures to support innovation, with a particular focus on SMEs, including start-ups.
Principles, Studies, & Recommendations
The German Data Protection Conference (DSK) issued guidance on generative AI technologies, specifically focused on Large Language Models (LLMs). The Guidance aims to assist organizations in the deployment and development of AI systems, ensuring compliance with GDPR.
The Federal Office for Information Security (BSI) issued a white paper, aimed at developers, providers, and operators of AI systems, that provides basic information on bias in AI and offers an overview of possible measures and techniques for identifying and reducing bias in AI systems.
The Hamburg Commissioner for Data Protection and Freedom of Information introduced a discussion paper on LLMs and personal data. The Discussion Paper provides a legal analysis of the data protection implications of LLMs and offers practical guidance on their deployment and use, including the DPA’s determination that LLMs themselves do not store personal data.
Guidance providing an initial overview of selected fundamental questions about AI and data protection.
A publication by the Bavarian State Office for Data Protection Supervision that contains information on measures for the data protection-compliant use of AI. The flyer outlines eight key areas of data protection and AI, including “The Rights of Data Subjects”, “Data Protection Impact Assessment”, and “Using AI in a Legally Compliant Manner”, among others.
A checklist by the Bavarian State Office for Data Protection Supervision that outlines compliance requirements for the development and use of AI.
A paper by the State Commissioner for Data Protection and Freedom of Information Baden-Württemberg (LfDI) to help controllers in Baden-Württemberg familiarize themselves with the legal bases that data protection law provides for the use of artificial intelligence systems.
Germany’s Federal Office for Information Security and France’s Cybersecurity Agency jointly published a report with recommendations for the secure use of AI coding assistants. The Report describes the opportunities of this new technology, addresses possible risks, and outlines specific mitigation measures for each risk.
The Discussion Paper outlines the legal bases on which personal data can be processed for AI model training and use. This Discussion Paper is an update to the earlier version 1.0 released in November 2023. The Discussion Paper addresses, among other things: (i) the definition of personal data under the GDPR and its application to AI systems; (ii) processing relevant to data protection law in the context of AI, including phases such as collection of training data, processing data for AI training, deploying AI applications, and use of AI applications and AI results; (iii) controller responsibilities under data protection law; and (iv) specific legal bases applicable to public and private entities.
Guidance on compliance with the EU AI Act’s AI literacy provisions and its prohibitions on certain AI practices.
The Conference of the Independent Data Protection Authorities of Germany (DSK) published guidelines for manufacturers and developers of AI systems to ensure their products comply with data protection laws throughout the AI life cycle. The guidelines include key requirements for data protection measures within AI systems and recommendations for technical and organizational measures.
Joint paper released by France’s National Agency for Information Systems Security and Germany’s Federal Office for Information Security, examining design principles and typical risks associated with large language models (LLMs) from a zero-trust perspective.
