To help organizations stay on top of developments in European digital compliance, Morrison Foerster’s European Digital Regulatory Compliance team reports on key digital regulatory and compliance developments from the third quarter of 2025.
This report follows our previous updates on European digital regulation and compliance developments for 2023 (Q1, Q2, Q3, Q4), 2024 (Q1, Q2, Q3, Q4), and 2025 (Q1, Q2).
In this issue, we highlight…
1. AI Act: Code of Practice for General-Purpose AI Models Finalized
2. AI Act: Guidance on Incident Reporting in Practice
3. Digital Fairness Act: Commission Consulted on Legislative Plans
4. Data Act: Full Application as of September 2025
5. Digital Omnibus: Plans to Simplify EU Data Legislation
6. CSAM Regulation: Germany Refuses to Support Compromise Proposal
7. Online Safety Act: Industry Implementation, Enforcement and Next Steps
8. Data (Use and Access) Act 2025: Commencement Regulations
9. Human Rights and the Regulation of AI: UK Government Inquiry
10. Legal Challenge to the Online Safety Act Fails: Categorization Must Go On
11. Navigating the Ledger: ICO’s Draft Guidance on Distributed Ledger Technologies
12. Deepfakes: Germany’s Second Attempt at Criminalizing Non-Consensual AI-Generated Media
Following a 10-month, multi-stakeholder drafting process (see our Q4 2024 and Q1 2025 updates), the European Commission (the “Commission”) and the AI Board confirmed the final General-Purpose AI (“GPAI”) Code of Practice (the “Code” or “Code of Practice”) on 10th July 2025. The Code is a voluntary, Commission-endorsed set of guidelines that helps GPAI model providers comply with Articles 53 and 55 of the AI Act, especially during the period between the entry into force of provider obligations in August 2025 and the adoption of relevant standards.
The final Code of Practice is organized into three chapters: Transparency, Copyright, and Safety and Security.
While signing the Code of Practice is not mandatory, the Commission and the AI Board recognize it as an adequate voluntary instrument for demonstrating compliance with the GPAI-related obligations under the AI Act. Compared to earlier drafts, the finalized Code of Practice notably includes binding “must” requirements in place of “best efforts” language, a single copyright policy for all models, a ten-year record retention rule, and an obligation for GPAI providers to define their own safety objectives rather than follow a fixed standard.
Many leading GPAI providers have signed the Code of Practice. The Code of Practice took effect on 2nd August 2025, but the Commission has indicated a practical grace period for signatories until 1st August 2026. During this period, signatories that do not fully implement all commitments immediately after signing will not be treated by the AI Office as having broken their commitments under the Code or as being in violation of the AI Act.
The European Commission (the “Commission”) continues its publication of guidance on the EU AI Act (the “AI Act”), with its latest draft guidance on incident reporting under Article 73 of the AI Act (the “Draft Guidance”). The Commission launched a public consultation on this guidance on 26th September 2025, and also published a proposed template to report serious incidents.
Article 73 of the AI Act requires providers of high-risk AI systems to report certain incidents to the competent regulators. The Draft Guidance provides an operational roadmap for organizations, clarifying what this reporting obligation means in practice. Key points include:
The EU’s approach seeks alignment with international initiatives, such as the OECD’s AI Incidents: Common Reporting Framework, signaling an intent to harmonize taxonomies and minimize reporting burdens. The consultation will remain open until 7th November 2025, and stakeholder feedback will inform the Commission’s final guidance.
Following up on its “fitness check” on EU consumer law (see our Q3 2024 update), the European Commission (the “Commission”) took the next steps toward codifying its findings into a new “Digital Fairness Act” (“DFA”) by launching a call for evidence and public consultation on 17th July 2025.
Following its “fitness check” exercise, the Commission concluded that existing frameworks, including the Unfair Commercial Practices Directive, Consumer Rights Directive, and Unfair Contract Terms Directive, do not sufficiently address manipulative online practices. In response, the Commission is developing the DFA, aimed at strengthening consumer protection across the digital marketplace.
The Commission’s consultation on this plan aimed to gather feedback on practices such as dark patterns, addictive design, aggressive personalization, and difficult subscription or cancellation processes. It also sought stakeholder input on misleading commercial practices by social media influencers. While Member States such as France, Italy, and the Netherlands have already taken national enforcement actions against deceptive design interfaces, the Commission’s aim is to harmonize these efforts through a single EU-wide framework.
The consultation period ended on 24th October 2025. The Commission will now analyze the more than 4,300 submissions and publish a summary report within eight weeks. A legislative proposal for the DFA is expected by Q3 2026, though questions remain about whether the DFA will take the form of a directive or a regulation.
For businesses offering consumer-facing digital services—such as e-commerce platforms, social networks, or gaming providers—the DFA could reshape compliance obligations in the EU, marking a major step in the EU’s broader push toward stronger and more harmonized consumer protection.
The EU Data Act (Regulation (EU) 2023/2854) (the “Data Act”) became fully applicable on 12th September 2025, activating most obligations across the EU. Adopted in December 2023 and in force since January 2024, the Data Act establishes a horizontal framework governing access to, and use of data generated by, connected products and related services, as well as provisions facilitating cloud switching.
The Data Act introduces a user-centric data-sharing regime for IoT products and related services, underpinned by fairness and transparency principles. It prohibits, and renders void, certain contractual terms deemed unfair in data arrangements, and it facilitates switching between data-processing services through statutory switching rights, interoperability requirements, and a gradual phase-out of switching charges.
The regulation applies to manufacturers of connected products and providers of related services, to data holders, users, and data recipients in B2B data-sharing contexts, and to providers and users of so-called data-processing services.
Key obligations that now apply:
Enforcement and implementation:
The Data Act is enforced at the Member State level: each Member State must designate competent authorities and establish penalty regimes that are “effective, proportionate, and dissuasive.” In parallel, the European Commission’s model contractual terms for data-sharing agreements and standard contractual clauses for data-processing services, first published in draft form in April 2025 (see our Q1 2025 update), remain under consultation.
While most provisions now apply, access-by-design requirements for new connected products and services take effect from 12th September 2026. Reduced switching fees may be charged during the transition period but will be prohibited after 12th January 2027. Importantly, early-termination fees under fixed-term contracts fall outside this ban. Finally, from 12th September 2027, the unfair-terms rules will extend to pre-existing data-sharing agreements concluded before 12th September 2025.
On 16th September 2025, the European Commission (the “Commission”) published a call for evidence regarding its so-called “Digital Omnibus” as part of its “Digital Package on Simplification.” The aim of the initiative is to simplify and better harmonize the numerous EU regulations in the digital sector. This should reduce bureaucratic burdens, create legal clarity, and lower compliance costs for companies.
The Commission has launched an ambitious simplification agenda for EU legislation, as announced in its Communication on a Simpler and Faster Europe. In this context, it concluded that there are certain policy areas where the EU’s regulatory objectives can be achieved at a lower administrative cost for businesses, administrations, and citizens.
On that basis, the Commission suggests the following measures:
With the Digital Omnibus, the Commission aims to improve the legal certainty, reliability, and cost-effectiveness of digital regulations. It affects all companies that process personal data, as well as those that fall within the scope of the EU AI Act.
The Commission announced that it will publish a legislative draft for the Digital Omnibus in the fourth quarter of 2025.
In May 2022, the European Commission adopted its proposal for a draft regulation governing how online services must handle child sexual abuse materials and attempts at the online solicitation of children (“CSAM Regulation”). Since then, discussions in the EU Council have been deadlocked, particularly over whether, and to what extent, the new law should allow authorities to impose proactive detection obligations on online services. The current Danish Council Presidency aimed to resolve this deadlock with its July 2025 compromise proposal.
Germany has now announced that it will not support the Danish Presidency’s compromise proposal for the proposed EU Child Sexual Abuse Regulation. During a Bundestag Digital Committee briefing on 10th September 2025, the Federal Interior Ministry stated that the Danish proposal “cannot be supported 100%,” reiterating Germany’s firm opposition to any measure that would weaken or circumvent end-to-end encryption.
Without Germany’s backing, the majority needed to adopt a Council position on the CSAM Regulation remains out of reach. Consequently, the proposal was once again not brought to a vote in the Justice and Home Affairs Council on 14th October 2025, further prolonging uncertainty for messaging, hosting, and app store providers that would be subject to detection, reporting, and takedown obligations under the draft regulation.
With that, the dossier remains politically and technically contentious, particularly over the prospect of client-side scanning and its implications for privacy and encryption security.
Although the European Parliament adopted its position on the CSAM Regulation almost two years ago, Trilogue negotiations cannot begin until the Council agrees on a general approach. The continuing deadlock increases pressure on the European Commission, which first tabled the proposal in May 2022.
Meanwhile, the interim regime, which temporarily allows providers of number-independent interpersonal communication services to voluntarily detect and report CSAM without breaching the ePrivacy Directive’s confidentiality rules, remains in effect but is set to expire on 3rd April 2026. Unless extended again, the expiry could create a regulatory gap, reinforcing the urgency for political agreement on a permanent framework.
On 9th September 2025, the UK’s Office of Communications (“Ofcom”) published its quarterly industry bulletin on the UK Online Safety Act (the “OSA”). In its capacity as regulator under the OSA, Ofcom looks back at the six weeks since new online child protection rules came into force, reflecting on the resulting industry-wide “sweeping change,” reiterating what in-scope companies need to know about OSA compliance, and giving an indication of what is in Ofcom’s OSA pipeline.
Companies of all sizes across a range of industries and sectors have announced, or already deployed, age checks across their services. This includes social media, dating, messaging, gaming, and adult content service providers.
Age checks are part of a number of key measures that in-scope companies should employ to ensure that they are “providing age-appropriate experiences” and complying with Ofcom’s Codes of Practice (and related guidance).
Following our last update, the UK government has enacted three sets of regulations that bring into force several sections of the Data (Use and Access) Act 2025 (the “DUAA”). While the DUAA is predominantly focused on revising parts of the UK’s data protection framework, it also triggers changes to the (still relatively new) Online Safety Act (“OSA”).
The first Regulations were made on 21st July 2025, and brought Part 1 of the DUAA into force on 20th August 2025, establishing the legal basis for Smart Data schemes (which are frameworks that let customers and businesses securely access and share data held about them by service providers). These regulations gave the UK government power to create sector-specific schemes (for example, in finance, energy, or telecoms) and develop data-sharing rules that could require companies to make customer or business data available through approved channels in the future.
The DUAA (Commencement No. 2) Regulations 2025 brought section 124 of the DUAA into force on 30th September 2025. Section 124 amends the OSA to require certain regulated online service providers to retain information related to child death investigations, and grants Ofcom (the UK’s government-approved regulator for the OSA) additional powers to support a coroner’s investigation into the death of a child.
The DUAA (Commencement No. 3 and Transitional and Saving Provisions) Regulations 2025 brought certain further sections of the DUAA into force. Provisions on legal professional privilege and national security took effect immediately on 5th September 2025, introducing new exemptions from certain data-subject rights to protect confidential communications and sensitive national-security information. Provisions on joint processing of personal data by intelligence services and relevant public authorities will come into force on 17th November 2025, setting out the conditions for shared data processing and aligning related parts of the legislation accordingly.
Notably, the UK government published its “Plans for Commencement” guidance on 25th July 2025, which outlines the staged implementation of the DUAA, and the Information Commissioner's Office (“ICO”) published a summary of key changes under the DUAA that will affect organizations on 19th June 2025, such as those relating to automated decision-making, children’s data, and international transfers.
The government confirmed that further commencement regulations will be issued approximately six months after Royal Assent (i.e., around December 2025), bringing into force the main changes to data protection legislation under the DUAA.
Provisions with a longer lead-in time or with technology requirements are expected to commence in early 2026; these include measures establishing processes for handling data subjects’ complaints, which rely on appropriate technology being in place.
There have also been additional consultations that will result in associated guidance, including on organizations’ formal data protection complaints-handling procedures (which must be in place by June 2026, per the DUAA) and on “recognised legitimate interests,” the new lawful basis for processing under the DUAA.
On 25th July 2025, the UK Parliament’s Joint Committee on Human Rights (the “Committee”), which is tasked with evaluating whether proposed UK government bills are compatible with human rights, launched an inquiry entitled “Human Rights and the Regulation of Artificial Intelligence.” The inquiry seeks to “understand the risks and rewards that AI poses for human rights and work out if greater protections are needed,” given the transformative potential of AI.
A key part of the Committee’s role is to conduct thematic inquiries into topical sectors that have the potential to significantly impact human rights and are also likely to be the subject of future legislation.
The inquiry will explore three main areas:
To facilitate a comprehensive analysis, the Committee called for written evidence; the deadline for submissions was 5th September 2025.
The Committee will publish a report on its findings in the coming months (the usual timeframe between the opening of an inquiry and publication is three to six months) and will set a deadline for the government’s response (typically ranging from two to four months after publication). Although the Committee’s recommendations are not legally binding, the report is likely to shape future debates on whether new legislation and regulations need to be introduced.
We expect that the report will address the need for different regulatory approaches depending on the type of AI technology, and take a view on how legislation can keep pace with development. The European landscape on this topic will also be taken into account insofar as it could inform the report’s recommendations. Significantly, the report will explore the public’s avenues of redress for breaches of human rights arising from the use of AI, including salient issues such as who should be held accountable and assigned liability in these instances.
The inquiry will question whether the private sector should be held to public sector human rights standards. Accordingly, corporate accountability may be extended beyond technical developers to include vendors and clients who utilize AI tools (such as for analytics and automation), resulting in stricter compliance requirements under the UK GDPR and new AI-specific laws. Organizations creating, using, or providing AI technologies may wish to monitor this inquiry with a view to anticipating and assessing potential implications in respect of AI governance, transparency, liability, and human-rights risk.
Those wishing to ready themselves to adapt swiftly to any reforms that arise from the ongoing inquiry and the wider evolution of the UK’s AI regulatory framework may wish to:
Earlier this summer, the Wikimedia Foundation (“WF”), the owner of Wikipedia, brought a legal challenge concerning the implementation of the Online Safety Act 2023 (“OSA”) before the High Court at the UK’s Royal Courts of Justice. On 22nd and 23rd July 2025, the court heard arguments that the OSA (Category 1, Category 2A, and Category 2B Threshold Conditions) Regulations 2025 (the “Categorization Regulations”) should be set aside on the grounds that they were irrational and unlawful. However, the court rejected the challenge for lack of merit. Owing to the legal challenge, Ofcom has delayed publishing its register of categorized services, which it had initially estimated would be announced in summer 2025.
WF was granted permission for a judicial review of the Categorization Regulations, which set out the thresholds for categorized services. Ofcom advised the Secretary of State (“SoS”) on the thresholds for categorization on 29th February 2024 (its advice, together with the corresponding research, is publicly available). The SoS then implemented Ofcom’s recommendations in the Categorization Regulations, which came into force on 27th February 2025. WF challenged the Categorization Regulations, anticipating that Wikipedia would be classed as a Category 1 service.
Category 1 services are user-to-user services that either: (i) use a content recommender system and have more than 34 million monthly active UK users; or (ii) use a content recommender system, allow users to forward or reshare user-generated content, and have more than 7 million monthly active UK users.
Category 1 services are subject to the most onerous reporting and mitigation obligations under the OSA. They also have additional obligations: the adult user empowerment duties, duties to protect users from fraudulent advertising, and duties to protect democratic and journalistic content.
The majority of WF’s arguments were grounded in the fact that WF is a non-profit organization that could not sustain compliance with the obligations applicable to categorized services. However, Mr Justice Johnson disagreed, stating that there is nothing in the OSA to show that its focus is limited to a particular type of company; indeed, he considered that the OSA is intentionally broad so as to capture and regulate a wide range of services. The judge also noted that there is nothing in the OSA requiring the SoS to take into account differing functionality between services in different sectors when making the Categorization Regulations. This suggests there is little hope for organizations attempting to de-scope themselves as “outliers.”
Although WF and its co-claimants argued that the Categorization Regulations were incompatible with the European Convention on Human Rights, this argument failed because the claim concerned a potential, rather than an actual, breach of rights. Whether there would be a violation of Convention rights would be fact-dependent and also shaped by the relevant (as yet unpublished) code of practice.
The court considered WF’s claim premature; accordingly, if Wikipedia is in fact categorized, the outcome of this challenge does not preclude WF from challenging that categorization decision.
As part of its categorization strategy, Ofcom has been gathering information throughout the year on services that it believes should be categorized. Once Ofcom notifies a service that it is considered a categorized service and publishes the categorization register (which is expected imminently), the service may wish to weigh the merits of a judicial review of the decision against waiting to see the outcome of any other challenges.
Ofcom will engage in additional consultations about the duties applicable to the categorized services between October 2025 and March 2026, following the publication of the categorization register. Ofcom has already published its position on transparency reporting.
On 28th August 2025, the Information Commissioner’s Office (“ICO”) launched a public consultation on its draft guidance regarding distributed ledger technologies (“DLTs”) (the “Draft Guidance”). The Draft Guidance marks another step in the ICO’s broader effort to help organizations embed data protection by design into innovative digital systems.
DLTs such as blockchain promise greater resilience, but their decentralized and immutable design challenges traditional interpretations of data protection law. The Draft Guidance offers practical advice on UK GDPR compliance, including when the UK GDPR applies, how to define controller and processor roles, and how to reconcile user rights (such as deletion) with technologies that resist alteration.
Authorities in France (CNIL), Spain (AEPD), and Germany (BSI) have already issued similar guidance, emphasizing risk-minimization considerations such as off-chain storage, while the European Data Protection Board continues to finalize harmonized guidance. The ICO’s draft positions the UK squarely within this broader European push to balance innovation with privacy-by-design.
Key features of the ICO guidance include the following:
The ICO consultation will close on 7th November 2025, and the ICO’s review of the consultation responses will help to inform its finalization of the Draft Guidance.
AI-generated and manipulated media (“deepfakes”) are becoming increasingly widespread. As access to AI tools grows, so do violations of personality rights, including cases of digital voyeurism and the creation of violent or pornographic content without consent. Italy has taken an early step within the EU, introducing a stand-alone criminal offense that penalizes the unlawful (i.e., non-consensual) dissemination of misleading deepfakes.
Germany may now follow suit. As reported in our Q3 2024 update, the German Federal Council (Bundesrat) had previously adopted a draft bill introducing a new Section 201b into the Criminal Code (StGB) to penalize undisclosed, seemingly “authentic” AI manipulations. The proposal lapsed, however, when early federal elections were called.
On 11th July 2025, the German Federal Council resubmitted a substantially identical draft to the (newly elected) Federal Parliament (Bundestag). While the legal content remains unchanged, the federal government’s stance on the proposal has clearly shifted. The prior government had questioned the need for a new offense and argued that no genuine enforcement gaps existed. The current government now acknowledges the serious risks posed by deepfakes and recognizes the need to close gaps, particularly concerning image-based sexualized abuse.
The Federal Parliament will now deliberate on the bill. With Italy’s new law as a benchmark and the federal government’s revised tone, adoption appears more plausible than last year. Still, concerns over vagueness and potential over-criminalization persist, and the final wording may yet be refined as the legislative debate continues.
We are grateful to the following member(s) of MoFo’s European Digital Regulatory Compliance team for their contributions: Diya Gupta and Elena Pourghadiri, London office trainee solicitors; Antonia Albes, Berlin trainee solicitor; and Darius Schulz, Berlin office research assistant.