European Digital Compliance: Key Digital Regulation & Compliance Developments (November 2025)

04 Nov 2025
Client Alert

To help organizations stay on top of developments in European digital compliance, Morrison Foerster’s European Digital Regulatory Compliance team reports on the main digital regulatory and compliance developments from the third quarter of 2025.

This report follows our previous updates on European digital regulation and compliance developments for 2023 (Q1, Q2, Q3, Q4), 2024 (Q1, Q2, Q3, Q4), and 2025 (Q1, Q2).

In this issue, we highlight…

EU

1. AI Act: Code of Practice for General-Purpose AI Models Finalized

2. AI Act: Guidance on Incident Reporting in Practice

3. Digital Fairness Act: Commission Consulted on Legislative Plans

4. Data Act: Full Application as of September 2025

5. Digital Omnibus: Plans to Simplify EU Data Legislation

6. CSAM Regulation: Germany Refuses to Support Compromise Proposal

UK

7. Online Safety Act: Industry Implementation, Enforcement and Next Steps

8. Data Act 2025: Commencement Regulations

9. Human Rights and the Regulation of AI: UK Government Inquiry

10. Legal Challenge to the Online Safety Act Fails: Categorization Must Go On

11. Navigating the Ledger: ICO’s Draft Guidance on Distributed Ledger Technologies

Germany

12. Deepfakes: Germany’s Second Attempt at Criminalizing Non-Consensual AI-Generated Media

EU

1. AI Act: Code of Practice for General-Purpose AI Models Finalized 

Following a 10-month, multi-stakeholder drafting process (see our Q4 2024 and Q1 2025 updates), the European Commission (the “Commission”) and the AI Board confirmed the final General-Purpose AI (“GPAI”) Code of Practice (the “Code” or “Code of Practice”) on 10th July 2025. The Code is a voluntary, Commission-endorsed set of guidelines that helps GPAI model providers comply with Articles 53 and 55 of the AI Act, especially during the period between the entry into force of provider obligations in August 2025 and the adoption of relevant standards.

What’s new?

The final Code of Practice is organized into three chapters:

  1. Transparency: Defines how providers should document and disclose information in compliance with their transparency obligations under the AI Act and provides a Model Documentation Form to support signatories’ compliance.
  2. Copyright: Sets out what a provider’s copyright policy should cover, including technical safeguards to ensure lawful access, points of contact for rightsholders, and mechanisms to honor text-and-data-mining (“TDM”) opt-outs.
  3. Safety and Security: Suggests that providers of GPAI models with systemic risks adopt a safety and security framework, including conducting end-to-end risk assessments, updating model reports after release, and commissioning external evaluations.

While signing the Code of Practice is not mandatory, the Commission and the AI Board recognize it as an adequate voluntary instrument to demonstrate compliance with the GPAI-related obligations under the AI Act. Compared to draft versions, the finalized Code of Practice notably includes binding “must” requirements instead of “best efforts” language, a single copyright policy for all models, a ten-year record retention rule, and an obligation for GPAI providers to define their own safety objectives instead of following a fixed standard.

What’s next?

Many leading GPAI providers have signed the Code of Practice. The Code of Practice took effect on 2nd August 2025, but the Commission has indicated a practical grace period for signatories until 1st August 2026. During this period, if signatories do not fully implement all commitments immediately after signing, the AI Office will not consider them to have broken their commitments under the Code or to be in violation of the AI Act.

Back to Top

2. AI Act: Guidance on Incident Reporting in Practice

The European Commission (the “Commission”) continues its publication of guidance on the EU AI Act (the “AI Act”), with its latest draft guidance on incident reporting under Article 73 of the AI Act (the “Draft Guidance”). The Commission launched a public consultation on this guidance on 26th September 2025, and also published a proposed template to report serious incidents.

What’s new?

Article 73 of the AI Act requires providers of high-risk AI systems to report certain incidents to the competent regulators. The Draft Guidance provides an operational roadmap for organizations, clarifying what this reporting obligation means in practice. Key points include:

  • Defined thresholds: Reporting obligations apply to “serious incidents” and “widespread infringements” under the AI Act, and the Draft Guidance explores these definitions in further detail. For instance, it provides illustrative examples of each of the four categories of “serious incidents”: (1) death or harm to health; (2) disruption of critical infrastructure; (3) infringement of EU fundamental rights; and (4) serious property or environmental damage. It also describes how these definitions relate to incident reporting triggers under other legislation such as NIS 2.
  • Clearer timelines: The Draft Guidance clarifies that cooperation obligations with competent authorities mean, in practice, communicating and reacting within 24 hours. Similarly, it explains that a deployer’s obligation to “immediately” inform providers of serious incidents means doing so within 24 hours, and providers should be considered unreachable if they do not respond within 24 hours.
  • Overlap with other incident reporting: For entities already reporting incidents under other EU legislation (such as NIS 2, DORA, or CER), AI Act reporting is limited to fundamental rights impacts, thereby streamlining compliance obligations and preventing double reporting. The Commission also plans to provide further guidance on how AI Act incident reporting interacts with GDPR and the CRA.

What’s next?

The EU’s approach seeks alignment with international initiatives, such as the OECD’s AI Incidents: Common Reporting Framework, signaling an intent to harmonize taxonomies and minimize reporting burdens. The consultation will remain open until 7th November 2025, and stakeholder feedback will inform the Commission’s final guidance.

Back to Top

3. Digital Fairness Act: Commission Consulted on Legislative Plans

Following up on its “fitness check” on EU consumer law (see our Q3 2024 update), the European Commission (the “Commission”) took the next steps toward codifying its findings into a new “Digital Fairness Act” (“DFA”) by launching a call for evidence and public consultation on 17th July 2025.

What’s new?

Following its “fitness check” exercise, the Commission concluded that existing frameworks, including the Unfair Commercial Practices Directive, Consumer Rights Directive, and Unfair Contract Terms Directive, do not sufficiently address manipulative online practices. In response, the Commission is developing the DFA, aimed at strengthening consumer protection across the digital marketplace.

The Commission’s consultation on this plan aimed to gather feedback on practices such as dark patterns, addictive design, aggressive personalization, and difficult subscription or cancellation processes. It also sought stakeholder input on misleading commercial practices by social media influencers. While Member States such as France, Italy, and the Netherlands have already taken national enforcement actions against deceptive design interfaces, the Commission’s aim is to harmonize these efforts through a single EU-wide framework.

What’s next?

The consultation period ended on 24th October 2025. The Commission will now analyze the more than 4,300 submissions and publish a summary report within eight weeks. A legislative proposal for the DFA is expected by Q3 2026, though questions remain about whether the DFA will take the form of a directive or a regulation.

For businesses offering consumer-facing digital services—such as e-commerce platforms, social networks, or gaming providers—the DFA could reshape compliance obligations in the EU, marking a major step in the EU’s broader push toward stronger and more harmonized consumer protection.

Back to Top

4. Data Act: Full Application as of September 2025

The EU Data Act (Regulation (EU) 2023/2854) (the “Data Act”) became fully applicable on 12th September 2025, activating most obligations across the EU. Adopted in December 2023 and in force since January 2024, the Data Act establishes a horizontal framework governing access to, and use of data generated by, connected products and related services, as well as provisions facilitating cloud switching.

What’s new?

The Data Act introduces a user-centric data-sharing regime for IoT products and related services, underpinned by fairness and transparency principles. It renders certain unfair contractual terms in data arrangements void and facilitates switching between data-processing services through statutory switching rights, interoperability requirements, and a gradual phase-out of switching charges.

The regulation applies to manufacturers and service providers of connected products, data holders, users, and data recipients in B2B data-sharing contexts, and providers and users of so-called data-processing services.

Key obligations that now apply:

  • User access and sharing: Users may access, or allow third parties to access, data generated by their connected products or related services, subject to safeguards for intellectual property, trade secrets, and personal data.
  • Unfair contract terms: Unilaterally imposed unfair terms in B2B data contracts are unenforceable. The Data Act sets out “black” and “grey” lists of such terms. The rules apply to new contracts concluded after 12th September 2025, and, subject to a two-year transitional period, to certain agreements concluded before that date.
  • Switching of data-processing services: Providers must enable customers to switch to alternative services or on-premises solutions within defined timeframes and provide reasonable assistance. All switching charges must be eliminated by January 2027. Moreover, providers must meet certain interoperability requirements and, under certain circumstances, must achieve functional equivalence in the new environment.
  • Mandatory data sharing with public sector bodies: In cases of exceptional need, for example, during natural disasters, data holders may be required to share data with public authorities where necessary to address a public emergency.

Enforcement and implementation:

The Data Act is enforced at the Member State level, meaning that each Member State must designate competent authorities and establish penalty regimes that are “effective, proportionate, and dissuasive.” In parallel, the European Commission’s model contractual terms for data-sharing agreements and standard contractual clauses for data-processing services, first published in draft form in April 2025 (see our Q1 update), remain under consultation.

What’s next?

While most provisions now apply, access-by-design requirements for new connected products and services take effect from 12th September 2026. Reduced switching fees may be charged during the transition period but will be prohibited after 12th January 2027. Importantly, early-termination fees under fixed-term contracts fall outside this ban. Finally, from 12th September 2027, the unfair-terms rules will extend to pre-existing data‑sharing agreements concluded before 12th September 2025. 

Back to Top

5. Digital Omnibus: Plans to Simplify EU Data Legislation

On 16th September 2025, the European Commission (the “Commission”) published a call for evidence regarding its so-called “Digital Omnibus” as part of its “Digital Package on Simplification.” The aim of the initiative is to simplify and better harmonize the numerous EU regulations in the digital sector. This should reduce bureaucratic burdens, create legal clarity, and lower compliance costs for companies.

What’s new?

The Commission has launched an ambitious simplification agenda for EU legislation, as announced in its Communication on a Simpler and Faster Europe. In this context, it concluded that there are certain policy areas where the EU’s regulatory objectives can be achieved at a lower administrative cost for businesses, administrations, and citizens.

On that basis, the Commission suggests the following measures:

  • Simplification and greater harmonization of the existing EU data law framework (including the Data Governance Act, the Regulation on the Free Flow of Non-Personal Data, and the Open Data Directive). The aim is to reduce fragmentation and facilitate the use and exchange of data for businesses.
  • Modernization of the ePrivacy Directive regarding cookies and other tracking technologies to reduce user consent fatigue while strengthening data protection rights and promoting lawful data use by companies.
  • Simplification and harmonization of notification and security requirements in horizontal and sector‑specific regulations to reduce administrative burden.
  • Practice-oriented implementation of the AI legal framework to ensure predictable application and better support, especially for SMEs.
  • Standardization and better alignment of the legal framework for electronic identification, trust services, and the proposed EU business wallet.

With the Digital Omnibus, the Commission aims to improve the legal certainty, reliability, and cost effectiveness of digital regulations. It affects all companies that process personal data, as well as those that fall within the scope of the EU AI Act.

What’s next?

The Commission announced that it will publish a legislative draft for the Digital Omnibus in the fourth quarter of 2025.

Back to Top

6. CSAM Regulation: Germany Refuses to Support Compromise Proposal

In May 2022, the European Commission adopted its proposal for a draft regulation to govern how online services handle child sexual abuse material and attempts at the online solicitation of children (the “CSAM Regulation”). Since then, discussions in the EU Council have been deadlocked, particularly over the question of whether and to what extent the new law should allow authorities to impose proactive detection obligations on online services. The current Danish Council Presidency aimed to resolve this deadlock with its July 2025 compromise proposal.

What’s new?

Germany has now announced that it will not support the Danish Presidency’s compromise proposal for the proposed EU Child Sexual Abuse Regulation. During a Bundestag Digital Committee briefing on 10th September 2025, the Federal Interior Ministry stated that the Danish proposal “cannot be supported 100%,” reiterating Germany’s firm opposition to any measure that would weaken or circumvent end-to-end encryption.

Without Germany’s backing, the majority needed to adopt a Council position on the CSAM Regulation remains out of reach. Consequently, the proposal was again not brought to a vote in the Justice and Home Affairs Council on 14th October 2025, further prolonging uncertainty for messaging, hosting, and app store providers that would be subject to detection, reporting, and takedown obligations under the draft regulation.

The dossier thus remains politically and technically contentious, particularly over the prospect of client‑side scanning and its implications for privacy and encryption security.

What’s next?

Although the European Parliament adopted its position on the CSAM Regulation almost two years ago, Trilogue negotiations cannot begin until the Council agrees on a general approach. The continuing deadlock increases pressure on the European Commission, which first tabled the proposal in May 2022.

Meanwhile, the interim regime, which temporarily allows providers of number-independent interpersonal communication services to voluntarily detect and report CSAM without breaching the ePrivacy Directive’s confidentiality rules, remains in effect but is set to expire on 3rd April 2026. Unless extended again, the expiry could create a regulatory gap, reinforcing the urgency for political agreement on a permanent framework.

Back to Top

UK

7. Online Safety Act: Industry Implementation, Enforcement, and Next Steps

On 9th September 2025, the UK’s Office of Communications (“Ofcom”) published its quarterly industry bulletin on the UK Online Safety Act (the “OSA”). In its capacity as regulator under the OSA, Ofcom reviews the six weeks since new online child protection rules came into force, reflecting on the resulting industry-wide “sweeping change,” reiterating what in-scope companies need to know about OSA compliance, and indicating what is in Ofcom’s OSA pipeline.

How are in-scope companies responding to the new rules?

Companies of all sizes across a range of industries and sectors have announced, or already deployed, age checks across their services. This includes social media, dating, messaging, gaming, and adult content service providers.

Age checks are part of a number of key measures that in-scope companies should employ to ensure that they are “providing age-appropriate experiences” and complying with Ofcom’s Codes of Practice (and related guidance).

What is Ofcom doing about the new rules?
  • Enforcement – Investigations and programme

    Ofcom has stated that it is “actively checking compliance” and taking investigatory action where necessary, centering its efforts on the providers that represent the highest risk to UK users. In Q3 2025, Ofcom opened investigations into 69 sites and apps (two of which began as age-check investigations but were subsequently expanded to cover potential failures to adequately respond to statutory information requests).

    Ofcom has also launched or announced a number of programmes, including: (i) an age assurance enforcement programme; (ii) an illegal content risk assessment enforcement programme; (iii) a children’s risk assessment enforcement programme; (iv) an enforcement programme on the protection of children from harmful content through the use of age assurance; and (v) a monitoring and impact programme relating to content recommender systems, algorithms, and content moderation tools.
  • Compliance – What support is out there?

    It may seem like Ofcom is on an enforcement warpath, and many service providers with UK users fear that they will be made examples of and/or become collateral damage. However, it’s clear that Ofcom wants in-scope companies to be able to comply with their obligations, with the primary objective of working together to secure a safer digital world for children in the UK. Accordingly, it has made available a number of resources that may help in-scope companies navigate their obligations under the OSA in relation to protection of children and illegal content. These resources include compliance checker tools, explanatory videos, and risk assessment toolkits with helpful demos. For tailored support in understanding how the rules apply to your organization, and what practical measures you need to take to avoid falling foul of Ofcom’s enforcement activity, reach out to MoFo’s Digital Regulatory Compliance team.

What’s next?
  • Information Requests – Should you expect to receive one?

    Between October and December 2025, Ofcom plans to make statutory requests for information from certain service providers, age assurance vendors, and app store providers to support its: (i) consultations on additional duties for categorized services; (ii) consultation on fraudulent advertising measures; (iii) statutory report on age assurance; and (iv) statutory report on children’s app store usage.

    If Ofcom has you in its sights for these information requests (you’ll know about it if so—it will notify you directly), you should prepare to respond “fully, accurately and by the deadlines [it sets].” But, to provide some comfort, requests will be proportionate, with a “clearly stated purpose,” and given in good time.
  • Consultations – Have your say!

    Want Ofcom to take your views into account when it’s issuing OSA-related statements, guidance, and proposals? Then you should respond to the (many) current and planned consultations! These include consultations on the following:
    • Draft guidance on notification requirements in respect of online safety fees (closing 1st October 2025).
    • Draft proposals to strengthen Ofcom’s codes of practice on safety by design (closing 20th October 2025).
    • Updates to the Online Safety Information Powers Guidance in relation to Data Preservation Notices (closing 28th October 2025).
    • Draft guidance on super-complaints (closing 3rd November 2025).
    • Statement of Charging Principles relating to fees payable by providers (due to consult by the end of 2025).
  • New guidance – More clarity on the horizon

    Ofcom’s consultation on its draft guidance on improving online safety for women and girls closed on 13th October 2025. We expect the guidance to be finalized and published by the end of 2025.

Back to Top

8. Data Act 2025: Commencement Regulations

Following our last update, the UK government has enacted three sets of regulations that bring into force several sections of the Data (Use and Access) Act 2025 (the “DUAA”). While the DUAA predominantly revises parts of the UK’s data protection framework, it also triggers changes to the (still relatively new) Online Safety Act (the “OSA”).

What’s new?

The first Regulations were made on 21st July 2025, and brought Part 1 of the DUAA into force on 20th August 2025, establishing the legal basis for Smart Data schemes (which are frameworks that let customers and businesses securely access and share data held about them by service providers). These regulations gave the UK government power to create sector-specific schemes (for example, in finance, energy, or telecoms) and develop data-sharing rules that could require companies to make customer or business data available through approved channels in the future.

The DUAA (Commencement No. 2) Regulations 2025 bring into force section 124 on 30th September 2025, amending the OSA to require certain regulated online service providers to retain information related to child death investigations. These regulations grant Ofcom (the regulator under the OSA) additional powers to support a coroner’s investigation into the death of a child.

The DUAA (Commencement No. 3 and Transitional and Saving Provisions) Regulations 2025 brought into force certain sections of the DUAA, including provisions on legal professional privilege and national security, which took effect immediately on 5th September 2025, and introduced new exemptions from certain data‑subject rights to protect confidential communications and sensitive national-security information. Additionally, provisions on joint processing of personal data by intelligence services and relevant public authorities will come into force on 17th November 2025, setting out the conditions for shared data processing and aligning related parts of the legislation accordingly.

Notably, the UK government published its “Plans for Commencement” guidance on 25th July 2025, which outlines the staged implementation of the DUAA, and the Information Commissioner's Office (“ICO”) published a summary of key changes under the DUAA that will affect organizations on 19th June 2025, such as those relating to automated decision-making, children’s data, and international transfers.

What’s next?

The government confirmed that there will be further commencement regulations issued approximately six months after Royal Assent (in December 2025), including the main changes to data protection legislation in the DUAA.

It is expected that the commencement of provisions with a longer lead-in time and technology requirements will be in early 2026, such as measures to establish processes for handling data subjects’ complaints, which rely on appropriate technology being in place.

There have also been additional consultations that will result in associated pieces of guidance, including on organizations’ formal data protection complaints-handling procedures (which must be in place by June 2026, per the DUAA) and “recognised legitimate interests” and the new lawful basis for processing under the DUAA.

Back to Top

9. Human Rights and the Regulation of AI: UK Government Inquiry

On 25th July 2025, the UK Parliament’s Joint Committee on Human Rights (the “Committee”), which is tasked with evaluating whether proposed UK government bills are compatible with human rights, launched an inquiry entitled “Human Rights and the Regulation of Artificial Intelligence.” The inquiry seeks to “understand the risks and rewards that AI poses for human rights and work out if greater protections are needed,” given the transformative potential of AI.

Why is it happening?

A key part of the Committee’s role is to conduct thematic inquiries into topical sectors that have the potential to significantly impact human rights and are also likely to be the subject of future legislation.

What is it about?

The inquiry will explore three main areas:

  1. Privacy & AI – The implications of AI for privacy and data governance, assessing how AI-driven processing of personal data may trigger data-protection obligations, create risks of bias or unfair treatment, and challenge existing frameworks for accountability, redress, and compliance with digital and data-protection standards in an AI-enabled environment;
  2. Adequacy of Existing Law – The UK’s existing legal and regulatory framework and, in particular, whether current laws sufficiently protect human rights. The inquiry will also review the adequacy of the government’s AI implementation policy (as set out in its “AI Opportunities Action Plan”); and
  3. Required Amendments – The potential changes to the legal and regulatory framework that may be required to safeguard human rights, and mechanisms for enforcement.

To facilitate a comprehensive analysis, the Committee called for written evidence – the deadline for which was 5th September 2025.

What’s next?

The Committee will publish a report on its findings in the upcoming months (with a usual timeframe of three to six months between the opening of the inquiry and the publication) and request a deadline for the government’s response (typically ranging from two to four months after publication). Although the Committee’s recommendations are not legally binding, the report is likely to shape future debates on whether new legislation and regulations need to be introduced.

We expect that the report will address the need for different regulatory approaches depending on the type of AI technology, and take a view on how legislation can keep up with the pace of development. The European landscape on this topic will also be taken into account, in case it could inform any recommendations made in the report. Significantly, the report will explore the public’s methods of redress for any breach of human rights because of the use of AI, including addressing salient issues such as who should be held accountable and assigned liability in these instances.

The inquiry will question whether the private sector should be held to public sector human rights standards. Accordingly, corporate accountability may be extended beyond technical developers to include vendors and clients who utilize AI tools (such as for analytics and automation), resulting in stricter compliance requirements under the UK GDPR and new AI-specific laws. Organizations creating, using, or providing AI technologies may wish to monitor this inquiry with a view to anticipating and assessing potential implications in respect of AI governance, transparency, liability, and human-rights risk.

Those wishing to ready themselves to adapt swiftly to any reforms that arise from the ongoing inquiry and the wider evolution of the UK’s AI regulatory framework may wish to:

  • conduct AI and data protection risk reviews;
  • implement AI governance frameworks;
  • review the adequacy of end user communications in respect of AI-related processing;
  • conduct data protection impact assessments for all high-risk AI projects;
  • insert AI-specific clauses into supplier and customer contracts;
  • create internal policies for handling complaints or appeals where AI decisions impact individuals; and
  • provide targeted internal training on data ethics, privacy, and bias awareness.

Back to Top

10. Legal Challenge to the Online Safety Act Fails: Categorization Must Go On

Earlier this summer, Wikimedia Foundation (“WF”), the owner of Wikipedia, brought a legal challenge concerning the implementation of the Online Safety Act 2023 (“OSA”) before the UK’s Royal Courts of Justice. On 22nd and 23rd July 2025, the court heard arguments that the OSA (Category 1, Category 2A, and Category 2B Threshold Conditions) Regulations 2025 (the “Categorization Regulations”) should be set aside on the grounds that they were irrational and unlawful. However, the court rejected the challenge for lack of merit. Due to the legal challenge, Ofcom has delayed publishing its register of categorized services, which it had initially estimated would be announced in summer 2025.

What happened?

WF was granted permission for a judicial review of the Categorization Regulations, which set out the thresholds for categorized services. Ofcom advised the Secretary of State (“SoS”) on the thresholds for categorization on 29th February 2024, and its advice and corresponding research have been published. The SoS then implemented Ofcom’s recommendations in the Categorization Regulations, which came into force on 27th February 2025. WF challenged the Categorization Regulations, anticipating that Wikipedia would be classed as a Category 1 service.

Which services are Category 1?

Category 1 services are user-to-user services that either:

  • Have an average number of monthly active UK users of over 34 million, and use a content recommender system; or
  • Have an average number of monthly active UK users of over 7 million, and use a content recommender system, and provide a functionality for users to share user-generated content.

Category 1 services are subject to the most onerous reporting and mitigation obligations under the OSA. They also have additional obligations: the adult user empowerment duties, duties to protect users from fraudulent advertising, and duties to protect democratic and journalistic content.

Why did WF bring the claim?

The majority of WF’s arguments were grounded in the fact that WF is a non-profit organization that could not sustain compliance with the obligations applicable to categorized services. However, Mr Justice Johnson disagreed, stating that there is nothing in the OSA to show that its focus is limited to a particular type of company. In fact, he considered that the OSA is intentionally broad, capturing and regulating a wide range of services. The judge also noted that nothing in the OSA requires the SoS to take into account differing functionality between services in different sectors when making the Categorization Regulations. This leaves little hope for organizations attempting to de-scope themselves as “outliers.”

Although WF and its co-claimants argued that the Categorization Regulations were incompatible with the European Convention on Human Rights, this argument failed because it concerned a potential breach, rather than an actual breach, of rights. Whether there would be a violation of Convention rights would be fact‑dependent and also shaped by the relevant (as yet unpublished) code of practice.

What’s next?

The court considered WF’s claim premature, so, if Wikipedia is in fact categorized, the outcome of this challenge does not preclude WF from challenging the categorization decision.

As part of its categorization strategy, Ofcom has been gathering information on services that it believes should be categorized throughout the year. Following notification that a service is considered a categorized service and the publishing of the categorization register by Ofcom (which is expected imminently), a categorized service may wish to consider the merits of a judicial review of the decision or wait to see the outcome of any other challenges.

Ofcom will engage in additional consultations about the duties applicable to the categorized services between October 2025 and March 2026, following the publication of the categorization register. Ofcom has already published its position on transparency reporting.

Back to Top

11. Navigating the Ledger: ICO’s Draft Guidance on Distributed Ledger Technologies

On 28th August 2025, the Information Commissioner’s Office (“ICO”) launched a public consultation on its draft guidance regarding distributed ledger technologies (“DLTs”) (the “Draft Guidance”). The Draft Guidance marks another step in the ICO’s broader effort to help organizations embed data protection by design into innovative digital systems.

What’s new?

DLTs such as blockchain promise greater resilience, but their decentralized and immutable design challenges traditional interpretations of data protection law. The Draft Guidance offers practical advice for GDPR compliance, including when GDPR applies, how to define controller and processor roles, and how to reconcile user rights (such as deletion) with technologies that resist alteration.

Data protection authorities in France (CNIL), Spain (AEPD), and Germany (BSI) have already issued similar guidance, emphasizing risk minimization considerations such as off-chain storage, while the European Data Protection Board continues to finalize harmonized guidance. The ICO’s draft positions the UK squarely within this broader European push to balance innovation with privacy-by-design.

Key features of the ICO guidance include the following:

  • Applicability of data protection laws: On-chain data relating to organizations rather than individuals falls outside the scope of data protection law. However, for information relating to individuals (whether or not held by an organization), off-chain data (such as Know-Your-Customer data and IP logs) and additional metadata may also need to be considered in combination with on-chain data when determining whether personal information is involved. The ICO also flags that the use of a permissionless blockchain is unlikely to benefit from the personal and household use exemption from GDPR applicability, because personal information is shared with an unknown number of individuals anywhere in the world.
  • Controller/processor role clarity: Particularly for permissionless blockchains, identifying who determines the purposes and means of processing is complex due to the decentralized governance model of blockchains. The guidance suggests that participants who create transactions and send them for validation may be controllers (or joint controllers where there is a consortium), while participants who only operate validator nodes may act as processors (provided that they do not have broader decision-making powers).
  • Data protection by design and default: The Draft Guidance emphasizes that organizations should, as a first step, ask themselves whether blockchain is needed at all, or whether they could instead use a traditional database that may pose fewer data-protection challenges. In most cases, controllers should conduct a data protection impact assessment before deployment. The ICO encourages organizations to consider, at an early stage, what practices should be employed to achieve data protection compliance, such as zero-knowledge proofs, homomorphic encryption, and differential privacy.
  • Cross-border transfers: The Draft Guidance highlights the need to verify and vet all non-UK blockchain nodes to ensure that transfers of personal information outside of the UK comply with international transfer requirements. This could be difficult to demonstrate on a permissionless blockchain because nodes can be anywhere in the world with unknown identities.
  • Individual rights: Due to the immutable nature of blockchain, compliance with deletion and rectification rights poses unique difficulties. The ICO advises using design techniques such as storing personal data off-chain and linking to it on-chain via a hash of the data.
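The off-chain pattern in the last bullet can be sketched in a few lines of Python. This is an illustrative, simplified model only (the stores, the `record` and `erase` functions, and the sample data are our own, not from the Draft Guidance): personal data lives in a conventional, erasable store, while only a fixed-size hash is anchored to the append-only ledger, so a deletion request can be honored off-chain.

```python
import hashlib

off_chain_store = {}   # stands in for a conventional, erasable database
ledger = []            # stands in for the append-only blockchain

def record(personal_data: str) -> str:
    """Store data off-chain and anchor only its SHA-256 hash on-chain."""
    digest = hashlib.sha256(personal_data.encode()).hexdigest()
    off_chain_store[digest] = personal_data
    ledger.append(digest)  # the chain never sees the raw data
    return digest

def erase(digest: str) -> None:
    """Honor a deletion request by removing the off-chain copy.

    The on-chain hash remains immutable, but without the off-chain
    record it can no longer be resolved back to the individual.
    """
    off_chain_store.pop(digest, None)
```

Whether the residual on-chain hash itself still constitutes personal information is one of the questions the Draft Guidance asks organizations to assess, which is why the ICO frames this as a risk-mitigation technique rather than a complete answer.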

What’s next?

The ICO consultation will close on 7th November 2025, and the ICO’s review of the consultation responses will help to inform its finalization of the Draft Guidance.

Back to Top

Germany

12. Deepfakes: Germany’s Second Attempt at Criminalizing Non-Consensual AI-Generated Media

AI-generated and manipulated media (“deepfakes”) are becoming increasingly widespread. As access to AI tools grows, so do violations of personality rights, including cases of digital voyeurism and the creation of violent or pornographic content without consent. Italy has taken an early step within the EU, introducing a stand-alone criminal offense that penalizes the unlawful, i.e., non-consensual, dissemination of misleading deepfakes.

Germany may now follow suit. As reported in our Q3 2024 update, the German Federal Council (Bundesrat) had previously adopted a draft bill introducing a new Section 201b into the Criminal Code (StGB) to penalize undisclosed, seemingly “authentic” AI manipulations. The proposal failed, however, when early federal elections were called.

What’s new?

On 11th July 2025, the German Federal Council resubmitted a substantially identical draft to the (newly elected) Federal Parliament (Bundestag). While the legal content remains unchanged, the federal government’s stance on the proposal has clearly shifted. The prior government had questioned the need for a new offense and argued that no genuine enforcement gaps existed. The current government now acknowledges the serious risks posed by deepfakes and recognizes the need to close gaps, particularly concerning image-based sexualized abuse.

What’s next?

The Federal Parliament will now deliberate the bill. With Italy’s new law as a benchmark and the federal government’s revised tone, adoption appears more plausible than last year. Still, concerns over vagueness and potential over-criminalization persist, meaning the final wording may yet be refined as the legislative debate continues.

Back to Top

We are grateful to the following member(s) of MoFo’s European Digital Regulatory Compliance team for their contributions: Diya Gupta and Elena Pourghadiri, London office trainee solicitors; Antonia Albes, Berlin trainee solicitor; and Darius Schulz, Berlin office research assistant.

We are Morrison Foerster — a global firm of exceptional credentials. Our clients include some of the largest financial institutions, investment banks, and Fortune 100, technology, and life sciences companies. Our lawyers are committed to achieving innovative and business-minded results for our clients, while preserving the differences that make us stronger.

Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations. Prior results do not guarantee a similar outcome.