EU Digital Omnibus on AI: What Is in It and What Is Not?
On November 19, 2025, the European Commission published its long-awaited proposal for a Digital Omnibus package (“Digital Omnibus”). The goal of the Digital Omnibus is to strengthen EU competitiveness by reducing the regulatory burden on businesses, in particular, the time spent on administrative tasks and compliance obligations. The aim is to reduce administrative burdens by at least 25% for all businesses, and by at least 35% for small and medium-sized enterprises (SMEs), by 2029.[1] The term “Omnibus” originates from Latin, meaning “for all,” and signals the intent to take a holistic view across all digital legislation at EU level (AI Act, Data Act, GDPR, ePrivacy Directive, and Cybersecurity laws), aligning legal definitions, harmonizing requirements, and resolving overlaps. In this note we provide an overview of the Digital Omnibus on AI[2] and discuss what is in it and what is not.
Providers of AI systems that have been exempted from classification as high-risk under Art. 6(3) AI Act—because they are, for instance, only used for preparatory tasks—will no longer be required to register those systems in the EU database. Instead, such providers will only be obliged to document a self-assessment before the system is placed on the market or put into service. This relieves a vast number of businesses of a disproportionate administrative burden, considering that these AI systems do not pose a significant risk of harm to health, safety, or fundamental rights.
Providers and deployers of AI systems will no longer be required to take measures to ensure a sufficient level of AI literacy among their staff and other persons dealing with the operation and use of AI systems on their behalf. This obligation is lifted and transformed into an obligation of the Commission and the Member States to encourage providers and deployers of AI systems to take such measures. Such encouragement could consist of offering training opportunities, providing informational resources, facilitating the exchange of good practices, and other non-binding initiatives.
The Omnibus on AI introduces a new Art. 4a AI Act, which also allows providers and deployers of non-high-risk AI systems to use sensitive personal data to detect and correct bias (e.g., testing whether a job-application-screening model ranks CVs differently depending on racially identifiable features or names). The Article further relaxes the applicable threshold: currently, such use of sensitive personal data must be “strictly necessary” to detect and correct bias; the Omnibus proposes to lower the threshold to “necessary.” The Digital Omnibus on the Digital Legislative Framework[3] further allows the use of residual sensitive personal data when the removal thereof would require a disproportionate effort. The same Omnibus proposes several amendments to allow the use of regular personal data for the development and use of AI systems. Such use is allowed, subject to appropriate safeguards, including an unconditional right for individuals to opt out. The fact that the Member States have the right to require consent for AI development and operation is unhelpful, as it will undermine EU-wide harmonization. This topic will be discussed in detail in a separate MoFo alert.
The Omnibus proposal defers the AI Act obligations for high-risk AI systems from August 2026 until such later date when measures to support compliance, such as harmonized standards, common specifications, and Commission guidelines, are available. Communications from CEN-CENELEC’s Joint Technical Committee 21—the body responsible for drafting these standards—indicate that the full standards may not be available before December 2026.[4] Only six or 12 months after the Commission confirms their availability—depending on the type of high-risk AI system—will the rules apply.[5] In any event, the high-risk AI requirements will apply as of December 2, 2027 (i.e., a postponement of 16 months maximum) or August 2, 2028 (i.e., a postponement of 24 months maximum), respectively.
While it defers the compliance burden on businesses, this postponement does not simplify the requirements as such, and the strict sunset date for the suspension is problematic, particularly if the development of harmonized standards faces further delays. Implementing the AI Act’s high-risk requirements will involve a multitude of individual standards, and businesses consistently report needing a minimum of 12 months to achieve compliance with even a single standard, based on prior compliance experience.[6]
The Omnibus proposal delays the application of the transparency obligation in Art. 50(2) of the AI Act by six months, until February 2, 2027, for AI systems placed on the market before August 2, 2026. According to Art. 50(2) AI Act, providers of AI systems generating synthetic audio, image, video, or text content shall ensure that the outputs of the AI system are marked in a machine-readable format and are detectable as artificially generated or manipulated (e.g., by using watermarks or metadata identifications). Presumably this is because the Code of Practice on the marking and labeling of AI-generated content is not expected to be published until May/June 2026.[7] The remaining obligations under Art. 50 AI Act continue to apply as of August 2, 2026. It is unclear why the postponement applies only to para. 2 and not to the entire Art. 50 AI Act. Given the provision’s overall lack of clarity and the possibility of delays, the entry into application should be linked to the publication of the Codes of Practice under Art. 50(7) and the Commission Guidelines under Art. 96(1)(d), mirroring the postponement mechanism established for high-risk AI systems.
The Omnibus proposal extends the possibility of benefiting from a simplified way to comply with the obligation to establish a quality management system (Art. 17 AI Act)—currently offered only to microenterprises (Art. 63 AI Act)—to all SMEs, including startups. Several regulatory flexibilities and support measures of the AI Act—originally reserved for SMEs—have been extended to also cover small mid-cap companies (SMCs).
The AI Office becomes exclusively responsible for the supervision and enforcement of AI Act obligations for AI systems based on GPAI models (where the model and system are developed by the same provider)[8] and systems that constitute or are integrated into a VLOP or VLOSE under the DSA. This approach avoids overlapping responsibilities and diverging national enforcement action and ensures that the Commission’s supervisory and enforcement powers under the AI Act and the DSA are exercised in a coherent manner.
The Commission—or entrusted notified bodies acting on its behalf—shall organize and carry out pre-market conformity assessments before high-risk AI systems under the AI Office’s supervision (see the preceding point) are placed on the market. These tests and assessments verify that the system complies with the requirements of the AI Act and may be placed on the market or put into service in the Union. The provider bears the costs for the testing and assessment.
The Omnibus proposal clarifies that in cases of dual classification of a product as a high-risk AI system[9] (e.g., use of emotion recognition systems in medical devices), the provider should follow the relevant conformity assessment procedure under the relevant sectoral product law. The obligations for high-risk AI systems of the AI Act are integrated into this sectoral conformity assessment. The proposal also clarifies that notified bodies under sectoral product legislation must apply to be designated as notified bodies under the AI Act within 18 months after the AI Act starts to apply (i.e., by February 2, 2028) if they want to assess high-risk AI systems. As a result, it can be assumed that, during the transition period until February 2, 2028, a designation under sectoral product law is sufficient. This helps avoid certification bottlenecks and delays during the ramp-up phase.
In this context, the new “once-only” principle allows providers to submit just one application and undergo one assessment for designation under both the AI Act and the relevant sectoral product law (Annex I, Section A), if such a joint procedure already exists in the sectoral rules. This is meant to avoid duplicated designation processes under both frameworks.
The existing wording of the AI Act left room for interpretation as to whether bodies already notified under sectoral product law require comprehensive renotification under the AI Act. It would be preferable to explicitly omit the separate AI Act (re)notification, rather than establishing a transition period, and instead provide training on AI-related requirements to notified bodies under sectoral product legislation.
Overall, the Omnibus proposal offers only limited substantive reductions in regulatory requirements and bureaucratic burden and fails to address pressing needs articulated by the industry. Considering the significant criticism voiced by industry stakeholders—mirrored in Member State comments—the extent of true regulatory relief on AI remains modest at best. The considerable delay in developing the necessary support tools for implementing the obligations under the AI Act clearly highlights the complexity of the rules. If even the Commission and standardization organizations fail to meet their own clarification goals and deadlines, how can the industry be expected to comply with often complex and unclear requirements?
Against this backdrop, the Omnibus proposal falls short of expectations in several key respects, including the following most pressing areas:
Removal of fundamental rights impact assessments: The Omnibus failed to delete the requirement for fundamental rights impact assessments for deployers of high-risk AI systems under Art. 27 AI Act. Apart from the fact that fundamental rights protect individuals and companies from the state, not from one another, there is an overlap with the data protection impact assessments under Art. 35 GDPR. Given this overlap, it should be clarified unequivocally that it is sufficient to carry out an enhanced data protection impact assessment,[11] rather than introducing a separate additional assessment.
Expansion of research exemption: The research exemption in Art. 2(8) AI Act should be extended to include real-world testing, rather than expressly excluding it. Furthermore, the scope of the research exemption in Art. 2(6) AI Act should be clarified and expanded to cover all AI systems that serve the purpose of scientific research and development, not only those that are specifically developed and put into service for the “sole purpose” of scientific R&D. This narrow formulation may be interpreted to exclude AI systems or models whose research outputs may later be used in the development of commercial products, such as medicines and medical devices.
Allow sandboxes to grant presumption of conformity: The Omnibus proposal broadens the use of AI regulatory sandboxes and real-world testing, allowing for the possibility of establishing an AI regulatory sandbox at the Union level for an AI system based on GPAI and extending the scope of real-world testing outside of AI regulatory sandboxes to certain high-risk AI systems. However, the true benefit of the regulatory sandbox mechanism will only materialize if the competent authority has the ability to certify that an AI system tested within the sandbox complies with the requirements and obligations of the AI Act, thereby creating a genuine presumption of conformity.
No additional national measures: The possibility for national authorities to adopt additional measures beyond those set out in the AI Act where a compliant AI system is nonetheless deemed to present a risk (Art. 82 AI Act) should be deleted to avoid regulatory fragmentation.
The Omnibus proposal will now be submitted to the European Parliament and the Council for discussion and adoption under the ordinary legislative procedure. At this stage, the proposal remains a draft and is likely to be subject to extensive debate and amendment by both institutions. However, the EU would be well advised to resist any watering down of the proposed simplifications and instead seize this opportunity to further reduce complexity and strengthen legal coherence. In the second phase of its effort to simplify EU digital regulation, the Commission will conduct a Digital Fitness Check, a comprehensive “stress test” of the Digital Rulebook. Stakeholders can provide input until March 11, 2026.[12]
The EU has built a broad set of new frameworks for data, cybersecurity, platforms, media, and notably AI, thereby transforming the legal landscape of Europe’s digital economy. The result has been high standards for fundamental rights and trust, at the cost of growing regulatory complexity. Businesses are faced with increasingly burdensome and, at times, overlapping compliance obligations, accompanied by mounting uncertainty about their scope and interplay. As these new laws move into their critical implementation phase, the challenges are becoming even more apparent, particularly in the global race to harness the potential of AI. The Digital Omnibus package comes in response to Draghi’s landmark competitiveness report,[13] which identified closer coordination in EU governance and a reduction of the regulatory frictions currently at play in the European economy as building blocks for closing the innovation gap that has developed in Europe vis-à-vis the United States and China, particularly in advanced technologies.
It is an interesting exercise for the EU Commission to confront the EU’s own tendency towards overregulation, and the intention to reduce and streamline regulation is certainly commendable. There is a growing perception among certain Member States that the AI-related regulatory burden on businesses is unhealthy for innovation and competitiveness, applying pressure on the EU Commission to simplify such regulation. To make the time spent on the Omnibus worthwhile, this opportunity should be used to create meaningful, substantive relief for businesses.
[1] European Commission, Press Release: Simpler EU digital rules and new digital wallets to save billions for businesses and boost innovation, November 19, 2025, https://ec.europa.eu/commission/presscorner/detail/en/ip_25_2718 (last accessed November 27, 2025).
[2] Proposal for a Regulation of the European Parliament and of the Council amending Regulations (EU) 2024/1689 and (EU) 2018/1139 as regards the simplification of the implementation of harmonised rules on artificial intelligence, COM(2025) 836 final, 2025/0359 (COD), November 19, 2025, https://ec.europa.eu/newsroom/dae/redirection/document/121744 (last accessed on November 20, 2025).
[3] Proposal for a Regulation of the European Parliament and of the Council Amending Regulations (EU) 2016/679, (EU) 2018/1724, (EU) 2018/1725, (EU) 2023/2854 and Directives 2002/58/EC, (EU) 2022/2555 and (EU) 2022/2557 as regards the simplification of the digital legislative framework, and repealing Regulations (EU) 2018/1807, (EU) 2019/1150, (EU) 2022/868, and Directive (EU) 2019/1024, COM(2025) 837 final, 2025/0360 (COD), November 19, 2025, https://ec.europa.eu/newsroom/dae/redirection/document/121742 (last accessed on November 20, 2025).
[4] CEN/CENELEC adopted measures to accelerate the delivery of key standards developed under CEN-CLC/JTC 21 “Artificial Intelligence,” and the deliverables requested under Standardization Request M/593 (and its Amendment M/613), to ensure “these standards are available by Q4 2026”; see CEN/CENELEC, Update on CEN and CENELEC’s Decision to Accelerate the Development of Standards for Artificial Intelligence, October 23, 2025, https://www.cencenelec.eu/news-events/news/2025/brief-news/2025-10-23-ai-standardization/ (last accessed on November 24, 2025); see also Bertuzzi, EU’s AI Act standards to be ready on the heels of legal application deadline, May 16, 2025, https://www.mlex.com/mlex/amp/articles/2341169 (last accessed on November 25, 2025).
[5] Six months after the Commission confirms their availability, the rules for high-risk AI systems listed under Annex III (e.g., AI systems in critical infrastructure or human resources) will apply, and 12 months after the Commission confirms their availability, the rules for high-risk AI systems under Annex I (e.g., AI systems in the medical devices or machinery sectors) will apply. However, the latest possible application dates are set at December 2, 2027 (i.e., a postponement of maximum 16 months) for Annex III systems and August 2, 2028 (i.e., a postponement of maximum 24 months) for Annex I systems.
[6] Kilian/Jäck/Ebel, European AI Standards – Technical Standardization and Implementation Challenges Under the EU AI Act, March 24, 2025, p. 24, https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5155591 (last accessed on November 27, 2025).
[7] European Commission, Code of Practice on marking and labelling of AI-generated content, https://digital-strategy.ec.europa.eu/en/policies/code-practice-ai-generated-content (last accessed on November 26, 2025).
[8] AI systems related to products covered by sectoral product legislation (Annex I Section A) and AI systems, which are under the supervision of the European Data Protection Supervisor (Art. 74(9) AI Act), will not fall under the competence of the AI Office.
[9] This refers to cases of classification as a high-risk AI system under both sectoral product law (Annex I Section A) and Annex III.
[10] Annex to the Communication to the Commission, Approval of the content of the draft Communication from the Commission – Guidelines on the scope of the obligations for general-purpose AI models established by Regulation (EU) 2024/1689 (AI Act), C(2025) 5045 final, July 18, 2025, Point 3.2, https://ec.europa.eu/newsroom/dae/redirection/document/118340 (last accessed on December 1, 2025).
[11] Art. 27(4) of the AI Act already allows for this possibility in principle, but makes it conditional on “the obligations laid down in this Article […] already [being] met through the data protection impact assessment conducted pursuant to Article 35 of Regulation (EU) 2016/679 or Article 27 of Directive (EU) 2016/680.”
[12] European Commission, Digital fitness check – testing the cumulative impact of the EU’s digital rules, https://ec.europa.eu/info/law/better-regulation/have-your-say/initiatives/15554-Digital-fitness-check-testing-the-cumulative-impact-of-the-EUs-digital-rules_en (last accessed on November 21, 2025).
[13] The Draghi report on EU competitiveness, https://commission.europa.eu/topics/competitiveness/draghi-report_en (last accessed on November 20, 2025).