Data, Cyber + Privacy Predictions for 2026
The Morrison Foerster Data, Cyber + Privacy team (yes, we’ve changed our name to better reflect all we do across data, cybersecurity, and privacy) provides creative, practical advice across every stage of the information life cycle, from navigating complex privacy laws and managing breach response to litigating data security claims and defending enforcement actions.
As 2026 approaches, rapid technological innovations and evolving global regulations are set to reshape the privacy and cybersecurity landscapes.
We asked our Data, Cyber + Privacy team—leaders at the forefront of these issues—to share what clients can expect in 2026 as the worlds of technology, regulation, and accountability continue to converge. The following are our predictions for the year ahead.
Miriam Wugmeister, Partner, Data, Cyber + Privacy
The number of cyber incidents will continue to rise, particularly outside of the U.S., but the number of companies that elect to pay extortion relating to data exfiltration will continue to go down.
Linda Clark, Partner, Data, Cyber + Privacy
In 2026, regulators will keep strengthening cybersecurity laws, but rules will still vary widely across regions. They’ll focus more on holding companies accountable for protecting sensitive data, even as threat actors grow more sophisticated, harder to detect, and increasingly reliant on AI-driven attacks. Regulators will push for greater transparency, requiring organizations to share more about incidents and prove they’re staying current on emerging threats.
Jasmine Arooni, Associate, Data, Cyber + Privacy
As security incidents become more routine, media coverage is beginning to taper off and may continue to do so in 2026, with many incidents no longer drawing headlines or sustained public attention. That decline in visibility should not be mistaken for reduced regulatory scrutiny. Regulators remain focused on how organizations manage, disclose, and remediate such events. Even if fewer make the front page, incidents—and the enforcement that follows—will remain top of mind for regulators.
Dan Alam, Associate, Data, Cyber + Privacy
The last 18 months have seen the UK’s retail, manufacturing, and healthcare sectors attacked in major cybersecurity incidents. In response, the UK government is pressing ahead with the “Cyber Security and Resilience Bill,” tightening incident-reporting duties for IT providers and expanding the powers of the Information Commissioner’s Office. Alongside this, the UK government is consulting on proposals to ban cyber-related ransom payments from public bodies and critical infrastructure and create a new ransom payment reporting regime. Businesses should pay close attention to these two initiatives, focusing on the scope of the regimes and how they interact with each other and with existing requirements.
Jasmine Arooni, Associate, Data, Cyber + Privacy
The New York State Department of Financial Services (NYDFS) remains one of the most assertive cybersecurity regulators, and in 2026 it will continue expanding enforcement under the amended Cybersecurity Regulation. The year will mark the first full examination cycle under the amended regulation, with enforcement activity expected to focus on governance, risk assessment, and multi-factor authentication, testing how effectively companies have operationalized these obligations. The resulting actions will further define “reasonable security” in practice and cement NYDFS’s influence on cyber governance standards in the U.S.
Dillon Kraus, Associate, Data, Cyber + Privacy and Litigation
Cyber incidents will continue to trigger near-automatic class actions, but 2026 will mark a turning point in how companies defend them. Courts are increasingly scrutinizing privilege around forensic reports and incident communications, underscoring the importance of early legal engagement and coordinated response playbooks. We expect regulators to echo this emphasis, promoting clearer standards for “reasonable” cybersecurity governance. Companies that invest in disciplined, privileged response frameworks and document security decisions defensibly will fare better both in litigation and in regulatory reviews.
Jasmine Arooni, Associate, Data, Cyber + Privacy
Information-sharing between companies and law enforcement remains one of the most effective tools for disrupting cyber threats, but confidence in those partnerships is showing signs of strain. Increasing politicization within federal agencies has made some organizations wary of close cooperation. In 2026, voluntary cooperation may decline, and the result could be a weakening of coordinated cyber defense efforts at a time of increasing cyber activity.
Kaylee Cox Bankston, Partner, Data, Cyber + Privacy
In 2026, cybersecurity regulation will continue expanding through a national security lens, driven by the rise of artificial intelligence (AI) and the growing dependence on private infrastructure for national resilience. AI will reshape the threat landscape, enabling adversaries to automate and scale attacks, adapt faster to defenses, and blur the line between nation-state and criminal activity. As attacks grow more sophisticated, the barrier to entry will fall, allowing less experienced actors to conduct advanced operations. Against a backdrop of heightened geopolitical tension, foreign adversaries will increasingly use AI-powered cyber operations to pursue national objectives, from targeting critical infrastructure to spreading disinformation. In response, governments will intensify efforts to close the gap between innovation and regulation, emphasizing resilience and operational security. Cybersecurity’s deepening link to global stability will create a more complex regulatory landscape, one demanding agility, vigilance, and proactive governance from organizations.
Read More: Can Cyber Threat Intelligence Sharing Continue After CISA 2015’s Lapse?, NIH Follows in FDA’s Footsteps and Adopts “Bulk Sensitive Data” Policy That Goes Beyond DOJ Rule Requirements, and CTRL+ALT+DEFEND: Insights on Emerging Threats and Best Practices for Cyber Risk Mitigation
Jasmine Arooni, Associate, Data, Cyber + Privacy
California’s 2026 amendment to its data breach law, SB 446, may not introduce novel timelines—its 30-day individual notice and 15-day attorney general reporting requirements mirror other states’ obligations—but it signals a broader trend toward faster, more transparent consumer notification and greater accountability for incident handling. In 2026, SB 446 is expected to reinforce the state’s role in shaping how quickly organizations must investigate, confirm, and disclose security incidents.
Marian Waldmann Agarwal, Partner, Data, Cyber + Privacy
There will be no omnibus U.S. privacy or AI law yet. State AI laws will continue to be proposed and passed, but there will be a shift in focus to areas considered safe from potential challenge or preemption following the December Executive Order (e.g., child safety precautions). These laws will continue to focus on requirements for pre-deployment impact assessments, recurring bias audits, public transparency notices, and meaningful human review of automated decisions.
Boris Segalis, Partner, Data, Cyber + Privacy
As AI companies hunt for new sources of training data to fuel their models, 2026 will bring sharper focus on data rights. Regulators and litigants will increasingly test how consent, deletion, and opt-out rights apply to data used for model training and retraining, as well as stakeholders’ responsibilities to provide and honor those rights. The answers will define not just compliance costs but the economics of generative AI itself.
Nathan Taylor, Partner, Data, Cyber + Privacy and Litigation
Although Congress is expected to face ongoing challenges in advancing major legislative initiatives in 2026, AI legislation with significant privacy components (but also state law preemption) will move forward. A comprehensive federal privacy law, however, will remain on the legislative agenda for another year.
Marijn Storm, Partner, Data, Cyber + Privacy
As the August 2, 2026 compliance deadline for a majority of the provisions of the EU AI Act approaches, member states will begin issuing national guidance and sectoral interpretations. With Italy kicking off this trend, this secondary legislation will introduce divergent approaches to risk classification, registration, and the treatment of open-weight models. The result will mirror the early GDPR years: legal uncertainty and regulatory fragmentation across Europe, requiring multinationals to monitor developments at both the EU and national levels to ensure compliance.
Marijn Storm, Partner, Data, Cyber + Privacy
In 2026, companies operating in sectors such as energy, finance, and healthcare will treat AI governance not as an optional good practice but as a core pillar of critical infrastructure resilience. Guided by the U.S. Department of Homeland Security’s “Roles & Responsibilities Framework for AI in Critical Infrastructure,” companies will begin embedding model-risk governance, supply chain scrutiny, and AI-system intrusion testing into their standard operations, signaling a shift from voluntary guidance toward de facto industry norms.
Read More: Data Privacy at the Crossroads
Marian Waldmann Agarwal, Partner, Data, Cyber + Privacy
State attorneys general will use the new consumer privacy law limitations on profiling to regulate high-risk AI use. Enforcement actions will likely focus on inadequate notices, missing or difficult-to-use opt-outs, discriminatory outcomes, and ineffective appeals processes.
Boris Segalis, Partner, Data, Cyber + Privacy
Having faced few legal challenges, state general privacy laws are now digging deeper and wider. “Second-generation” state laws and regulations that go beyond consumer data rights to regulate data brokers, algorithmic transparency, and automated decision-making will proliferate in 2026. New state-level regulators—from Colorado’s Division of Privacy and Data Protection to California’s Privacy Protection Agency and emerging AI bureaus in states like New York and Illinois—will test the limits of their authority. The result: overlapping and sometimes conflicting obligations that reshape how companies operationalize privacy and AI governance programs.
Marijn Storm, Partner, Data, Cyber + Privacy
As AI regulation matures globally, boards will elevate AI governance to the same level as data protection and cybersecurity oversight. We expect organizations to establish dedicated AI compliance committees responsible for mapping model inventories, validating risk classifications, and ensuring board attestation under the EU AI Act. Regulators will increasingly view AI governance as an enterprise risk management function, one that demands demonstrable accountability from senior leadership.
Read More: Ways to Address AI Governance
Boris Segalis, Partner, Data, Cyber + Privacy
Having enacted new laws and issued new privacy and AI regulations, states will pursue actions enforcing those requirements in 2026. We expect enforcement themes around opaque algorithmic profiling, data broker transparency failures, and mishandled consumer deletion requests under “Delete Acts.” Companies will need defensible documentation, governance records, and cross-state audit trails that can stand up to multi-state scrutiny.
Michelle Luo, Associate, Technology + Transactions and Data, Cyber + Privacy
Particularly during the first half of 2026, we expect to see companies intensifying their preparations to achieve compliance with the EU AI Act ahead of its August 2, 2026 deadline. In practice, these efforts are likely to focus on the requirements applicable to high-risk AI systems and transparency obligations. Corporate compliance strategies will likely be shaped by future European Commission guidance on these areas, which is expected to be published prior to the compliance deadline.
Read More: EU AI Act – Landmark Law on Artificial Intelligence Approved by the European Parliament
Boris Segalis, Partner, Data, Cyber + Privacy
As privacy and AI regulatory complexity explodes, a new generation of companies will emerge to help solve compliance issues across privacy and AI obligations. After many failed attempts to tackle compliance with workflow-based products, AI opens the possibility of creating compliance products that will be scalable and flexible. In 2026, we will see these companies emerge and overtake traditional privacy tech companies that fail to adapt to the new AI reality.
Lokke Moerel, Sr. Of Counsel, Data, Cyber + Privacy
In light of geopolitical developments, EU institutions and supervisory authorities are poised to shift their focus from an AI Comply Strategy (forcing foreign LLMs to comply with EU requirements) to implementing the Commission’s Apply AI Strategy, the foundation of which is to ensure sovereign EU LLMs are compliant with EU legislation and public values. Privacy will be one of the many issues to be tackled. These sovereign EU LLMs will not be trained on scraped data and social media data, avoiding the legal debates about whether digital platform providers require an opt-out or opt-in to use their users’ content to train AI. Once sovereign EU models are in place, the AI Comply Strategy may become a much bigger issue in the future.
Mary Race, Of Counsel, Data, Cyber + Privacy
In 2025, we saw a rising tide of enforcement actions by state privacy regulators, with a focus on dark patterns, consumer rights, and malfunctioning opt-out processes. We expect the surf to get rougher in 2026, with more frequent, more expensive, and more coordinated enforcement actions, particularly in states such as California, Colorado, and Connecticut, which teamed up near the end of 2025 to enforce compliance with the Global Privacy Control. Their message to companies for 2026: you can’t opt out of honoring opt-outs.
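On the technical side, the Global Privacy Control the states are enforcing is a simple signal: browsers send a `Sec-GPC: 1` request header (and expose `navigator.globalPrivacyControl` to page scripts), and sites must treat it as a valid opt-out of sale/sharing. The sketch below shows one way a server might honor the signal; the header name and value come from the GPC specification, while the helper names and the consent-state shape are illustrative assumptions, not any particular framework’s API.

```javascript
// Detects the Global Privacy Control opt-out signal from HTTP request
// headers. Per the GPC specification, the signal is the header
// "Sec-GPC" with the exact value "1"; browsers also mirror it as
// navigator.globalPrivacyControl for client-side scripts.
function honorsGpcOptOut(headers) {
  // HTTP header names are case-insensitive, so normalize before lookup.
  const normalized = {};
  for (const [name, value] of Object.entries(headers)) {
    normalized[name.toLowerCase()] = value;
  }
  return normalized["sec-gpc"] === "1";
}

// Illustrative consent resolution: a GPC opt-out overrides any stored
// "allow" preference, since regulators have treated ignoring the
// signal as a violation of state opt-out requirements.
function buildConsentState(headers, storedPreferences = {}) {
  const gpcOptOut = honorsGpcOptOut(headers);
  return {
    allowSaleOrSharing:
      !gpcOptOut && storedPreferences.allowSaleOrSharing === true,
    source: gpcOptOut ? "gpc-header" : "stored-preference",
  };
}
```

A site wiring this into request handling would call `buildConsentState(req.headers, savedPrefs)` before firing any ad-tech or data-sharing integrations.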
Melissa Crespo, Partner, Data, Cyber + Privacy
New York’s Child Data Protection Act took effect on June 20, 2025, with attorney general guidance already in place. We expect 2026 to bring AG-led activity and copycat bills in other states that extend protections to minors under 18 and curb targeted advertising and data-sharing. California’s Age-Appropriate Design Code remains enjoined, so many states are gravitating toward New York-style data rules rather than “design code” mandates.
Josh Fattal, Associate, Data, Cyber + Privacy
In January 2026, the California Privacy Protection Agency is set to launch a new one-stop-shop deletion mechanism under the Delete Act, allowing California consumers to request the deletion of their personal information held by data brokers. Data brokers must comply with this mechanism by August 2026. In anticipation of these new requirements, California—and other states—are likely to intensify enforcement efforts targeting data brokers next year.
Read More: Shifting Landscapes
Katherine Wang, Associate, Data, Cyber + Privacy and Litigation
Plaintiffs will continue filing claims under the Federal Wiretap Act and similar state laws, with web-based tracking technologies like pixels and cookies remaining in focus. At the same time, some plaintiffs may push the boundaries by targeting newer technologies, including drones transmitting radio frequency data and devices that collect sensor information. Companies deploying such tools should anticipate novel theories of liability as courts continue to grapple with how decades-old statutes apply to modern data practices.
Tina Reynolds, Partner, Government Contracts + Public Procurement
As the Department of Defense (DoD) begins to fully implement its long-planned Cybersecurity Maturity Model Certification (CMMC) program, federal contractors and subcontractors will need to implement required security measures and, in many instances, receive third-party confirmation of compliance from a designated certified third-party assessor organization (C3PAO). The CMMC program is being implemented through a phased approach that began in November 2025; by November 2028, nearly all DoD solicitations and contracts will require contractor conformance to one of three levels of cybersecurity requirements.
Read More: Department of Defense Finalizes Long-Awaited Cybersecurity Rule
Joe Folio, Partner, Data, Cyber + Privacy
In 2024, the United States added two legal regimes—the Protecting Americans’ Data from Foreign Adversaries Act (PADFAA), administered by the Federal Trade Commission (FTC), and regulations governing Access to U.S. Sensitive Personal Data and Government-Related Data by Countries of Concern or Covered Persons, administered by the Department of Justice (DOJ)—that restrict transactions involving certain types of sensitive U.S. data. To date, there has been no enforcement action under either regime. In 2026, however, the FTC will likely announce its first resolution under PADFAA before the DOJ or a company announces an investigation or enforcement action under the regulations.
Read More: FTC Looks to Leverage PADFAA Enforcement
Brittnie Moss-Jeremiah, Associate, Data, Cyber + Privacy
Across jurisdictions, 2026 is likely to bring closer regulatory scrutiny of how organizations handle individual rights requests, including data subject access requests (DSARs). In the UK, the new Data (Use and Access) Act will prompt a shift toward more structured DSAR management, with organizations expected to adopt clearer, documented processes. Regulators may also begin to link effective handling of individual rights requests to broader governance and accountability practices. The good news is that while expectations are tightening somewhat, clearer procedural rules and increased automation should make DSAR compliance more operationally achievable.
Alex van der Wolk, Partner, Data, Cyber + Privacy
In 2026, the Court of Justice of the European Union (CJEU) will answer questions raised in multiple cases regarding the scope of EU privacy class actions. Multiple class action cases in, for example, the Netherlands have been put on hold because of the lack of clarity around fundamental issues of privacy class action litigation. For example, does a claims organization representing individuals on privacy claims for damages require an explicit mandate from those individuals (opt-in), or can it also operate on an opt-out basis—as is currently the case for other types of mass claims in the Netherlands? The numerous CJEU cases on immaterial damage awards in 2025 indicate that this area of the law is still very much in development and will continue to evolve in 2026.
Hanno Timner, Partner, Data, Cyber + Privacy
With the Digital Omnibus, the EU Commission is pushing ahead with the simplification of digital regulations from 2026 onwards. By simplifying and harmonizing existing data-related regulations, including AI regulations, rules on cookies and other tracking technologies, and rules on reporting and cybersecurity requirements, the Digital Omnibus aims to create legal clarity and reduce duplicate reporting and compliance costs. The goal of reducing administrative and financial burdens is also reflected in the German government’s draft bill on reducing bureaucracy. Both measures complement each other and could bring significant organizational benefits for companies starting in 2026.
Read More: Simplification – digital package and omnibus
Marijn Storm, Partner, Data, Cyber + Privacy
In 2026, regulators and courts will increasingly focus on whether data subject rights can be meaningfully applied to trained AI models. We expect a growing consensus that rights such as access, rectification, and deletion should target model outputs rather than the unstructured datasets used for training. This conceptual shift would make enforcement more practical and privacy protection more effective, redefining the intersection between the GDPR and generative AI systems.
Hanno Timner, Partner, Data, Cyber + Privacy
On October 29, 2025, the German Federal Government presented its draft law implementing the EU Data Act in Germany. The implementation law specifies which authority citizens and companies in Germany can turn to with questions and disputes regarding the Data Act. Main objectives of the Data Act include increasing the availability of data from networked devices, facilitating switching between cloud computing providers, and generally promoting data access and data use in Europe. The main news, however, is what the German Federal Government did not include in its draft implementation law: Germany is implementing the EU requirements without additional requirements, ending the usual “gold plating” of EU laws for application in Germany.
Alex van der Wolk, Partner, Data, Cyber + Privacy
While EU countries should have implemented the NIS2 Directive into their national laws by October 2025, only about half of them have actually done so. Going into 2026, this means we will see more countries, notably including Germany, France, the Netherlands, and Ireland, issue their national cybersecurity laws. Until then, the NIS2 rules are simply not in effect in these countries. NIS2 is a comprehensive cybersecurity framework applicable to specific sectors and industries, extending well beyond critical infrastructure alone. While the compliance requirements in the Directive are detailed, there are still many questions around implementation. We anticipate further guidance will be issued in 2026, in particular by countries that have yet to issue their national laws.
Michelle Luo, Associate, Technology + Transactions and Data, Cyber + Privacy
In 2026, European data protection authorities (DPAs) are anticipated to increasingly scrutinize company privacy policies and other notices to individuals, including via questionnaires or fact-finding exercises. We expect to see DPAs taking follow-up enforcement actions (such as administrative fines and corrective orders) against organizations if they identify any non-compliance with GDPR transparency and information obligations. These topics have been designated as the enforcement focus for 2026 under the Coordinated Enforcement Framework and will be the subject of a report by the European Data Protection Board.
Brittnie Moss-Jeremiah, Associate, Data, Cyber + Privacy
After years of rapid rulemaking, EU policymakers are beginning to address overlap across the EU’s digital frameworks. In 2026, the European Commission’s “Digital Omnibus” initiative and similar projects are expected to begin aligning obligations under the GDPR, AI Act, Data Act, and NIS2. Full harmonization remains distant, but clearer definitions, streamlined reporting duties, and coordinated enforcement should ease overlap for organizations and make compliance more globally transferable. The impact will be gradual, but these first steps will mark a shift toward a more coherent EU digital governance model, one the UK and other jurisdictions may soon start to mirror.
Dan Alam, Associate, Data, Cyber + Privacy
The ICO was hot on cookie enforcement in 2025, auditing the UK’s top 1,000 websites on their use of advertising cookies. The audit coincides with significant reform: the Data (Use and Access) Act has enhanced the ICO’s powers to fine businesses for cookie violations while relaxing consent requirements for certain cookies that have a low privacy impact. Across the EU, regulators remain active in enforcing cookie rules and the European Commission is looking (again) at reforms to reduce the circumstances in which consent is required for cookies. In 2026, two things are certain: cookie pop-ups will not go away, and regulators will continue to follow up with businesses about their cookie compliance.
Mercedes Samavi, Of Counsel, Data, Cyber + Privacy and TTG
In 2026, we should start to see the UK and EU flexing their enforcement muscles under the Online Safety Act and Digital Services Act. Businesses have had time to come to grips with regulatory guidance and expectations and thus should be looking to shift beyond initial compliance steps towards more advanced, proactive risk management (e.g., deploying advanced content moderation technologies and evaluating appropriate age assurance measures). It will be particularly interesting to see how the legal obligations under online safety regimes are supported (or hindered) by new developments in AI and how oversight for such issues is divided up among regulators.
Yukihiro Terazawa, Partner, Data, Cyber + Privacy, and Takaki Sato, Of Counsel, Data, Cyber + Privacy
The Japanese privacy regulator, the Personal Information Protection Commission (PPC), is considering amending the Act on the Protection of Personal Information (APPI), but a draft of this amendment has not yet been prepared. In March 2025, the PPC published an issue list that included both tightening and loosening of regulations. Particular attention has been drawn to potential enhancements to APPI enforcement, such as the possible introduction of class actions and the imposition of fines calibrated to a violator’s revenue. At the same time, the PPC is considering exemptions from the APPI’s consent requirements for certain uses of AI technologies, addressing what many view as a longstanding barrier to seamless data-sharing among enterprises. Although significant pushback from stakeholders is anticipated with respect to stronger enforcement measures, we expect that relaxation of consent requirements in the AI context is likely to proceed, given the government’s broader initiative to promote AI-driven business development.
Mainland China
Paul McKenzie, Partner, Data, Cyber + Privacy, and Tingting Gao, Associate, Data, Cyber + Privacy
China has now fully implemented its cross-border data transfer (CBDT) regime, with all three transfer mechanisms—the security assessment, standard contractual clauses filing, and certification—fully operational following the issuance of regulations implementing the certification process. Until now, the Cyberspace Administration of China (CAC) and other regulators have focused mainly on urging compliance with CBDT rules rather than penalizing non-compliance. In the next phase, look for more aggressive enforcement. We are already seeing that, with the recent imposition of an administrative penalty against fashion brand Dior and a court ruling against hotel group Accor for breaches of CBDT requirements.
Read More: China’s Data Regulator Significantly Relaxes CBDT Regime and China’s New CBDT Regime: One Year On
Gordon Milner, Partner, Data, Cyber + Privacy, and Tingting Gao, Associate, Data, Cyber + Privacy
Under China’s Data Security Law (DSL), “important data” is a key concept, and its processing is subject to stringent regulatory controls and compliance requirements, including those related to cross-border data transfers (CBDTs). Important data is intended to be identified through specific catalogues. Progress in developing these catalogues has been glacial since the DSL came into effect in 2021, creating uncertainty and practical challenges for international businesses operating in China. Over the past two years, regulators of several free trade zones have issued CBDT negative lists that offer local guidance on the scope of important data within designated sectors. We anticipate that, in 2026, the regulators will consolidate and build upon those regional efforts to develop nationally applicable catalogues that will define important data across key sectors. Regulators in the finance and automotive sectors have already made progress in this direction in 2025, and we expect this trend to extend to other sectors.
Read More: China’s New CBDT Regime: One Year On
Paul McKenzie, Partner, Data, Cyber + Privacy, and Tingting Gao, Associate, Data, Cyber + Privacy
The year 2026 will see more robust enforcement not only of China’s cross-border data transfer regime, but also of Personal Information Protection Law (PIPL), Data Security Law, and Cybersecurity Law obligations. This past year has seen regulators make great strides on the rulemaking front, issuing regulations to implement a range of discrete PIPL and other provisions. Regulations are now in effect governing the regulatory reporting of data breaches and cybersecurity incidents (effective in November 2025), the conduct of PIPL compliance audits (effective in May 2025), and the designation by foreign companies caught within PIPL’s ambit of local representatives in China (effective in January 2025). Regulators in 2026 will be looking to companies to comply with these regulations and taking enforcement measures against those that do not.
Gordon Milner, Partner, Data, Cyber + Privacy, Chuan Sun, Partner, Data, Cyber + Privacy, and Tingting Gao, Associate, Data, Cyber + Privacy
With the widespread adoption of AI technologies in China, many cases of AI misuse—including the abuse of personal information by AI service providers and users—have already surfaced. For example, in a typical case published by the Beijing Internet Court in September 2025, the court held that creating deepfake face-swap videos containing an influencer’s likeness without her consent constitutes an infringement of her personal information rights. On the privacy front, in 2026, we expect to see more vigorous enforcement actions in this space to tamp down some of the more excessive practices. This could result in useful court rulings clarifying the steps AI deployers are required to take to guard against infringements of personal information rights.
Hong Kong
Gordon Milner, Partner, Data, Cyber + Privacy, Chuan Sun, Partner, Data, Cyber + Privacy, and Zooey Chen, Associate, Data, Cyber + Privacy
Hong Kong recently enacted its first standalone cybersecurity law, the Protection of Critical Infrastructures (Computer Systems) Ordinance (Cap. 653), which will take effect on January 1, 2026. It empowers the authorities to designate operators in eight critical sectors (energy, IT, banking and finance, air, land, and maritime transport, healthcare, and telecommunications and broadcasting) and will require the designated operators to follow certain strict cybersecurity requirements. We anticipate the rollout of the new law throughout 2026, with operator designations, sector-specific codes, and incident reporting drills, together with publication of further details regarding the specific requirements, including setting up a security management unit, conducting annual risk assessments and biennial audits, preparing emergency response plans, and reporting security incidents.
Gordon Milner, Partner, Data, Cyber + Privacy, Chuan Sun, Partner, Data, Cyber + Privacy, and Zooey Chen, Associate, Data, Cyber + Privacy
The Hong Kong office of the Privacy Commissioner for Personal Data (PCPD) has indicated that it is considering reforms to the existing Personal Data (Privacy) Ordinance (PDPO) to bring the law into line with other jurisdictions. While progress has been slow, there is a possibility that piecemeal changes could be enacted in 2026 to introduce mandatory breach notification requirements and to update retention policies and fines.
Gordon Milner, Partner, Data, Cyber + Privacy, Chuan Sun, Partner, Data, Cyber + Privacy, and Zooey Chen, Associate, Data, Cyber + Privacy
In recent months, the Hong Kong PCPD has been actively promoting AI governance, issuing a checklist for using generative AI in the workplace and an anonymization guide to help organizations prepare data for AI use while complying with the PDPO. We expect this regulatory focus on AI governance to continue in 2026 and anticipate seeing the PCPD becoming more active in policing PDPO compliance by AI operators through investigations and enforcement actions as rapid advancements in AI continue to reshape data practices locally and globally.
India
Cynthia Rich, Senior Privacy Advisor, Data, Cyber + Privacy
In late 2025, India began a gradual rollout of the provisions of its Digital Personal Data Protection Act (the “Act”). Most of the substantive provisions of the Act and the recently issued Digital Personal Data Protection Rules 2025 are not expected to go into effect until May 2027. However, in light of heightened U.S.-India trade tensions, the Central Government could flex its authority under the Act by issuing draft rules that restrict cross-border data transfers or data-sharing with foreign governments as a negotiating tactic in order to secure more favorable trade deals with the United States and other foreign governments.
Sri Lanka
Cynthia Rich, Senior Privacy Advisor, Data, Cyber + Privacy
The substantive provisions of Sri Lanka’s Personal Data Protection Act were repealed in March 2025 before they went into full effect. This action was taken to give the data protection authority and organizations more time and flexibility to get ready, and to fix practical implementation issues around individual rights, data protection officers, data protection impact assessments, and cross-border transfers. An Amendment Act was subsequently enacted in late October 2025. Under the Amendment Act, all remaining provisions will come into operation on a date or dates appointed by the minister. As a result, we are likely to see some or all of the provisions come into effect in 2026.
Vietnam
Cynthia Rich, Senior Privacy Advisor, Data, Cyber + Privacy
Vietnam’s Personal Data Protection Law (PDPL) takes effect on January 1, 2026. This comprehensive privacy law imposes a wide range of new obligations on organizations covering, among other things, employee privacy; health, financial, and advertising business activities; biometric and location data; social networking platforms and online communication services; Big Data processing; artificial intelligence; blockchain; and cloud computing. The Ministry of Public Security (MPS) released a Draft Decree detailing several provisions of the PDPL for public consultation in October 2025, and these rules are likely to be finalized by the beginning of 2026. Organizations should therefore closely review their obligations under the PDPL and adjust their compliance programs accordingly.
Dillon Kraus, Associate, Data, Cyber + Privacy and Litigation
After years of explosive growth, privacy class actions will shift focus in 2026 as courts and legislatures push back on “no-injury” claims. Recent court rulings emphasize the need for plaintiffs to plead concrete harm, while proposed and enacted statutory amendments aim to curb boilerplate wiretap suits. Plaintiffs will pivot to narrower, fact-specific theories under statutes like the Electronic Communications Privacy Act (ECPA), California Invasion of Privacy Act (CIPA), and Biometric Information Privacy Act (BIPA), emboldened by recent high-value settlements for claims based on the use of web tracking technologies. We expect the next wave of filings to focus on data uses that courts view as genuinely sensitive, such as biometric and health data.
Jasmine Arooni, Associate, Data, Cyber + Privacy
Data security incident-related class actions are expected to grow more sophisticated in 2026 as plaintiffs’ firms refine their strategies and broaden the scope of alleged harm. Emerging claims will focus less on the breach itself and more on each company’s response, such as how quickly it disclosed the breach, how it communicated vital information, and what controls it failed to implement before the incident. This shift will turn post-incident conduct into a central feature of litigation risk, not just a regulatory concern.
Marijn Storm, Partner, Data, Cyber + Privacy
In 2026, life sciences companies will increasingly harness large-scale health, genetic, and biometric datasets to power AI-driven drug discovery and personalized care. However, regulators on both sides of the Atlantic will intensify their scrutiny of how sensitive health-related personal information is collected, processed, and reused for AI systems. Firms will face heightened expectations around notice and choice, bias mitigation, explainability, and deletion mechanisms, meaning data-driven innovation must be paired with privacy-by-design governance frameworks.
Read More: Data Privacy at the Crossroads
Carson Martinez, Associate, Data, Cyber + Privacy
After the Fifth Circuit vacated the Biden-era HIPAA Privacy Rule to Support Reproductive Health Care Privacy in June 2025, key federal protections for reproductive health data were removed. In response, states are stepping up to fill HIPAA’s gaps. Virginia, for example, amended its comprehensive consumer privacy law to prohibit obtaining, disclosing, or selling reproductive health data without consent. We expect the patchwork of legal protections for this sensitive data to increase in 2026 as the scrutiny of reproductive privacy intensifies.
Read More: HHS Releases Final Rule to Protect Reproductive Health Care Privacy
Melissa Crespo, Partner, Data, Cyber + Privacy
The U.S. Department of Health and Human Services (HHS) and its Office for Civil Rights (OCR) published proposed updates to the Security Rule on January 6, 2025. The proposal would modernize a two‑decade‑old framework by spelling out baseline safeguards: multi‑factor authentication, encryption, asset inventories, tested incident and contingency plans, and clearer oversight of business associates. Many of us expected a final version of the rule in 2025, but that timeline slipped as HHS worked through a substantial comment record. We expect a 2026 final rule that makes these elements the practical yardstick for what HHS considers “reasonable security” under HIPAA.
Melissa Crespo, Partner, Data, Cyber + Privacy
Several U.S. states are building health data regimes outside of the federal HIPAA framework, and the pace will accelerate in 2026. As of 2025, Virginia restricts collecting and sharing reproductive or sexual health information and provides a private right of action; New York’s Health Information Privacy Act (HIPA) has passed the legislature and, if signed, will take effect one year later and adopt a consent-first model; Washington’s My Health My Data Act (MHMDA) is already generating private litigation; and Nevada’s SB 370 is in force with consent and geofencing limits. We expect more states to see active AG enforcement of state health data laws in 2026. The challenges are scope and exceptions: some laws are narrow (reproductive health) while others are broad (consumer health), and some use narrow “necessary” carve-outs (HIPA) while others omit a general “necessary” exception and require consent across the board (Virginia), making the creation of a single national playbook challenging.
Linda Clark, Partner, Data, Cyber + Privacy, Carson Martinez, Associate, Data, Cyber + Privacy, and Katherine Wang, Associate, Data, Cyber + Privacy, Litigation
Looking ahead to 2026, regulators around the globe will continue paying close attention to developments in the collection of neural and related data as new neurotechnology and wearable tech become even more common, and especially as AI’s ability to infer mental states from everyday data grows increasingly sophisticated. We will see laws expanding existing privacy statutes as well as wholly new frameworks adding standalone protections against misuse or unexpected use of this deeply personal and complex information. As legislative models diverge, companies handling neural data will need to monitor and adapt to a patchwork of evolving requirements and will face uncertainty as regulators work to define clear rules for how this data can be collected, stored, and shared.
Read more: A MoFo Privacy Minute: Neural Data Added to Montana’s Genetic Information Privacy Act and More States Propose Privacy Laws Safeguarding Neural Data
Miriam Wugmeister, Partner, Data, Cyber + Privacy
Other parts of the U.S. government (in addition to the Food and Drug Administration (FDA) and the National Institutes of Health (NIH)) will issue guidance or rules relating to the interpretation of the DOJ Bulk Data Rules.
Josh Fattal, Associate, Data, Cyber + Privacy
In 2025, the DOJ Data Security Program took effect, establishing the first U.S. national security framework to prohibit or restrict access to certain sensitive personal data by China and other countries of concern. Since the program’s implementation, the FDA and NIH have issued press releases and policies that go beyond its initial requirements. In 2026, additional federal agencies may follow suit and impose further data usage and transfer restrictions on U.S. companies.
Read More: NIH Adopts Bulk Sensitive Data Policy