Trump Administration Releases National AI Policy Framework

02 Apr 2026
Client Alert

The Trump administration released its National Policy Framework for Artificial Intelligence on March 20, 2026, outlining recommendations intended to establish a nationally uniform approach to AI regulation (the “Framework”). The recommendations to Congress span seven pillars: (1) child protection, (2) AI infrastructure and small business support, (3) intellectual property, (4) censorship and free speech, (5) enabling innovation, (6) workforce preparation, and (7) preemption of state AI laws. The Framework reflects a policy preference for a sector-specific, federally led regulatory model with significant preemption of state AI laws. It builds on the December 2025 Executive Order (EO) on AI, which directed the administration to develop legislative recommendations for Congress while deploying executive tools in the interim, including an AI Litigation Task Force, to identify and, where appropriate, challenge state AI laws viewed as inconsistent with federal policy.

The Framework calls on Congress to preempt state AI laws that “impose undue burdens,” while preserving state authority in certain areas, including traditional police powers to protect children and prevent fraud, state zoning authority over AI infrastructure, and requirements governing a state’s own use of AI in procurement and public services. The recommendations largely track priorities identified in the December EO, such as safeguarding children, free speech, and communities, and maintain similar carve-outs from state preemption. The Framework also addresses new areas, including intellectual property, workforce development and AI education, small business support, residential ratepayer protections from data center costs, a prohibition on creating a new federal AI regulator, and the establishment of regulatory sandboxes.

These recommendations reflect the administration’s legislative priorities and provide a recommended roadmap for potential congressional action. Companies developing, deploying, or investing in AI should evaluate how these recommendations may shape the regulatory environment in the near term while continuing to monitor regulatory developments at the state level.

Overview of the Framework’s Key Recommendations

I. “Protecting Children and Empowering Parents”

The Framework recommends measures focused on minors, including parental control tools, age assurance requirements, and safeguards to mitigate risks of exploitation and self-harm. The administration recommends avoiding broad content standards that could give rise to excessive litigation. It also emphasizes preserving state authority to enforce generally applicable child protection laws. As lawmakers are already pursuing AI-related child-safety legislation, including the bipartisan GUARD Act and the Youth AI Privacy Act, companies that offer consumer-facing AI tools should expect this area to remain a focus of both federal and state policymaking.

II. “Safeguarding and Strengthening American Communities”

The Framework includes proposals related to AI infrastructure, as well as measures to enhance enforcement against AI-enabled fraud, support small business adoption of AI tools, and mitigate risks from advanced AI models. The infrastructure proposals include limiting potential increases in residential electricity rates from data centers and streamlining permitting processes. Because Congress is already considering bills addressing AI-enabled fraud and small business adoption, these proposals align with areas where more targeted legislative activity is already underway and may continue to develop.

III. “Respecting Intellectual Property Rights and Supporting Creators”

The administration takes the position that “training of AI models on copyrighted material does not violate copyright laws” but “acknowledges arguments to the contrary exist and therefore supports allowing the Courts to resolve this issue.” It recommends that Congress avoid taking actions that would “impact the judiciary’s resolution of whether training on copyrighted material constitutes fair use,” and that Congress enable collective licensing frameworks, establish protections against unauthorized digital replicas, and monitor legal developments around copyright. Given the parallel push in Congress for both training transparency and digital-replica protections, this section appears to leave open the possibility of more targeted legislation addressing disclosure, licensing, and digital replicas.

IV. “Preventing Censorship and Protecting Free Speech”

The administration recommends that Congress limit government actions that would “coerce” technology providers to “ban, compel, or alter content based on partisan or ideological agendas,” and establish mechanisms for individuals to seek redress from the federal government for alleged “agency efforts to censor expression on AI platforms.”

V. “Enabling Innovation and Ensuring American AI Dominance”

The administration recommends that Congress establish regulatory sandboxes for AI applications, expand access for industry and academia to federal datasets for AI training, and rely on existing regulatory agencies with subject matter expertise and industry-led standards to oversee sector-specific AI applications instead of creating a new federal AI regulator.

VI. “Educating Americans and Developing an AI-Ready Workforce”

The administration recommends that Congress support workforce development initiatives, including integrating AI training into existing education and workforce programs, studying AI-related labor market impacts, and supporting land-grant institutions. AI education and workforce measures have also been the subject of ongoing bipartisan legislative interest, which may support incremental policy development.

VII. “Establishing a Federal Policy Framework and Preempting State AI Laws”

The Framework recommends that Congress preempt state AI laws that “impose undue burdens,” while preserving state authority in key areas, including:

  • Generally applicable laws (e.g., child protection, fraud prevention, and consumer protection);
  • Zoning and AI infrastructure siting; and
  • State use of AI in procurement and public services.

It also suggests limiting state laws that (i) regulate AI development because it is “inherently interstate” in nature, (ii) restrict otherwise lawful AI-enabled activity, or (iii) impose liability on AI developers for certain third-party uses.

Looking Ahead

These recommendations reflect the administration’s federal policy direction and may influence emerging legislative proposals in Congress. Even so, efforts to preempt state AI regulation have already faced pushback from state lawmakers, and many states have continued to advance their own AI regulatory frameworks. Companies should therefore monitor ongoing state-level activity even as federal preemption proposals are pending.

Companies should consider the following steps:

  • Evaluate products used by children, including age assurance, parental controls, and safety-by-design features;
  • Track litigation and emerging frameworks on AI training and copyright;
  • Assess exposure to deepfake and impersonation risks, including potential liability and mitigation strategies; and
  • Prepare for a dual regime where federal law may preempt some state requirements but traditional state laws remain highly relevant.

Maya Vishwanath, an AI Analyst at Morrison Foerster, contributed to this alert.

We are Morrison Foerster — a global firm of exceptional credentials. Our clients include some of the largest financial institutions, investment banks, and Fortune 100, technology, and life sciences companies. Our lawyers are committed to achieving innovative and business-minded results for our clients, while preserving the differences that make us stronger.

Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations. Prior results do not guarantee a similar outcome.