The Trump administration released its National Policy Framework for Artificial Intelligence on March 20, 2026, outlining recommendations intended to establish a nationally uniform approach to AI regulation (the “Framework”). The recommendations to Congress span seven pillars: (1) child protection, (2) AI infrastructure and small business support, (3) intellectual property, (4) censorship and free speech, (5) enabling innovation, (6) workforce preparation, and (7) preemption of state AI laws. The Framework reflects a policy preference for a sector-specific, federally led regulatory model with significant preemption of state AI laws. It builds on the December 2025 Executive Order (EO) on AI, which directed the administration to develop legislative recommendations for Congress while, in the interim, deploying executive tools, including an AI Litigation Task Force, to identify and, where appropriate, challenge state AI laws viewed as inconsistent with federal policy.
The Framework calls on Congress to preempt state AI laws that “impose undue burdens,” while preserving state authority in certain areas, including traditional police powers to protect children and prevent fraud, state zoning authority over AI infrastructure, and requirements governing a state’s own use of AI in procurement and public services. The recommendations largely track priorities identified in the December EO, such as safeguarding children, free speech, and communities, and maintain similar carve-outs from state preemption. The Framework also addresses new areas, including intellectual property, workforce development and AI education, small business support, residential ratepayer protections from data center costs, a prohibition on creating a new federal AI regulator, and the establishment of regulatory sandboxes.
These recommendations reflect the administration’s legislative priorities and provide a roadmap for potential congressional action. Companies developing, deploying, or investing in AI should evaluate how these recommendations may shape the regulatory environment in the near term while continuing to monitor regulation at the state level.
The Framework recommends measures focused on minors, including parental control tools, age assurance requirements, and safeguards to mitigate risks of exploitation and self-harm. The administration recommends avoiding broad content standards that could give rise to excessive litigation. It also emphasizes preserving state authority to enforce generally applicable child protection laws. As lawmakers are already pursuing AI-related child-safety legislation, including the bipartisan GUARD Act and the Youth AI Privacy Act, companies that offer consumer-facing AI tools should expect this area to remain a focus of both federal and state policymaking.
The Framework includes proposals related to AI infrastructure, as well as measures enhancing enforcement against AI-enabled fraud, supporting small business adoption of AI tools, and mitigating risks from advanced AI models. The infrastructure proposals include limiting potential increases in residential electricity rates attributable to data centers and streamlining permitting processes. Because Congress is already considering bills addressing AI-enabled fraud and small business adoption, these proposals align with areas where more targeted legislative activity is underway and may continue to develop.
The administration takes the position that “training of AI models on copyrighted material does not violate copyright laws” but “acknowledges arguments to the contrary exist and therefore supports allowing the Courts to resolve this issue.” It recommends that Congress avoid actions that would “impact the judiciary’s resolution of whether training on copyrighted material constitutes fair use,” while enabling collective licensing frameworks, establishing protections against unauthorized digital replicas, and monitoring legal developments around copyright. Given the parallel push in Congress for both training transparency and digital-replica protections, this section appears to leave open the possibility of more targeted legislation addressing disclosure, licensing, and digital replica protections.
The administration recommends that Congress limit government actions that would “coerce” technology providers to “ban, compel, or alter content based on partisan or ideological agendas,” and establish mechanisms for individuals to seek redress from the federal government for alleged “agency efforts to censor expression on AI platforms.”
The administration recommends that Congress establish regulatory sandboxes for AI applications, expand access for industry and academia to federal datasets for AI training, and rely on existing regulatory agencies with subject matter expertise and industry-led standards to oversee sector-specific AI applications instead of creating a new federal AI regulator.
The administration recommends that Congress support workforce development initiatives, including integrating AI training into existing education and workforce programs, studying AI-related labor market impacts, and supporting land-grant institutions. AI education and workforce measures have also been the subject of ongoing bipartisan legislative interest, which may support incremental policy development.
The Framework recommends that Congress preempt state AI laws that “impose undue burdens,” while preserving state authority in key areas, including:

- traditional police powers to protect children and prevent fraud;
- state zoning authority over AI infrastructure; and
- requirements governing a state’s own use of AI in procurement and public services.
The Framework also suggests limiting state laws that (i) regulate AI development, on the ground that such development is “inherently interstate” in nature, (ii) restrict otherwise lawful AI-enabled activity, or (iii) impose liability on AI developers for certain third-party uses.
These recommendations reflect the administration’s federal policy direction and may influence emerging legislative proposals in Congress. Even so, efforts to preempt state AI regulation have already faced pushback from state lawmakers, and many states have continued to advance their own AI regulatory frameworks. Companies should therefore monitor ongoing state-level activity even as federal preemption proposals are pending.
Companies should consider the following steps:
Maya Vishwanath, an AI Analyst at Morrison Foerster, contributed to this alert.