President Trump issued an Executive Order (EO) on “Ensuring a National Policy Framework for Artificial Intelligence” on December 11, 2025. The EO seeks to scale back and discourage state AI regulations that conflict with the administration’s preference for a lighter-touch AI regulatory agenda. The final EO revises an initial draft that was delayed after it appeared in news reports and drew opposition from many in Congress, including Republican lawmakers.
The EO directs White House advisors to engage Congress on developing federal legislation to establish a “uniform Federal policy framework for AI” that would preempt state AI laws. The final EO places express limits on the call for legislation that were absent from the initial EO draft, stating that federal legislation should exempt certain categories of state AI laws from preemption: child safety protection, AI compute and data center infrastructure, state government procurement and use of AI, and “other topics as shall be determined.” The EO calls for federal legislation that imposes a “minimally burdensome national standard” that ensures that “children are protected, censorship is prevented, copyrights are respected, and communities are safeguarded.” Even with the president’s formal support for legislation, to date there has been little consensus in Congress on what provisions a national AI regulation bill should contain, and it is unclear whether the EO will change that dynamic.
Perhaps in appreciation of the uphill battle in Congress, the EO pushes federal agencies to use existing authorities where possible. It directs the Federal Communications Commission (FCC) to consider adopting a “Federal reporting and disclosure standard for AI models that preempts conflicting State laws.” The EO also calls on the chairman of the Federal Trade Commission (FTC) within 90 days to issue a policy statement detailing how the FTC Act’s prohibition on unfair and deceptive acts or practices[1] applies to AI models.
In an effort to roll back state regulation, the EO directs the attorney general to create a Department of Justice (DOJ) task force to mount legal challenges to state AI laws “on grounds that such laws unconstitutionally regulate interstate commerce, are preempted by existing Federal regulations, or are otherwise unlawful.” To help lay the groundwork for federal preemption litigation under existing law, the EO also directs the Department of Commerce within 90 days to identify and publish a list of state AI laws that conflict with the lighter-touch AI regulatory framework the Trump administration favors and refer such laws to the task force to potentially challenge in court. Under the EO, the Department of Commerce may identify state laws that promote AI innovation consistent with the national policy framework that the EO establishes.
Finally, in an effort to use the power of the federal purse to incentivize states to conform, the EO directs the Department of Commerce, which approves and administers Broadband Equity, Access, and Deployment (BEAD) Program grants, to withhold such funding from states identified as having “onerous” AI laws. All other agencies must also assess whether they may condition receipt of funds under the discretionary grant programs they administer on states refraining from regulating AI.
To date, no legislative proposal to establish a federal AI regulatory framework that garners broad support in Congress has materialized. Section 8 of the EO attempts to change that dynamic by directing White House advisors to engage Congress on federal legislation to establish a “uniform Federal policy framework for AI” that would preempt state AI laws. The EO notes that the federal legislation should not propose preempting state AI laws relating to child safety protections, AI compute and data center infrastructure (other than generally applicable permitting reforms), state government procurement and use of AI, and “other topics as shall be determined.”
Congress earlier this year considered but failed to pass a so-called moratorium on state AI regulation. These proposals have come in two flavors: (1) an explicit federal ban on states regulating AI for a certain period of time (which some have argued would be unconstitutional under the so-called “anti-commandeering” doctrine); and (2) legislation conditioning states’ receipt of federal funds potentially related to AI (e.g., under the BEAD Program, discussed above) on their agreement to refrain from AI regulation. Republican leaders in the House of Representatives have proposed including some type of moratorium in the annual National Defense Authorization Act (NDAA), but enactment of such a provision would likely face long odds, given that the Senate soundly rejected moratorium proposals only a few months ago.
The EO appears to be a reaction to the states, such as California and Colorado, that have stepped into the vacuum left by Congress and enacted wide-ranging AI legislation. President Trump has portrayed these developments as hindering U.S. leadership in AI, arguing that they risk a patchwork of different AI laws across all 50 states where “the most restrictive state of all will be the one that rules.” To promote AI innovation, the president has argued, the United States “need[s] one common sense Federal standard that supersedes all states.” The EO represents a formal policy manifestation of the Trump administration’s goal to “remove barriers to and encourage adoption of AI applications across sectors.”
The EO directs the Department of Commerce within 90 days to identify and publish a list of state AI laws that conflict with the lighter-touch AI regulatory framework the Trump administration favors and refer such laws to the task force to potentially challenge in court. The EO specifically mentions “laws that require AI models to alter their truthful outputs, or that may compel AI developers or deployers to disclose or report information in a manner that would violate the First Amendment or any other provision of the Constitution.” The EO specifically calls out recent legislation passed in Colorado as an example of an “onerous” state AI law. It is not yet clear which laws will appear on the list, or whether more recent state AI laws, such as California Senate Bill 243 and New York’s AI Companion Models Law, will be included when the list is published within 90 days. That list will reveal which types of state AI laws the Trump administration is most inclined to challenge. The EO also notes that the Department of Commerce may identify state laws that promote AI innovation consistent with the national policy framework that the EO establishes.
The EO directs the attorney general to create a DOJ task force to mount legal challenges to state AI laws “on grounds that such laws unconstitutionally regulate interstate commerce, are preempted by existing Federal regulations, or are otherwise unlawful.” Under the so-called Dormant Commerce Clause doctrine, states may not enact laws that unduly burden interstate commerce, and courts have at times invoked that doctrine to strike down stringent state health and safety laws. Under either theory, however, absent new federal legislation or regulation, it remains uncertain to what extent existing state AI laws could be invalidated.
The EO directs the FCC to consider adopting a “Federal reporting and disclosure standard for AI models that preempts conflicting State laws.” The EO offers no additional details here, but earlier this year, the chairman of the FCC publicly floated the idea of leveraging Section 253 of the Communications Act—which provides that no state or local law or regulation may “prohibit or have the effect of prohibiting the ability of any entity to provide any interstate or intrastate telecommunications service”[2]—to preempt state AI laws that “effectively prohibit the provision of telecom services.” It is therefore possible the FCC will promulgate a regulation pursuant to Section 253 or other authorities.
The EO also calls on the chairman of the FTC within 90 days to issue a policy statement detailing how the FTC Act’s prohibition on unfair and deceptive acts or practices[3] applies to AI models. The EO calls out “State laws that require alterations to the truthful outputs of AI models.” In a previous EO, President Trump sharply criticized “ideological biases or social agendas [being] built into AI models” and ordered that the federal government should not “procure models that sacrifice truthfulness and accuracy to ideological agendas.”
To create financial pressure for deregulation at the state level, the EO directs the Department of Commerce, which approves and administers Broadband Equity, Access, and Deployment (BEAD) Program grants, to withhold such funding from states identified as having unduly restrictive AI laws. All other agencies must also assess whether they may condition receipt of funds under the discretionary grant programs they administer on states refraining from regulating AI.
The legality of withholding federal funds from states that have passed AI laws the Trump administration disfavors is uncertain and likely to be tested in the courts. States that lose funding will almost certainly pursue litigation to challenge the action. The Supreme Court has recognized that Congress’s wide latitude to appropriate federal funds and incentivize states to adopt certain policies by conditioning the receipt of federal funds is subject to certain limits, including that:

- the spending must be in pursuit of the general welfare, and any conditions on the states’ receipt of funds must be stated unambiguously;[4]
- the conditions must be related to the federal interest in the particular national program;[5] and
- the financial inducement offered may not be unconstitutionally coercive.[6]

On the last point, the Supreme Court has explained that an inducement crosses the line when financial “pressure turns into compulsion”[7] and amounts to “a gun to the head” of the states.[8]
In addition to these constitutional limitations, where a statute specifies conditions that recipients of such funding must satisfy, the executive branch is required to ensure that distribution of the funds adheres to those criteria so that the program does not exceed its statutory authority. The executive branch has sometimes imposed conditions on the receipt of federal funding in addition to those specified in the authorizing statute. In these circumstances, courts have generally required the executive branch to abide by the same limits the Supreme Court has imposed on Congress’s ability to condition receipt of federal funds.[9]
Both the constitutional and statutory issues are likely to be the subject of litigation.
Case law provides little guidance on what qualifies as a germane and non-coercive condition on federal funding. The administration’s attempt to impose conditions on BEAD funds may invite challenges over the scope of its authority, an area where courts have historically been divided. The administration may point to a broad requirement in the statute that states BEAD Program recipients must ensure that subgrantees (i.e., municipalities) are “capable of carrying out activities funded by the subgrant in a competent manner in compliance with all applicable Federal, State, and local laws.”[10] As the Trump administration promulgates more regulations attempting to establish a permissive regulatory environment for AI at the federal level, it may also argue that “compliance with Federal laws” should be interpreted to require not enacting burdensome state AI laws at odds with those federal regulations. The first Trump administration attempted a similar maneuver in withholding certain federal criminal justice funding from “sanctuary city” jurisdictions, and federal courts of appeals were split on the legality of this approach.[11]
Given the uncertain legal authority for many of the EO’s proposed mechanisms to strike down state AI laws, AI developers and deployers will need to continue to work toward compliance with state AI laws and prepare for reporting and auditing requirements. Even where not required by law, AI safety requirements (e.g., notices to consumers that they are engaging with an AI companion) are quickly becoming the industry norm. In addition, many AI deployers and developers are subject to the requirements of the EU AI Act. State and especially foreign laws regulating AI are unlikely to disappear anytime soon, despite the EO’s best efforts.
Maya Vishwanath, an AI Analyst at Morrison Foerster, contributed to this alert.
[1] 15 U.S.C. § 45.
[2] 47 U.S.C. § 253(a).
[3] 15 U.S.C. § 45.
[4] South Dakota v. Dole, 483 U.S. 203, 207–09 (1987).
[5] Id. at 209.
[6] Dole, 483 U.S. at 211; Nat’l Fed’n of Indep. Bus. v. Sebelius, 567 U.S. 519, 580 (2012).
[7] 483 U.S. at 211.
[8] 567 U.S. at 581.
[9] See, e.g., New York v. United States Dep’t of Health & Hum. Servs., 414 F. Supp. 3d 475, 566–72 (S.D.N.Y. 2019).
[10] 47 U.S.C. § 1702(g)(2)(A)(i).
[11] Compare City of Providence v. Barr, 954 F.3d 23, 45 (1st Cir. 2020); City of Philadelphia v. U.S. Att’y Gen., 916 F.3d 276, 291 (3d Cir. 2019); City of Chicago v. Barr, 961 F.3d 882, 909 (7th Cir. 2020) with State of New York v. Dep’t of Just., 951 F.3d 84, 110 (2d Cir. 2020).


