United States
The United States has issued regulations, recommendations, and guidance on Artificial Intelligence (AI). Companies subject to the laws of the United States should be familiar with all relevant AI-related regulations, recommendations, and guidance, including those listed below.
State Regulations, Recommendations, Guidance, and Other Resources
For laws, regulations, and/or other resources issued by U.S. state government authorities, click on a state in the map to view individual state pages.
Federal Laws
The TAKE IT DOWN Act criminalizes the publication of non-consensual intimate imagery (NCII), including AI-generated NCII (or “deepfake revenge pornography”), and requires social media and similar websites to remove such content within 48 hours of notice from a victim.
Federal Regulations, Recommendations, Guidance, and Other Resources
The Executive Order directs: 1) White House advisors to engage Congress on developing federal legislation to establish a “uniform Federal policy framework for AI” that would preempt many state AI laws; 2) the attorney general to create a task force to mount legal challenges to state AI laws; 3) the Department of Commerce to withhold funding from states identified as having “onerous” AI laws; and 4) all other agencies to assess whether they may condition receipt of funds under the discretionary grant programs they administer on states refraining from regulating AI.
The Genesis Mission will build an integrated AI platform to harness federal scientific datasets to train scientific foundation models and create AI agents to test new hypotheses, automate research workflows, and accelerate scientific breakthroughs. The Secretary of Energy will be responsible for implementing the Mission within the Department of Energy.
The AI Action Plan identifies over 90 Federal policy actions across three pillars that the Trump Administration plans to address: (i) Accelerating AI Innovation; (ii) Building American AI Infrastructure; and (iii) Leading in International AI Diplomacy and Security.
This Executive Order directs agency heads to procure only large language models (LLMs) that adhere to “Unbiased AI Principles”. White House Fact Sheet.
This Executive Order, among other things, directs the Secretary of Commerce to launch an initiative to provide financial support, such as loans, grants, and tax incentives, for select projects, and repeals President Biden’s Executive Order 14141. White House Fact Sheet.
This Executive Order requires the Secretary of Commerce to establish and implement the American AI Exports Program to support the development and deployment of U.S. full-stack AI export packages, which include hardware, data systems, AI models, cybersecurity measures, and applications for sectors such as healthcare, education, agriculture, and transportation. White House Fact Sheet.
This Executive Order revises President Biden’s January 16, 2025 Executive Order, “Strengthening and Promoting Innovation in the Nation’s Cybersecurity”.
This Executive Order seeks to promote AI literacy and proficiency among Americans by promoting the integration of AI into education, providing comprehensive AI training for educators, and fostering early exposure to AI concepts and technology to develop an AI-ready workforce and the next generation of American AI innovators.
Directs the development of an AI action plan within 180 days; an immediate review of all relevant policies, directives, regulations, orders, and other actions taken pursuant to the revoked EO 14141; and the revision or rescission of all policies, directives, regulations, orders, and other actions previously taken under the prior administration that are inconsistent with the current administration’s AI approach. In addition, it orders the revision of OMB Memoranda M-24-10 and M-24-18 within two months.
Directs federal agencies to identify at least three federal sites by February 2025 that could host frontier AI data centers and establishes a timeline for the solicitation, construction, and operation of new high-capacity data centers on federal lands.
Issued by President Biden, the Executive Order provides for a coordinated, federal government-wide approach to governing the development and use of AI safely and responsibly.
The Office of Management and Budget (OMB) memorandum provides guidance for federal use of AI and replaces OMB M-24-10.
The Office of Management and Budget (OMB) memorandum addresses AI acquisition and replaces the OMB M-24-18 on government purchasing of AI systems and services. It also sets a 200-day deadline for the General Services Administration and OMB to create a web-based repository of AI procurement tools.
The Office of Management and Budget (OMB) memorandum requires agencies to follow minimum practices when using safety-impacting AI and rights-impacting AI, and establishes a series of recommendations for managing AI risks in the context of Federal procurement.
The Office of Management and Budget (OMB) memorandum requires agencies to create or update acquisition policies, procedures, and practices to reflect new responsibilities and governance for AI, as established by the OMB.
The Office of Management and Budget (OMB) memorandum, issued to executive departments and agencies pursuant to President Trump’s July 23, 2025 Executive Order 14319, Preventing Woke AI in the Federal Government, directs agencies to ensure that Large Language Models (LLMs) procured by the Federal Government produce reliable outputs free from harmful ideological biases or social agendas.
This Policy Statement clarifies that certain activities involving advanced computing integrated circuits (ICs) and related commodities used to train AI models may require an export authorization under the Export Administration Regulations. The Policy Statement applies to exporters, re-exporters, transferors, and U.S. persons engaged in transactions or support involving items such as servers and chips.
The voluntary AI RMF is designed to equip AI actors with approaches that increase the trustworthiness of AI systems, and to help foster the responsible design, development, deployment, and use of AI systems over time.
The guidance applies to the development and deployment of Generative AI (“GenAI”) technologies, including large language models and cloud-based services. It provides recommendations and obligations for organizations involved in designing, developing, using, or evaluating AI systems to manage risks and ensure trustworthiness.
The Guidelines focus on secure software development practices for GenAI and dual-use foundation models and are applicable to AI model producers, AI system producers, and AI system acquirers, addressing the entire AI model development lifecycle, including data sourcing, design, training, fine-tuning, evaluation, and integration into software systems. The Guidelines build on the Secure Software Development Framework version 1.1 and provide recommendations and considerations, such as securing code storage, managing model versioning and lineage, and clarifying shared responsibilities among organizations.
The Guidance applies to all sectors involved in AI-related activities, including standards development organizations, industry, academia, civil society, and foreign governments. It covers AI standards across all scopes, both horizontal (cross-sectoral) and vertical (sector-specific). The Guidance recommends several actions, including engaging in standards work, encouraging diverse stakeholder participation, and promoting global alignment on AI standards. The Guidance prioritizes scientifically sound, accessible AI standards that reflect the needs of diverse global stakeholders.
This paper outlines proposed AI use cases for the control overlays to manage cybersecurity risks in the use and development of AI systems and includes next steps. The use cases address generative AI, predictive AI, single and multi-agent AI systems, and controls for AI developers.
Guidance on practitioner use of AI
- Guidance on Use of Artificial Intelligence-Based Tools in Practice Before the USPTO
- Director guidance on party and practitioner misconduct related to use of AI
Inventorship
- Revised Inventorship Guidance for AI-Assisted Inventions (November 26, 2025)
This Guidance rescinds February 2024 guidance on this topic and implements the January 23, 2025 Executive Order 14179, “Removing Barriers to American Leadership in Artificial Intelligence.” The Guidance, applicable to utility, design, and plant patents, underscores longstanding precedent that the same legal standard for determining inventorship applies to all inventions, regardless of whether AI systems were used in the inventive process.
- Inventorship Guidance for AI-assisted Inventions (February 2024)(Rescinded)
- Transaxle for Remote Control Car (Example 1)
- Developing a Therapeutic Compound for Treating Cancer (Example 2)
Subject matter eligibility
- 2024 Guidance Update on Patent Subject Matter Eligibility, Including on Artificial Intelligence (2024 AI SME update) (Effective July 17, 2024)
- AI-related SME examples 47-49 issued in 2024
Compliance with 35 U.S.C. 112
Guidance on disclosure requirements for computer-implemented functional claim limitations.
General guidance for examining means-plus-function (35 U.S.C. 112(f)) limitations. MPEP 2181(II)(B) provides guidance on the description necessary to support a claim limitation that invokes 35 U.S.C. 112(f).
Guidance discusses functional limitations that do not invoke 35 U.S.C. 112(f).
Artificial Intelligence Patent Dataset (AIPD)
United States patents and pre-grant publications that include AI
PTAB and USPTO Petition Decisions Pertaining To AI
- PTAB Decision - Ex parte Hannun (formerly Ex parte Linden), 2018-003323 (April 1, 2019)
- Petition Decision – In re Appl. No. 16/524,350 (“DABUS”) (Inventorship limited to natural persons).
A Department initiative to solidify U.S. leadership in safe and trustworthy AI systems for scientific discovery, energy research, and national security.
A voluntary framework which includes recommendations for the safe and secure development and deployment of AI in critical infrastructure. This resource was developed by and for entities at each layer of the AI supply chain, including cloud and compute providers, AI developers, and critical infrastructure owners and operators.
Designed to help government officials improve the delivery of services through the responsible and effective deployment of generative AI technologies.
This guidance provides background on tenant screening companies, explains how the Fair Housing Act applies to both housing providers and tenant screening companies, describes common fair housing issues, and suggests how to avoid discriminatory screenings. This guidance covers screening practices with varying levels of human involvement and automation, including machine learning and other forms of AI.
The guidance explains how the Fair Housing Act applies to the advertising of housing, credit, and other real estate-related transactions through digital platforms. In particular, it addresses the increasingly common use of automated systems, such as algorithmic processes and AI, to facilitate advertisement targeting and delivery.
How DOL is Using AI:
This report provides insights into the current state of AI-related cybersecurity and fraud risks in the financial services sector, and best practice recommendations for managing those risks.
This report follows Treasury’s issuance of its 2024 Request for Information on the Uses, Opportunities, and Risks of AI in Financial Services. The Report highlights increasing AI use throughout the financial sector and underscores the potential for AI to broaden opportunities while amplifying certain risks, including those related to data privacy, bias, and third-party providers.
This Office regularly posts articles examining issues involving technology, consumer protection, and competition.
The rule defines materially misrepresenting that a reviewer exists as an unfair or deceptive act, which would cover AI-generated fake reviews. Among other things, the rule prohibits selling or purchasing fake consumer reviews or testimonials, buying positive or negative consumer reviews, certain insiders creating consumer reviews or testimonials without clearly disclosing their relationships, creating a company-controlled review website that falsely purports to provide independent reviews, certain review suppression practices, and selling or purchasing fake indicators of social media influence.
The National Security Agency (NSA) issued this joint report together with the Cybersecurity and Infrastructure Security Agency (CISA), the Australian Signals Directorate’s Australian Cyber Security Centre (ASD’s ACSC), and others. The Report describes different ways that AI can be integrated into operational technology (“OT”) and outlines four principles critical infrastructure owners and operators should follow to both leverage the benefits and minimize the risks of integrating AI into OT environments.
Legislative Branch Reports
The U.S. Government Accountability Office (GAO) published this report, which details financial institutions’ use of AI technology and the potential privacy concerns associated with using such technology.
Related Resources
Client Alert: Deputy Attorney General Lisa Monaco Announces DOJ’s Approach to AI
Client Alert: BIS Proposes Rule to Address Challenges of Artificial Intelligence and Malicious Cyber-Enabled Activities
Client Alert: SEC Targets "AI Washing" With Two New Settled Cases
Client Alert: FTC Gears Up for AI Enforcement: No Brakes in Sight
