Outlook on DHS Framework for AI in Critical Infrastructure

09 Jan 2025
Client Alert

As critical industry sectors such as energy, financial services, and healthcare continue to use AI in new ways, the U.S. federal government is stepping in with guidance to enhance AI-related safety and security. In November 2024, the Department of Homeland Security (DHS) released “Roles and Responsibilities Framework for Artificial Intelligence in Critical Infrastructure,” a voluntary framework developed in collaboration with industry leaders that provides tailored recommendations for key players in the AI ecosystem to protect critical infrastructure (the “DHS Framework”).

Here are some key takeaways regarding the framework:

  • The DHS Framework moves AI governance forward with targeted and practical guidance: Whereas prior U.S. federal government-published AI frameworks have offered recommendations for how U.S.-based organizations should conceptualize AI-related risks generally, the DHS Framework is more targeted: it identifies the specific stakeholders involved in the development and use of AI in critical infrastructure, and offers recommendations within each stakeholder category that those organizations can immediately put into practice.
  • Like the EU AI Act, the DHS Framework focuses on the area it has identified as presenting the highest risk for AI: The DHS Framework’s focus on critical infrastructure shows where the federal government’s priorities lie when it comes to AI safety and security. Similarly, the EU AI Act, Europe’s comprehensive AI regulation, highlights high-risk AI use cases, among them AI used as a safety component in the management and operation of critical infrastructure. The EU AI Act and the DHS Framework thus both address the use of AI in critical infrastructure, but in different ways.
  • The two instruments take different approaches: The DHS Framework provides targeted and practical recommendations, whereas the EU AI Act imposes binding legal requirements for risk management, which must be made concrete by the parties creating, modifying, and using AI. The EU AI Act also goes further by outright prohibiting certain use cases deemed too risky, such as unjustified forms of social scoring or subliminal, manipulative, or deceptive techniques that distort people’s behavior in a way that causes harm.

Below, we provide a summary of the DHS Framework, a breakdown of how it fits into the landscape of existing AI frameworks, and our analysis of its outlook under the second Trump administration.

Background on the DHS Framework for AI in Critical Infrastructure

The DHS Framework provides recommendations for each layer of the AI supply chain to ensure that AI is deployed safely and securely in U.S. critical infrastructure. The DHS Framework was created in consultation with the Artificial Intelligence Safety and Security Board, an advisory committee that was established by DHS Secretary Alejandro N. Mayorkas in response to President Biden’s 2023 executive order on the development and use of AI. The board’s members include the CEOs of leading technology and critical infrastructure companies, as well as members of civil society.

How the DHS Framework Compares to Existing Approaches to AI in Critical Infrastructure

Within the U.S.: The DHS Framework is the first AI framework specific to U.S. critical infrastructure. In 2023, the National Institute of Standards and Technology (NIST) published its AI Risk Management Framework, which is a higher-level educational tool, meant to be used by all kinds of organizations to frame the risks involved in their use of AI and to put in place governance mechanisms for AI. The DHS Framework builds on this by setting out specific measures that key entities should take to protect critical infrastructure in relation to AI.

Within the EU: The EU has taken a more aggressive stance on AI governance with the EU AI Act, which entered into force in August 2024 and will become fully applicable over the following two years. The EU AI Act takes a risk-based approach, assigning AI use cases to categories based on their risk levels and setting specific requirements for each. High-risk applications, which include AI systems deployed in critical infrastructure, are subject to strict obligations before they can be put on the market, such as ensuring adequate risk assessment and mitigation systems, traceability of results, and appropriate human oversight measures. While it places a similarly high emphasis on the risks involved and the need for safety measures in the use of AI in critical infrastructure, the DHS Framework remains voluntary. Because the U.S. is home to many of the AI industry’s key players, the federal government has so far generally taken a lighter-touch approach to regulating AI, opting in many cases for collaborative industry guidelines.

Overview of the DHS Framework for AI in Critical Infrastructure

The DHS Framework identifies three main categories of AI-related safety and security vulnerabilities in critical infrastructure: (1) attacks using AI; (2) attacks targeting AI systems; and (3) AI design and implementation failures. To mitigate these vulnerabilities, the framework assigns voluntary responsibilities for the safe and secure use of AI in U.S. critical infrastructure across five key roles:

  • Cloud and compute infrastructure providers,
  • AI developers,
  • Critical infrastructure owners and operators,
  • Civil society (organizations distinct from industry and government, such as nonprofits, labor unions, and academia), and
  • The public sector (government agencies and government-controlled entities).

The DHS Framework evaluates these roles across five responsibility areas: securing environments, driving responsible model and system design, implementing data governance, ensuring safe and secure deployment, and monitoring performance and impact for critical infrastructure. Finally, the DHS Framework recommends actions to enhance safety and security for each of the key stakeholders involved in supporting the development and deployment of AI in critical infrastructure. For many of its recommendations, the DHS Framework cites technical resources that provide further specifics on their implementation. The DHS Framework’s recommendations include, for example:

  • Cloud and compute infrastructure providers: The DHS Framework encourages cloud and compute infrastructure providers to review hardware and software in the supply chain to ensure the reliability and security of their components; establish vulnerability management; conduct systems testing; and use encryption to reduce the risk that personal or confidential customer data used to train or fine-tune AI models is exposed, leaked, or attacked.
  • AI developers: The DHS Framework recommends that AI developers test for possible biases, failure modes, and vulnerabilities, clearly identify AI-generated or manipulated content where technically feasible and commercially reasonable, and support independent assessments for models that present heightened risks to critical infrastructure systems and consumers.
  • Critical infrastructure owners and operators: The DHS Framework recommends several practices focused on the deployment of AI systems, such as implementing strong cybersecurity practices to maintain controls for AI systems and providing meaningful transparency regarding the use of AI to provide goods, services, or benefits to the public.

The DHS Framework references the same 16 sectors of the economy that DHS’s Cybersecurity and Infrastructure Security Agency (CISA) defined as critical infrastructure when promulgating cyber incident reporting rules in 2024, including the communications sector, energy sector, and financial services sector. By providing recommendations for such a broad group of organizations, DHS is making clear that it considers a large portion of the private sector to be “critical infrastructure,” and is encouraging these entities to address risks accordingly.

Outlook Under the Second Trump Administration

Protecting U.S. critical infrastructure by securing supply chains from foreign participation was an area of focus in President-elect Trump’s first term, and securing strategic independence from China and bringing critical supply chains to the U.S. were part of Trump’s 2024 platform. Accordingly, the Trump administration may elect to carry these efforts forward.

More importantly, the DHS Framework provides the private sector with actionable recommendations that companies can choose to adopt directly. Industry participation in the development of the DHS Framework shows a desire on the private sector’s part to collaborate on standards and to have AI safety measures in place. Whether through continued government directives or through the private sector picking up the baton on its own, the DHS Framework’s recommendations may become industry standard.

We are Morrison Foerster — a global firm of exceptional credentials. Our clients include some of the largest financial institutions, investment banks, and Fortune 100, technology, and life sciences companies. Our lawyers are committed to achieving innovative and business-minded results for our clients, while preserving the differences that make us stronger.

Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations. Prior results do not guarantee a similar outcome.