Red Flags Everywhere! – Ten Risks for Directors – Week 1

04 Mar 2026
Client Alert

Each week for the next 10 weeks, we will publish an installment of our Red Flags Everywhere! series, highlighting key risk areas that public companies and their boards of directors should keep top of mind.

This series will serve as a lead-up to MoFo’s upcoming Red Flags Everywhere Tabletop program, taking place in our Palo Alto office on May 7. Members of our Securities Litigation, Employment and Labor, and Capital Markets Groups will guide attendees through a ripped-from-the-headlines fact pattern designed to spark interactive discussion and practical analysis that will be valuable to every board advisor.

This week, we focus on AI oversight and the board’s responsibility to oversee the company’s rapidly evolving AI-driven risks and strategic opportunities, while ensuring adherence to applicable legal and ethical frameworks.

If you are interested in learning more about MoFo’s Red Flags Everywhere Tabletop event, please reach out to Deborah Argueta.


Director Oversight of AI Requires Strong Processes and Good-Faith Engagement

Risk #1: AI oversight. Directors have a fiduciary duty to make a good-faith effort to implement oversight systems and monitor those systems, including systems related to AI. This requires understanding how AI is being used within the organization and by competitors.

As AI moves from experimentation to embedded, business-critical deployment, now is a good time to revisit the lessons gleaned from Delaware oversight-liability cases and to consider how those lessons apply to board oversight of AI-related compliance and operational risk.

In a series of decisions beginning with the landmark Caremark[1] decision, and continuing through Boeing[2] and TransUnion,[3] Delaware courts have clarified what is expected of directors in overseeing compliance and operational risks. Those same expectations apply to AI risk. Bottom line: directors can mitigate liability risk by making a good-faith effort to establish reasonable oversight systems around AI, which requires understanding how AI is being used within the organization and by competitors.

Caremark established that oversight liability can arise in two circumstances: (1) when a board fails to implement a reasonable reporting and information system, or (2) when, having implemented such a system, the board consciously ignores red flags that the system surfaces. The standard is demanding, but it does not require boards to prevent all wrongdoing. Instead, it requires reasonable, business-specific processes designed to bring material compliance risks to the board’s attention.

Subsequent decisions have sharpened what that looks like in practice. Boeing emphasized that generalized risk oversight and ad hoc management updates may be insufficient where the risk is “mission critical.” The allegations in Boeing centered on the absence of a formal system for monitoring safety and a lack of regular escalation of safety issues to the board. Finally, a recent Delaware decision involving TransUnion underscores that adopting an oversight framework is only the first step; a board that creates reporting mechanisms but does not meaningfully monitor them, follow up, or address identified risks may still face exposure if the failure reflects bad faith. While they retain discretion in designing oversight systems, directors cannot remain passive once material risks have surfaced.

Applying this framework to AI, here are four practical steps every board should consider:

1. Treat AI as an enterprise risk issue—and start with an AI “inventory” and risk-tiering.

AI governance should be addressed through the company’s compliance and enterprise risk framework, not viewed solely as a technology initiative. Consider asking management to provide (and periodically update) an enterprise-wide view of AI use, including:

  • Where AI is used: Products/services, customer support, marketing, HR, finance, security, and other operations.
  • Third-party AI exposure: Vendor tools, embedded AI features, contractors, and “shadow AI” use by employees.
  • Data and outputs: What data goes in, where outputs go, and who relies on them.
  • Regulatory characterization: Whether any use case may qualify as “high-risk” or “regulated,” or otherwise be subject to heightened obligations under applicable laws.

Then tier use cases by risk, regulatory classification, and business criticality (e.g., regulated or high-risk decisions, safety-critical use, customer-facing outputs, financial reporting impact, cybersecurity relevance). AI uses that are “high-risk” and “mission critical” should receive the most structured board-level attention.

2. Assign ownership and formalize governance; make escalation paths explicit.

Oversight works best when roles are clear. To increase clarity:

  • Designate an accountable executive owner for AI risk (often a combination of Legal/Compliance, Security, Privacy, and the relevant business leader).
  • Create a cross-functional AI governance group (Legal, Compliance, Privacy, Security, HR, Product, Procurement) with written responsibilities.
  • Decide where AI oversight lives at the board level (full board and/or a committee) and consider updating committee charters to reflect that responsibility.
  • Define escalation triggers (e.g., significant incidents, regulator inquiries, material customer harm, major model failures, high-risk deployments) and who must notify the board—and when.

3. Build a repeatable reporting and monitoring cadence—and test that it works.

Move beyond one-off “AI updates” by requiring regular, decision-useful reporting (often quarterly), such as:

  • Deployments and planned rollouts (especially high-risk use cases)
  • Compliance posture (policy adherence, training completion, regulatory developments affecting the business)
  • Security and data governance (access controls, data leakage events, vendor incidents, abuse patterns)
  • Performance and monitoring (model drift, error rates, bias testing where relevant, customer complaints)
  • Incidents and remediation tracking (open issues, owners, deadlines, overdue items)

Boards should also ensure controls exist in practice, not just on paper, through internal audit plans, independent assessments, or compliance testing for high-risk AI use cases.

4. Invest in board/management education, and pair innovation with guardrails.

Practical oversight requires shared baseline fluency:

  • Provide targeted training tied to the company’s actual AI footprint (not generic AI 101), including key legal/regulatory themes, IP/data issues, and incident scenarios.
  • Run tabletop exercises (e.g., data leakage, harmful hallucinations, vendor failure, model abuse) to test escalation and response paths.
  • Encourage responsible adoption by requiring guardrails for experimentation (approved tools, sandboxing, criteria for moving pilots to production) and asking management to measure both ROI and risk metrics as deployments scale.

A final practical point: Because oversight liability often turns on what directors did when problems surfaced, boards should ensure meeting materials and minutes reflect, in summary fashion, that the board discussed these issues, asked questions, and directed action—especially when AI-related “red flags” arise.


[1] In re Caremark International Inc. Derivative Litigation, 698 A.2d 959 (Del. Ch. 1996).

[2] In re The Boeing Co. Derivative Litigation, C.A. No. 2019-0907-MTZ, 2021 WL 4059934 (Del. Ch. Sept. 7, 2021).

[3] In re TransUnion Derivative Stockholder Litigation, 324 A.3d 869 (Del. Ch. 2024).

We are Morrison Foerster — a global firm of exceptional credentials. Our clients include some of the largest financial institutions, investment banks, and Fortune 100, technology, and life sciences companies. Our lawyers are committed to achieving innovative and business-minded results for our clients, while preserving the differences that make us stronger.

Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations. Prior results do not guarantee a similar outcome.