Each week for the next 10 weeks, we will publish an installment of our Red Flags Everywhere! series, highlighting key risk areas that public companies and their boards of directors should keep top of mind.
This series will serve as a lead-up to MoFo’s upcoming Red Flags Everywhere Tabletop program, taking place in our Palo Alto office on May 7. Members of our Securities Litigation, Employment and Labor, and Capital Markets Groups will guide attendees through a ripped-from-the-headlines fact pattern designed to spark interactive discussion and practical analysis that will be valuable to every board advisor.
This week, we focus on AI oversight and the board’s responsibility to monitor the company’s rapidly developing AI-driven risks and strategic opportunities, while ensuring adherence to applicable legal and ethical frameworks.
If you are interested in learning more about MoFo’s Red Flags Everywhere Tabletop event, please reach out to Deborah Argueta.
Risk #1: AI oversight. Directors have a fiduciary duty to make a good-faith effort to implement oversight systems and monitor those systems, including systems related to AI. This requires understanding how AI is being used within the organization and by competitors.
As AI moves from experimentation to embedded, business-critical deployment, now is a good time to revisit the lessons gleaned from Delaware oversight-liability cases, and how those lessons apply to board oversight of AI-related compliance and operational risk.
In a line of decisions beginning with the landmark Caremark[1] opinion and continuing through Boeing[2] and TransUnion,[3] Delaware courts have clarified what is expected of directors overseeing compliance and operational risks. Those same expectations apply to AI risk. Bottom line: directors can mitigate liability risk by making a good-faith effort to establish reasonable oversight systems around AI, which requires understanding how AI is being used within the organization and by competitors.
Caremark established that oversight liability can arise in two circumstances: (1) when a board fails to implement a reasonable reporting and information system, or (2) when, having implemented such a system, the board consciously ignores red flags that the system surfaces. The standard is demanding, but it does not require boards to prevent all wrongdoing. Instead, it requires reasonable, business-specific processes designed to bring material compliance risks to the board’s attention.

Subsequent decisions have sharpened what that looks like in practice. Boeing emphasized that generalized risk oversight and ad hoc management updates may be insufficient where the risk is “mission critical.” The allegations in Boeing centered on the absence of a formal system for monitoring safety and a lack of regular escalation of safety issues to the board.

Finally, the recent Delaware decision involving TransUnion underscores that adopting an oversight framework is only the first step; a board that creates reporting mechanisms but does not meaningfully monitor them, follow up, or address identified risks may still face exposure if the failure reflects bad faith. While directors retain discretion in designing oversight systems, they cannot remain passive once material risks have surfaced.
Applying this framework to AI, boards should consider four practical steps:
1. Treat AI as an enterprise risk issue—and start with an AI “inventory” and risk-tiering.
AI governance should be addressed through the company’s compliance and enterprise risk framework, not viewed solely as a technology initiative. Consider asking management to provide (and periodically update) an enterprise-wide inventory of AI use, and to tier use cases by risk, regulatory classification, and business criticality (e.g., regulated or high-risk decisions, safety-critical use, customer-facing outputs, financial reporting impact, cybersecurity relevance). AI uses that are “high-risk” and “mission critical” should receive the most structured board-level attention.
2. Assign ownership and formalize governance; make escalation paths explicit.
Oversight works best when roles are clear: ownership of AI governance should be formally assigned within management, and escalation paths to the board should be explicit.
3. Build a repeatable reporting and monitoring cadence—and test that it works.
Move beyond one-off “AI updates” by requiring regular, decision-useful reporting to the board, often on a quarterly cadence.
Boards should also ensure controls exist in practice, not just on paper, through internal audit plans, independent assessments, or compliance testing for high-risk AI use cases.
4. Invest in board/management education, and pair innovation with guardrails.
Practical oversight requires a shared baseline of AI fluency across the board and senior management, so that directors can pair support for innovation with appropriate guardrails.
A final practical point: Because oversight liability often turns on what directors did when problems surfaced, boards should ensure meeting materials and minutes reflect, in summary fashion, that the board discussed these issues, asked questions, and directed action—especially when AI-related “red flags” arise.
[1] In re Caremark International Inc. Derivative Litigation, 698 A.2d 959 (Del. Ch. 1996).
[2] In re The Boeing Co. Derivative Litigation, C.A. No. 2019-0907-MTZ, 2021 WL 4059934 (Del. Ch. Sept. 7, 2021).
[3] In re TransUnion Derivative Stockholder Litigation, 324 A.3d 869 (Del. Ch. 2024).