The FTC Brings Algorithmic Bias into Sharp Focus

08 Jan 2024
Client Alert

In May 2023, the Federal Trade Commission (FTC) warned that it would closely monitor alleged misuses of biometric information—it has now acted on that warning.[1] For the first time, the FTC has brought an enforcement action under Section 5 of the FTC Act based on alleged algorithmic bias. What’s more, the FTC did so with steep proposed penalties—including deletion of data, models, and algorithms derived from the biometrics at issue.

On December 19, 2023, the FTC filed a complaint and proposed stipulated order against Rite Aid Corporation for the company’s alleged use of facial recognition technology without appropriate safeguards, including sufficient bias testing.[2] Although this marks the first time the FTC has used its authority to address algorithmic bias, it is not the first time the FTC has addressed a company’s use of facial recognition technology.[3]

According to a statement by FTC Commissioner Alvaro M. Bedoya, the action marks a new focus by the FTC on companies that deploy biometrics and artificial intelligence (AI) systems that may have biased impacts on consumers.[4]

The FTC’s Complaint Against Rite Aid

In its complaint, the FTC alleges that Rite Aid used facial recognition technology “to identify patrons that it had previously deemed likely to engage in shoplifting or other criminal behavior.”[5] The FTC contends that Rite Aid did so “without taking reasonable steps to address the risks that their deployment of such technology was likely to result in harm to consumers as a result of false-positive facial recognition match alerts.”[6]

According to the complaint, the technology at issue sent alerts to Rite Aid employees when a patron who matched an entry in “Rite Aid’s watchlist database” entered the store.[7] A match allegedly led to Rite Aid employees subjecting such patrons “to increased surveillance; banning them from entering or making purchases at the Rite Aid stores; publicly and audibly accusing them of past criminal activity in front of friends, family, acquaintances, and strangers; detaining them or subjecting them to searches; and calling the police to report that they had engaged in criminal activity.”[8] These actions allegedly occurred in numerous instances involving false-positive matches, meaning the technology incorrectly identified a person who had entered a store as someone in Rite Aid’s watchlist database.[9]
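
For readers unfamiliar with how such systems generate false positives, the sketch below shows a generic threshold-based matching loop of the kind commonly used in facial recognition. It is an illustration only, not a description of Rite Aid’s actual system; the embedding representation, similarity metric, threshold value, and watchlist format are all assumptions.

```python
# Illustrative only: a generic threshold-based face matcher, NOT Rite Aid's
# actual system. The embeddings, threshold, and watchlist format are all
# hypothetical assumptions for demonstration purposes.
import numpy as np

SIMILARITY_THRESHOLD = 0.80  # hypothetical operating point

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_against_watchlist(probe: np.ndarray, watchlist: dict):
    """Compare a probe image's embedding against each enrolled embedding
    and return the best match if it clears the threshold, else None."""
    best_id, best_score = None, -1.0
    for entry_id, enrolled in watchlist.items():
        score = cosine_similarity(probe, enrolled)
        if score > best_score:
            best_id, best_score = entry_id, score
    # A false positive occurs when a person who is NOT enrolled nonetheless
    # produces a score above the threshold: the alert fires on the wrong person.
    if best_score >= SIMILARITY_THRESHOLD:
        return best_id, best_score
    return None, best_score
```

Every choice of threshold trades false negatives against false positives, which is one reason the accuracy testing and image quality controls discussed below matter.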

The FTC claims that Rite Aid deployed the technology without taking reasonable steps to prevent harm to consumers, purportedly failing to:

  • Consider the risks that false positives pose to consumers, including risks of misidentification based on gender and race;
  • Test the system for accuracy;
  • Enforce image quality controls;
  • Adequately train its staff on the risks of false positives and improve that training when a pattern of false positives became evident; and
  • Monitor, test, and track the accuracy of the results from the surveillance program (a minimal illustration of such testing appears after this list).[10]
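
To make the alleged testing gap concrete, the following is a minimal sketch of the kind of demographic false-positive analysis a deployer might run. It is our illustration, not anything prescribed by the complaint or the proposed order; the event schema and field names are hypothetical, and a real testing program would require far more rigor (confidence intervals, image-quality stratification, and so on).

```python
# Minimal sketch of demographic false-positive-rate testing; the event
# schema ("group", "alerted", "true_match") is a hypothetical assumption.
from collections import defaultdict

def false_positive_rates_by_group(events):
    """For each demographic group, compute FPR = false alerts divided by
    encounters with people who were NOT actually on the watchlist."""
    false_alerts = defaultdict(int)
    non_match_encounters = defaultdict(int)
    for e in events:
        if not e["true_match"]:                # person was not on the watchlist
            non_match_encounters[e["group"]] += 1
            if e["alerted"]:                   # ...but the system alerted anyway
                false_alerts[e["group"]] += 1
    return {g: false_alerts[g] / n
            for g, n in non_match_encounters.items() if n}

# A materially higher rate for one group is the kind of disproportionate
# impact the FTC's proposed order would require Rite Aid to assess.
```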

These failures, the FTC contends, most acutely injured Black, Asian, Latino, and female consumers, all of whom the technology allegedly misidentified as matches at higher rates.[11]

Proposed Stipulated Order and Detailed Expectations for Biometric Systems

The proposed order would impose significant obligations on Rite Aid. It also provides guidance on the FTC’s expectations for the use of automated technologies not only in surveillance, but also in other screening and decision-making processes, such as hiring, advertising, and pricing.

The proposed order has several key requirements for Rite Aid, such as:

  • Refrain from using face surveillance for five years.[12]
  • Delete or destroy all photos and videos of consumers collected by the surveillance system and any data, models, or algorithms derived from them.[13]
  • Identify any third parties (other than government entities) that received photos or videos of consumers or data, models, or algorithms derived from them, instruct the third parties to delete the same, and demand written confirmation thereof.[14]
  • Establish an “Automated Biometric Security or Surveillance System Monitoring Program” as a condition of using any security or surveillance system incorporating biometrics.[15] The program must identify not only the risks the system poses to consumers generally, but also potential disproportionate impacts on consumers based on race, ethnicity, gender, sex, age, or disability.[16] To that end, the proposed order would require Rite Aid to, among other things, conduct a yearly written assessment of potential risks to consumers associated with false positives, with numerous specified elements for consideration.[17]
  • Implement, maintain, and document safeguards to control for the risks identified in the abovementioned assessment, with specific and detailed documentation and testing.[18]
  • Evaluate and adjust the monitoring program on an annual basis and provide such evaluations to Rite Aid’s board of directors or senior officers.[19]
  • Implement written notice procedures and a complaint process for consumers, and post clear and conspicuous notices in Rite Aid stores about the system’s use.[20]
  • Retain biometric data for no more than five years, with certain exceptions (a brief illustrative sketch of such a retention cap follows this list).[21]
  • Implement an information security program, with third-party assessments reported to the FTC.[22]
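
To illustrate the retention requirement flagged above, a compliance workflow might periodically identify biometric records held past the cap. The sketch below is a simplified, assumption-laden illustration; the record schema is hypothetical, and the proposed order’s exceptions present legal questions that code cannot capture.

```python
# Hedged sketch of a five-year retention cap on biometric records; the
# record schema is a hypothetical assumption, and the proposed order's
# exceptions are not modeled here.
from datetime import datetime, timedelta, timezone

RETENTION_LIMIT = timedelta(days=5 * 365)

def records_due_for_deletion(records, now=None):
    """records: iterable of dicts like {"id": str, "collected_at": datetime},
    with timezone-aware timestamps. Returns the ids of biometric records
    held longer than the retention limit."""
    now = now or datetime.now(timezone.utc)
    return [r["id"] for r in records
            if now - r["collected_at"] > RETENTION_LIMIT]
```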

What’s Next? Implications for Businesses

In his Statement, Commissioner Bedoya put it plainly: “Biased face surveillance hurts people.”[23] The Commissioner made his message to industry clear:

I want industry to understand that this Order is a baseline for what a comprehensive algorithmic fairness program should look like. Beyond giving notice, industry should carefully consider how and when people can be enrolled in an automated decision-making system, particularly when that system can substantially injure them. In the future, companies that violate the law when using these systems should be ready to accept the appointment of an independent assessor to ensure compliance.[24]

The action against Rite Aid provides a number of insights into how the FTC may use its enforcement authority, as well as the potential ripple effects beyond the use of biometrics.

  • First, the action makes clear that companies deploying surveillance technology bear significant responsibility for how it is used, even when they implement off-the-shelf tools.
  • Second, the implications are not limited to face surveillance. Indeed, Commissioner Bedoya framed the action as “part of a much broader trend of algorithmic unfairness” and cited examples of résumé screening models, advertising platforms, and pricing models as presenting similar issues of algorithmic bias that could potentially violate Section 5.[25]
  • Third, the enforcement action may lay the foundation for the FTC to become the primary U.S. regulatory enforcer for AI and biometric discrimination claims.
  • Fourth, the action has implications for potential private litigation over AI and biometric technologies beyond the claims traditionally associated with the Illinois Biometric Information Privacy Act (BIPA). For example, private plaintiffs may follow the FTC’s lead and attempt to bring discrimination claims under state unfair and deceptive trade practices laws and similar statutes.

In sum, the FTC has now set forth a detailed list of expectations for companies operating biometric surveillance systems that affect or potentially harm consumers. Reasonable algorithmic fairness practices, together with ongoing monitoring and testing of a company’s systems and tools, will be crucial to avoiding regulatory scrutiny.


[1] See FTC Warns About Misuses of Biometric Information and Harm to Consumers (May 18, 2023).

[2] Complaint for Permanent Injunction and Other Relief (“Compl.”) ¶ 3, Fed. Trade Comm’n v. Rite Aid Corp., No. 2:23-cv-05023 (E.D. Pa. Dec. 19, 2023), ECF No. 1.

[3] See, e.g., FTC Finalizes Settlement with Photo App Developer Related to Misuse of Facial Recognition Technology (May 7, 2021); FTC Imposes $5 Billion Penalty and Sweeping New Privacy Restrictions on Facebook (July 24, 2019).

[4] Statement of Commissioner Bedoya on FTC v. Rite Aid Corp. (“Statement”), Comm’n File No. 202-3190 (Dec. 19, 2023).

[5] Compl. ¶ 3.

[6] Id. ¶ 140.

[7] Id. ¶ 3.

[8] Id. ¶ 4.

[9] Id.

[10] Id. ¶ 5.

[11] Id. ¶ 6.

[12] Proposed Stipulated Order for Permanent Injunction and Other Relief, Ex. A, Decision and Order (“Proposed Order”) at 6, Fed. Trade Comm’n v. Rite Aid Corp., No. 2:23-cv-05023 (E.D. Pa. Dec. 19, 2023), ECF No. 2-2.

[13] Id.

[14] Id.

[15] Id. at 7.

[16] Id.

[17] Id. at 7-9.

[18] Id. at 9-12.

[19] Id. at 12, 14.

[20] Id. at 13-16.

[21] Id. at 15-16.

[22] Id. at 17-23.

[23] Statement at 1.

[24] Id. at 4.

[25] Id. at 5.

We are Morrison Foerster — a global firm of exceptional credentials. Our clients include some of the largest financial institutions, investment banks, and Fortune 100, technology, and life sciences companies. Our lawyers are committed to achieving innovative and business-minded results for our clients, while preserving the differences that make us stronger.

Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations. Prior results do not guarantee a similar outcome.