(Still) All About Algorithms: Antitrust Lessons from the Last Year and What Lies Ahead in 2026
After several years of increased antitrust scrutiny involving pricing algorithms (and hundreds of presentations at antitrust conferences around the globe), the people who matter are finally starting to weigh in. 2025 brought with it significant decisions in litigation and investigations, and state legislatures took a swing at solving perceived problems (and creating new ones) involving this now common use of business informatics.
Below we identify recent developments that have helped shape the evolving legal landscape and highlight a few lessons learned. While these developments have begun to clarify when pricing algorithms may trigger antitrust concerns, we are still in the early stages, and unanswered questions remain. But one thing is clear: plaintiffs, regulators, and legislators have signaled their continued focus on the perceived use of revenue management and other software to restrict competition, and we expect 2026 to bring even more developments.
Courts Continued to Grapple with the Limits of Algorithmic Conduct.
One threshold question is whether a plaintiff can adequately plead the existence of a conspiracy merely by alleging that each user of a challenged algorithm was aware that others were using it, too.
In August 2025, the Ninth Circuit became the first federal appellate court to address an antitrust challenge to pricing algorithms. In Gibson v. Cendyn Group, the court affirmed dismissal of a putative class action alleging that Las Vegas casino hotels used Cendyn Group’s revenue management software to artificially inflate room prices. Absent other evidence of an actual agreement among the hotels, the court found, the hotels’ independent decisions to use Cendyn’s software were not anticompetitive, even if the hotels were aware that their competitors had come to the same (independent) decision to use the same product. The court also noted that Cendyn’s licensing agreement did not require acceptance of its software’s pricing recommendations. And plaintiffs did not allege that Cendyn shared or used competitors’ confidential information for its pricing recommendations.
Federal district courts also addressed the line between plausible conspiracy and independent action. In December 2025, the Northern District of Illinois dismissed plaintiffs’ claims alleging that Datacomp Appraisal Systems, a provider of valuation and pricing tools, facilitated a conspiracy among mobile home companies to fix lot rental prices. While plaintiffs alleged that Datacomp collects and distributes detailed, non-anonymized, disaggregated, current and future competitive pricing and pricing-related information, they did not allege that the tool provides pricing recommendations. This proved fatal—the court found that giving confidential data to Datacomp alone was insufficient to establish an agreement.[1]
In contrast to these dismissals, in June 2025, the Northern District of Illinois denied defendants’ motion to dismiss a hub-and-spoke conspiracy claim based on payors’ common use of a MultiPlan, Inc. platform that benchmarks payment rates for out-of-network medical providers. Unlike in Gibson, the court inferred an agreement among the defendants because it found that using MultiPlan would have been against the payors’ self-interest absent an agreement, given the risk of subscriber loss. Plaintiffs also alleged that MultiPlan provided rate recommendations and that payors exchanged competitively sensitive information through the software.
A few themes have begun to emerge from these cases. Pricing algorithms pose more risk when: (1) the algorithm generates pricing recommendations; (2) companies follow those recommendations; (3) the tool facilitates the exchange of competitively sensitive information; and/or (4) there is other evidence of an agreement among users. That is not to say these factors are dispositive: the jurisprudence is not settled, and we expect more courts to weigh in this year. But courts appear skeptical of pricing algorithms that share too much information and/or control decision-making.
DOJ Settled Challenges to Multifamily Pricing Algorithms.
Last year, DOJ reached settlements with four defendants in its lawsuit challenging the use of RealPage’s revenue management software by residential property managers. In November 2025, DOJ proposed a consent decree with RealPage. If approved, the settlement would prohibit RealPage from using competitors’ nonpublic data to determine rental prices and restrict RealPage’s use of recent (less than 12 months old) lease data to train the model. It would also require RealPage to modify certain product features that encourage adoption of the algorithm’s price recommendations or favor price increases over decreases—an unusual remedy that may foreshadow DOJ’s willingness to impose software changes in future cases.
In December 2025, LivCor, one of six landlords in the same enforcement action against RealPage, agreed to stop using algorithms that use competitors’ nonpublic data to generate rent recommendations for LivCor or that use LivCor’s nonpublic data to generate recommendations for others. Under the consent decree, nonpublic data is defined broadly to include past, present, or prospective information that could be used to determine inventory, rents, or other rental terms. DOJ proposed similar consent decrees with the largest U.S. landlord, Greystar, in August 2025, and another landlord, Cortland, in January 2025.
Again, the settlements suggest some lessons: in particular, DOJ’s view that using pricing algorithms to exchange competitively sensitive data, even historical data, increases the risk of anticompetitive effects. DOJ’s concerns appear to be driven by the use of such data to make pricing recommendations, especially recommendations that are binding and/or that generally push prices upward.
State Legislators Banned Algorithms.
Perhaps recognizing the limitations of litigation and regulation, state legislatures actively pursued alternative ways to limit or even prohibit the use of pricing algorithms, often reaching conduct beyond what federal courts interpreting the Sherman Act have recognized as unlawful.
California: Effective January 1, 2026, the Cartwright Act now expressly outlaws the use or distribution of a common pricing algorithm, defined broadly as “any methodology, including a computer, software, or other technology, used by two or more persons, that uses competitor data to recommend, align, stabilize, set, or otherwise influence a price or commercial term.” The law does not distinguish between public and nonpublic data and applies to all industries operating in California. The amendment also lowers the pleading standard for Cartwright Act claims. Plaintiffs no longer need to allege facts that exclude the possibility of independent action; instead, they need only present plausible allegations of a conspiracy, a more forgiving standard than the one that applies to Sherman Act conspiracy claims.
New York: The Donnelly Act, as amended effective December 15, 2025, now prohibits landlords from setting rents or lease terms based on algorithms that analyze current or historical information from multiple landlords. Like California’s law, the legislation does not distinguish between public and nonpublic data. In November 2025, RealPage challenged the new law as a violation of the First Amendment and the Due Process Clause.
Connecticut: Effective January 1, 2026, the Connecticut Antitrust Act prohibits any person from using “a revenue management device to set rental rates or occupancy levels for residential dwelling units.” The prohibition extends to “anonymized” information provided by a third party but, unlike the California and New York laws, carves out publicly available competitor data.
What Lies Ahead?
As algorithms evolve and become more widespread, courts have attempted to apply existing legal frameworks to challenges involving this new technology. To date, states have stepped in to address perceived gaps in curbing competitive concerns, while Congress has not succeeded in enacting limitations on the use of algorithms.
Against this moving backdrop, some principles persist:
- Adoption of a common software tool alone does not violate the antitrust laws, even if users are aware that their competitors use the same software.
- Exchanging or using competitively sensitive information through an algorithm to generate recommendations increases antitrust risks, even if the data is historical or aggregated.
- Risk is reduced when pricing recommendations do not need to be followed and users retain the ability to make independent decisions.
Looking ahead, courts will soon confront new questions as cases move beyond the pleading stage and into discovery. Additional courts—and potentially circuits—may also weigh in on whether and when the use of shared pricing tools crosses the line. The Third Circuit heard oral argument in September 2025 on the district court’s dismissal of a case involving factual allegations similar to those in Gibson, and an opinion is expected later this year. The Northern District of California may also rule soon on a pending motion to dismiss in another hotel room-rate case. These upcoming decisions will be important guideposts for companies evaluating the antitrust risks of algorithmic pricing.
[1] Earlier in 2025, the same court granted a motion to dismiss in a lawsuit alleging the use of Amadeus IT Group’s Demand360 algorithm to increase luxury hotel rates, finding that the alleged sharing of occupancy data was too attenuated to allege a conspiracy.
Michael B. Miller, Partner
Rob Manoso, Partner
Margaret A. Webb, Of Counsel
Sarah Jane T. Catarozoli, Associate