The AI Class Action Path
Anat Lior
The emerging legal landscape of Artificial Intelligence (AI) is increasingly defined by the class action mechanism, which seems to have become the litigation vehicle of choice when AI systems cause harm. AI-related injuries often manifest as low-dollar but high-merit claims: situations where a vast number of individuals suffer small, similar injuries, such as privacy breaches or copyright infringements, that are too complex or costly to litigate individually. By harnessing the power of the many, class actions can bridge the resource gap between individual victims and powerful tech giants. This approach, however, does not come without a cost, bringing with it both the new and the familiar challenges that aggregate litigation characteristically presents.
Class actions have long been used to address harms from emerging technologies. Historical precedents, including the asbestos litigation of the 1970s and the Volkswagen "clean diesel" scandal of 2015, show how aggregate litigation can illuminate technological issues and secure remedies when individuals lack the incentive to sue on their own. The same logic applies to AI. The technology is still in its infancy, a stage at which users often lack the expertise to identify the harms they suffer, making aggregate litigation an appealing means of securing transparency and deterrence in the AI age.
The Mechanics of Certification under Rule 23
To proceed as a class action in federal court, plaintiffs must satisfy the four requirements listed in Rule 23(a) of the Federal Rules of Civil Procedure. First, numerosity. Given that millions of Americans now use generative AI platforms, showing that the class is so numerous that joinder is impracticable is relatively straightforward. Second, commonality. Post-Wal-Mart Stores, Inc. v. Dukes, plaintiffs must show a common contention capable of classwide resolution "in one stroke." In the AI context, this is feasible because all class members interact with the same uniform code or foundation model, making systemic wrongful acts (like discrimination) easier to prove than in traditional corporate settings. Third and fourth, typicality and adequacy. AI cases usually stem from the same course of conduct by AI developers and deployers, ensuring that the representative's claims are typical of the group's. However, courts must meticulously screen these cases to ensure they are driven by the class's claims rather than by lawyers seeking large fees in a new, uncertain sector.
Most AI class actions are filed under Rule 23(b)(3), seeking damages. In this category, plaintiffs must also show predominance and superiority: common questions must predominate over individual ones, and a class action must be superior to other available methods of adjudication. While individualized damages can be a hurdle, many AI harms, like algorithmic price-fixing or benefit denials, affect members in nearly identical ways, making class litigation more efficient than individual suits.
Thematic Classification: Suitability of AI Claims
AI litigation can be roughly divided into areas where class actions are highly suitable and areas where they face significant challenges. Below is a brief breakdown of these two categories, highlighting the features of each subject matter that make a given claim more or less amenable to class treatment.
Suitable for Class Action
First, copyright is the most active sector, exemplified by cases like Andersen v. Stability AI and Authors Guild v. OpenAI. The massive $1.5 billion settlement by Anthropic highlights the scale of these disputes and the potential for class actions to nudge the industry toward licensing regimes. Second is the health care sphere, where class actions have targeted insurers like Humana and UnitedHealth for using AI algorithms to wrongfully deny claims or terminate care without physician review. Third, antitrust, as exemplified by the RealPage litigation involving algorithmic price-fixing in the rental market; that case demonstrated how AI can facilitate "hub-and-spoke" conspiracies that harm millions of consumers. Fourth, constitutional rights, where government use of AI for fraud detection (e.g., MiDAS) or Medicaid payment reductions has led to successful due process class actions. Lastly, discrimination, as AI-driven hiring tools, such as those used by Workday, have faced collective actions for allegedly discriminating based on age, race, or disability.
All of these cases present relatively straightforward instances where the four Rule 23(a) requirements can be established, given the way AI is used and the number of users exposed to it, mostly involuntarily and with shared detrimental effects.
Challenging for Class Action
First, privacy and surveillance. These claims often struggle with Article III standing (proving a concrete injury) and with the predominance requirement, as individual issues of consent frequently outweigh common questions. Second, and closely related, is BIPA litigation. While Illinois' Biometric Information Privacy Act offers statutory damages, AI data-scraping cases are often hindered by the black-box nature of how biometric data is stored and possessed, and they are limited by the Act's language and geographic scope. Third, personal injury and mass torts. Traditional negligence claims (like those arising from autonomous vehicle accidents) are typically too individualized for class certification: accidents are highly context-dependent and thus difficult to aggregate, even when they stem from the same catastrophic AI-related accident.
Normative Justifications: Accountability and the Black Box
There are several normative reasons for leaning into the AI class action vehicle. To name only two: first, the highly concentrated industry structure, in which a handful of companies (e.g., Microsoft, OpenAI, Anthropic, and Google) control the foundation models, means that moving liability upstream can effectively mitigate widespread harm. Second, class actions facilitate the cycle of "naming, blaming, and claiming." In a field where users are often unaware they have been harmed, aggregate litigation allows a single representative to hold a company accountable on behalf of everyone, incentivizing safer AI development.
The Limits: AI Class Actions Are Not a Panacea
Despite their promise, there are significant drawbacks to over-reliance on class actions in the AI context. The threat of crushing liability and multi-billion-dollar settlements might stifle innovation, particularly for small and medium-sized companies that lack the resources to litigate or settle.
Furthermore, the prevalence of settlements (the Anthropic settlement is a case in point) may severely delay the development of AI common law doctrines. If cases never reach a final judicial decision on the merits, the legal standards for AI liability remain uncertain, harming both plaintiffs and defendants and producing inefficiencies in the judicial system. There is also the familiar risk of attorneys who prioritize large fee awards over meaningful relief for class members, leading some plaintiffs to opt out in hopes of more lucrative individual suits. Given the opacity surrounding AI, these concerns will only grow as more class actions are filed and, often, settled.
Conclusion
Ultimately, while class actions are imperfect and carry risks of abuse, they remain a vital tool for transparency and accountability in the absence of robust AI regulation. Courts should carefully weigh factors like statutory damages to ensure that class actions promote safer AI software rather than merely extract monetary settlements. Over time, the nature of AI should become clearer, moving litigants away from the class action structure and toward individual litigation. At the current stage of the AI age, however, suitable class actions will be a primary mechanism for victims; they should not be the only one.
Anat Lior is an Assistant Professor at Drexel University’s Thomas R. Kline School of Law. This post is based on a longer paper, Fighting AI Harms Together: What Class Actions Can (and Can’t) Do.
