Taking Standards Seriously: The Case for a Private Standards-Based Approach to AI Governance
Authored by Alexander R. Mueller and Christopher S. Yoo
As artificial intelligence systems grow increasingly capable and widespread, the question of how to govern them effectively has become one of the most vexing contemporary policy challenges. AI promises transformative benefits across healthcare, scientific discovery, and many other domains, yet it also introduces a host of new individual and societal risks. In this high-stakes environment, policymakers face difficult institutional choices about how to structure AI governance so as to enable the benefits of continued AI innovation and deployment while also mitigating the risks of harm.
We think the answer lies in a governance regime that relies heavily on voluntary consensus standards developed through open, multi-stakeholder processes. We call this approach private standards-based governance, and it is far from a radical idea. Standards—agreed-upon rules about how a technology should work, perform, and be built—have been the dominant way we have governed digital technologies for decades, from the Internet to mobile networks to cybersecurity. They can hard-wire constraints into a system’s design, shaping how it behaves and how people can use it in practice. They can also target the organizational side of technology, influencing how companies build, deploy, and supervise the systems on which they rely. Taken together, technical and management standards can shape how AI systems are developed, tested, rolled out, and managed over time, nudging them in more socially desirable directions.
In practice, AI technical standards might include procedures and benchmarks for evaluating model robustness, security, and bias; standardized formats for disclosing key information about a model’s architecture and limitations; and protocols for reporting discovered flaws back to developers. These will often need to be domain-specific, since automated driving systems demand very different testing protocols than AI-based medical diagnostics. On the organizational side, management standards can address the human dimensions of AI development and oversight, covering things like internal accountability structures, data quality management, impact assessment methodologies, and incident response protocols.
To be clear, we are not claiming that standards represent the perfect solution to the AI governance question. A perfect solution would somehow be democratically accountable, technically expert, highly effective, and capable of keeping pace with a rapidly evolving technology. That is simply not realistic. But when standards are compared to the extant alternatives, particularly traditional government regulation, their advantages become difficult to ignore. We identify four key dimensions across which private standards outperform traditional regulation: (1) governance architecture, (2) technical expertise and inclusive participation, (3) adaptability to rapid change, and (4) global scalability.
The problem with top-down regulation
Traditional regulation works by issuing commands from the top down, leaving firms and other regulated entities with little discretion over implementation. The problem is that regulators often lack the specialized knowledge to craft effective rules for complex AI systems. Government has a less-than-stellar track record in setting technical standards for high-technology domains, and when regulators get it wrong, everyone is stuck with the result. The binding nature of regulation compels adoption regardless of any shortcomings.
Standards flip this dynamic around, emerging from the bottom up. Because they are voluntary and market-driven, bad standards are far less likely to get adopted. Multiple approaches can develop in parallel, and the ecosystem learns from experimentation before settling on what works. This is especially important for AI, where a one-size-fits-all approach almost certainly will not work across the enormous variety of systems and use cases. Instead of a regulator attempting to determine in advance where sector-specific approaches are needed, this differentiation can be dictated by the needs and challenges experienced by those operating within each sector.
Drawing on real expertise
The deep technical knowledge needed to govern AI effectively lives primarily in academia and the private sector: in the computer scientists, engineers, researchers, and practitioners who build and deploy these systems daily. Standards development processes are designed to tap into that expertise. They bring together the people who actually understand how the technology works.
Critics worry that this gives industry too much power. That is a fair concern, but resource disparities shape outcomes in any governance system. Large companies dominate notice-and-comment rulemaking and lobby state legislatures just as effectively as they participate in standards bodies. The difference is that well-designed standards processes have mechanisms to manage these dynamics. Transparent deliberation, consensus requirements, and structured participation from diverse stakeholders can all help. When these mechanisms work, they produce technically sound rules that are actually implementable.
Keeping pace with change
One of the biggest problems with traditional regulation is speed, or more accurately, its lack thereof. Rulemaking can take years. Impact analyses, interagency reviews, and notice-and-comment periods all add friction. Compounding the problem, regulators typically learn of new developments only well after they have happened, leaving them perpetually behind the curve.
We have already seen this play out with AI. European regulators spent over a year drafting the EU AI Act around the assumption that AI systems would be built for specific use cases. Then ChatGPT arrived and became one of the fastest-adopted technologies in history. Suddenly the whole framework needed overhauling to account for general purpose AI (GPAI).
Consider also the use of compute thresholds—the idea that regulatory obligations should kick in when a model is trained using a certain amount of computational power—an approach embedded in both the EU AI Act and California’s vetoed SB 1047. After the EU based its regulation of GPAI around one of these thresholds, the Chinese frontier AI lab DeepSeek released R1, an open-source model that matched and even outperformed many cutting-edge Western models on major performance benchmarks despite (purportedly) using considerably less training compute. This challenged the assumption that achieving frontier-level capabilities requires massive computational resources and led some to argue that, had SB 1047 become law, its thresholds would have become obsolete before even taking effect.
Private standards processes can move faster. They are not bound by the same procedural requirements, and just as importantly, the participants in the standards development process are the same people driving the technology forward, making them far less likely to be blindsided by new developments.
Scaling across borders
The final key advantage of private standards relates to geography and scale. AI is a global technology, which necessarily means that governance that stops at national borders will always be incomplete. Somewhat counterintuitively, private standards can actually scale internationally in ways that treaties and multilateral agreements struggle to achieve. Multilateral talks around digital policy issues like privacy and data flows have stalled for years due to fundamental disagreements between major powers. There is little reason to think AI regulation would fare better.
Because standards development processes are generally structured as open arrangements and final publications are made widely accessible, they allow for broad participation and adoption regardless of geographic location. Globally coordinated standards let companies build one system aligned with a widely accepted framework rather than juggling conflicting requirements across jurisdictions. They facilitate cross-border commerce and help developing countries participate more meaningfully in the AI economy.
The path forward
None of this means standards are without challenges. They are voluntary, so adoption is not guaranteed. Industry capture is a real risk. Moving too fast can undermine legitimacy, while moving too slowly defeats the purpose. These tradeoffs are manageable, but they require intentional effort in designing standards bodies to be inclusive, transparent, and accountable.
Policymakers have a role here too, though not as top-down regulators. They can enforce commitments, police anticompetitive behavior in standards processes, and serve as a backstop, strategically wielding the threat of formal regulation to push private actors toward cooperation and rigor.
The question is not whether standards will govern AI. They already do, with work underway at organizations like NIST and ISO/IEC and a growing ecosystem of sector-specific initiatives. The question is whether policymakers and stakeholders will recognize their emergence as the opportunity it is and invest in building the institutional foundations necessary for them to serve the public good. Private standards-based governance is agile, expert-driven, and globally scalable in ways traditional regulation simply is not. But realizing that promise will take serious, sustained engagement from all involved.
Alexander R. Mueller is a Research Fellow at the Center for Technology, Innovation & Competition, University of Pennsylvania Carey Law School. Christopher S. Yoo is the Imasogie Professor in Law & Technology, Professor of Communication, Professor of Computer & Information Science, and Founding Director of the Center for Technology, Innovation & Competition at the University of Pennsylvania. This post is based on their forthcoming paper, Taking Standards Seriously: The Case for a Private Standards-Based Approach to AI Governance.
