Untangling AI Openness
Parth Nobel, Alan Z. Rozenshtein, & Chinmayi Sharma
Is AI “open” or “closed”? That question, which dominates policy debates from Capitol Hill to Brussels, is the wrong one. In a recent article in the Wisconsin Law Review, we argue that the open-versus-closed binary, inherited from the world of open source software, is dangerously misleading when applied to artificial intelligence. AI systems are not monolithic programs that can simply be stamped “open” or “closed.” They are composite technologies built on a stack of discrete components, each controlled by different actors with competing interests. Getting AI governance right requires embracing what we call “differential openness,” a framework that untangles AI into its constituent parts and asks, for each one: what is open, how open is it, and to what end?
The Problem with the Binary
The open source software (OSS) movement offered a clear governance model: make the source code available, and a global community of developers can inspect it, improve it, and build on it. Licenses ranging from permissive (MIT, Apache) to copyleft (GPL) created a legal infrastructure that sustained decades of collaborative innovation. Today, open source projects like Linux, Apache, and Python run much of the modern world. The temptation to import the simple OSS governance model is understandable but misguided.
AI is not software in any simple sense. Traditional software’s value is unlocked almost entirely through access to source code. AI systems, by contrast, depend on multiple interdependent components (computational hardware, training data, source code, model weights, system prompts, operational records and controls, the application layer, and human labor), and source code is just one piece, often not the most important one. Many treat model weights as the AI equivalent of source code, but the analogy does not hold. Releasing model weights alone, as many purportedly “open” AI systems do, does not unlock the full range of benefits that open source code unlocked for software. Without access to expensive hardware and energy, for example, smaller labs and startups are boxed out of the marketplace. AI systems, and their openness, can be reduced neither to a single component nor understood in isolation; differential openness requires considering the entire AI stack, including the interactions between its components.
The actors driving AI openness are also fundamentally different from those who built the open source software movement. The OSS movement grew out of a decentralized community of academics, hobbyists, and developers motivated in significant part by an ethical commitment to software freedom. The AI ecosystem, by contrast, is dominated by a concentrated set of powerful corporations. Meta’s release of Llama was not altruism; it was a calculated strategy (albeit a failed one) to commoditize the model layer. Understanding these incentives is essential for diagnosing “open-washing”—claiming the reputational benefits of openness while withholding the components that matter most.
When Meta releases Llama’s model weights and calls it “open source,” it withholds key components like the proprietary datasets used to train them and imposes a restrictive license. When the EU AI Act grants regulatory forbearance to models that publish their weights, architecture, and data-usage information, it misses the often more critical opportunity to demand transparency in the datasets themselves. This narrow focus on AI system openness overlooks the complexity of the stack and privileges systems that are not nearly as open as they seem. It also distorts the policy debate itself. By framing openness as an all-or-nothing attribute, regulation begins to look like an all-or-nothing choice too, making outright prohibition of some forms of open spectrum AI a live option in a way it never really was for open source software. Effective governance requires recognizing AI’s differential openness so that regulation is crafted to target the right components of the AI stack. The question isn’t whether AI should be open or closed; it is how open each component of a system should be.
Untangling the AI Stack
Our article’s core contribution is a taxonomy that disaggregates AI into eight components, each with its own spectrum of openness. Rather than call these varied configurations “open source AI,” we propose the more precise term “open spectrum AI.” That shift in language matters because the issue is not simply that AI has many parts. It is that these parts interact. Opening one component can create new possibilities elsewhere in the stack, while keeping one critical layer closed can neutralize the practical value of openness in another.
At the infrastructure level, compute, the specialized hardware powering AI, is concentrated among a few firms (Nvidia, TSMC, ASML) and remains largely closed through high costs and proprietary bottlenecks. The hardware also cannot function without enormous amounts of energy controlled by a few utility commissions. Compute can nonetheless be made more open: by open-sourcing design blueprints, providing cloud credits, requiring the largest AI players to cover the rising energy prices their demand drives, or even going so far as to build public infrastructure.
Data—the fuel for the AI engine—is among the most contested components. Some truly open datasets, such as Common Crawl, exist; however, privacy concerns, copyright litigation, and competitive pressures mean that even nominally “open” models often withhold their training datasets.
The technical artifacts that define a model’s capabilities tell a similar story. Source code is often proprietary, while model weights, now the focal point of the openness debate, are sometimes released but frequently under restrictive terms. Even when developers release weights openly, as with Llama and DeepSeek, that openness is of limited value without the data, code, and compute. Other layers remain largely closed. System prompts shape model behavior in ways users rarely see. The application layer is also usually proprietary. Each of these layers of the stack can be made partially or entirely open; the entities that control them, however, often lack the incentive to do so.
Finally, there are the governance and accountability layers. Operational records and controls, including logs, safety benchmarks, and bias-detection tools, are critical for oversight. And the human layer, from data labelers to researchers to engineers, determines who has the expertise and institutional freedom to build, audit, and improve AI systems. Ultimately, all eight components are interdependent: the value of opening any one layer depends in part on the evolving openness of the others.
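To make the taxonomy concrete, here is a minimal sketch in Python of what a differential-openness profile might look like. The four-point scale, the component identifiers, and the example ratings for a hypothetical weights-only release are illustrative assumptions on our part, not a scoring system the article proposes.

    from dataclasses import dataclass
    from enum import IntEnum

    class Openness(IntEnum):
        # Hypothetical four-point scale; the article describes a spectrum, not fixed tiers.
        CLOSED = 0
        GATED = 1    # released under vetting or a restrictive license
        PARTIAL = 2  # released with key pieces withheld
        OPEN = 3     # freely available and reusable

    # The eight components of the AI stack identified in the article.
    COMPONENTS = (
        "compute", "data", "source_code", "model_weights",
        "system_prompts", "operational_records", "application_layer", "human_labor",
    )

    @dataclass
    class OpennessProfile:
        """A per-component profile instead of a single open/closed flag."""
        levels: dict[str, Openness]

        def summary(self) -> str:
            return ", ".join(f"{name}={self.levels[name].name}" for name in COMPONENTS)

    # An illustrative profile for a weights-only release: weights gated by a
    # restrictive license, nearly everything else closed.
    weights_only = OpennessProfile(levels={name: Openness.CLOSED for name in COMPONENTS})
    weights_only.levels["model_weights"] = Openness.GATED
    print(weights_only.summary())

Framed this way, “is the system open?” has no single answer; the policy questions attach to individual entries in the profile, and to how openness in one entry affects the value of openness in the others.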
The Trade-Offs
We evaluate differential openness against four policy objectives: safety, innovation, democratic control, and national security, each of which involves trade-offs.
Take safety. Transparent data, weights, and operational records enable independent auditing and red-teaming. But once these components are released, they cannot be recalled, and malicious actors can repurpose them to generate harmful content or automate sophisticated scams.
The innovation story is similarly mixed. Open-weight models like Llama and DeepSeek have let researchers and startups build domain-specific applications without billions in training costs. But if compute, proprietary data, and expert talent remain locked down, opening weights alone creates only an illusion of competition.
Openness cuts both ways for democratic control and competition. For democratic control, openness enables civil society to audit AI systems for bias and hold powerful institutions accountable, but oversight is harder in a fragmented ecosystem. Competition is much the same: openness can foster innovation and new entrants to the market, but it can also be co-opted by dominant firms to entrench their market positions.
And then there is national security. America’s open spectrum AI research ecosystem has been a source of competitive advantage, but the rapid rise of powerful open spectrum models from Chinese labs like DeepSeek, built in part on openly available Western research, presents a challenge to the balance between security and knowledge sharing.
These trade-offs are unavoidable, and they arise both between policy goals and within a single one. Even when policymakers strike what looks like the right balance for a single component, that choice can reshape the rest of the stack in nonlinear ways. Opening one layer can create cascading openness elsewhere; closure at a strategic bottleneck can create cascading closedness.
A Playbook for Policymakers
The final part of our article turns to a research agenda for more precise intervention. We examine five legal and regulatory levers (liability, competition policy, intellectual property, trade controls, and government support) and show how each can be calibrated to target specific components of the AI stack rather than applying system-level mandates. The point is not to offer definitive solutions but to show where current law fails to account for the nuance of open spectrum AI.
Current liability rules, for example, often create perverse incentives: developers who release dangerous model weights without safeguards can shelter behind warranty disclaimers and open source licenses, while developers who maintain transparent safety records create a paper trail that plaintiffs can exploit in litigation.
Similarly, competition policy must look beyond model weights. True decentralization requires addressing concentration across the entire stack—compute, data, deployment infrastructure, and labor. Without that, releases of open spectrum AI models by dominant firms can function as competitive weapons rather than public goods.
Intellectual property law creates its own asymmetry: transparent projects that document their data sources invite copyright litigation, while closed developers shield their data sources as trade secrets, which can hide copyright, privacy, and other violations.
Trade policy raises its own questions: the U.S. currently exempts open-weight models from export restrictions, signaling a recognition of the value of AI openness while raising the risk of diffusion to adversaries. At the same time, government support, from subsidized compute access and public dataset initiatives to immigration and labor policy that broadens access to talent, could counter the resource disparities that make true openness a privilege of the well-funded, but it risks flowing back to incumbents if poorly designed.
Conclusion
AI openness is not an inherent good or evil. It is an instrumental value whose worth depends entirely on which components are opened, to what degree, and in service of what goals. Policymakers who want to get this right need to start by untangling the AI stack and then do the hard work of calibrating each component’s openness to the goals they actually care about. That means resisting the false simplicity of the open-closed binary and building tools that can respond to a system whose components interact and whose trade-offs cascade across layers.
Parth Nobel holds a Ph.D. in Electrical Engineering from Stanford University. Alan Z. Rozenshtein is an Associate Professor at the University of Minnesota Law School. Chinmayi Sharma is an Associate Professor at Fordham Law School. This post is based on their recent paper, Untangling AI Openness.
