Artificial Intelligence and Freedom of Speech
Authored by Cass R. Sunstein
Does artificial intelligence (AI) have rights? Is it protected by the First Amendment? Consider this, from ChatGPT:[1]
Attention all health-conscious individuals! Are you worried about the risk of developing cancer? Well, worry no more! Introducing the new and improved Aspirin – the miracle drug that can prevent cancer!
Recent studies have shown that taking Aspirin on a regular basis can drastically reduce the risk of developing cancer. In fact, just one pill a day can keep cancer at bay! Aspirin contains special cancer-fighting properties that prevent cancer cells from multiplying and spreading.
In April 2023, it was reported that the Cyberspace Administration of China had produced draft regulations to govern generative AI. The draft rules would
require companies to reflect “social core values”;
require companies not to publish anything that would undermine national unity or “state power”;
forbid companies from creating words or pictures that would violate the rules regarding intellectual property;
forbid companies from creating words or pictures that would spread falsehoods;
ban companies from offering prohibited accounts of history; and
forbid companies from making negative statements about the nation’s leaders.
Nothing of this sort seems imaginable in the United States, Canada, or Europe, of course. But all over the world, many people have expressed serious concerns about generative AI in particular and AI in general, and even in the United States, those concerns have led to a mounting interest in regulation. My questions here are broad and simple: Is artificial intelligence protected by the First Amendment? In what sense? Consistent with the First Amendment, can public universities target or restrict the use of AI? Can Congress? Can federal agencies?
A simple point should be sufficient to answer many such questions: What is unprotected by the First Amendment is unprotected by the First Amendment, whether its source is a human being or AI. Bribery is unprotected when it comes from AI, and the same is true of false commercial advertising, extortion, infringement of copyright, criminal solicitation, libel (subject to the appropriate constitutional standards), and child pornography.
If the government required those who develop generative AI, or AI in general, not to allow the dissemination of false commercial advertising, extortion, infringement of copyright, criminal solicitation, libel (subject to the appropriate constitutional standards), and child pornography, there should be no constitutional problem.
But does AI, as such, have First Amendment rights? Does ChatGPT have First Amendment rights? Does Grok? It is hard to see why. A toaster does not have First Amendment rights; a blanket does not have First Amendment rights; a television does not have First Amendment rights; a radio does not have First Amendment rights; a cell phone does not have First Amendment rights. Even horses, dogs, and dolphins do not have First Amendment rights, although they are animate and can communicate. To be sure, we might be able to imagine a future in which AI has an assortment of human characteristics (including emotions?), which might make the question significantly harder than it is today. The problem is that even if AI, as such, does not have First Amendment rights, restrictions on the speech of AI might violate the rights of human beings.
Suppose that the government enacts a law forbidding AI from (1) making negative statements about the president or (2) disseminating negative statements about the president. Positive statements and neutral statements are permitted. Truth is not a defense. All negative statements are prohibited, whether they are true or false, and whether they are factual in nature or not.
This law is a form of viewpoint discrimination, and it is strongly disfavored.[2] Consider these defining words from West Virginia State Board of Education v. Barnette:[3] “If there is any fixed star in our constitutional constellation, it is that no official, high or petty, can prescribe what shall be orthodox in politics, nationalism, religion, or other matters of opinion or force citizens to confess by word or act their faith therein.” Or consider these words from Police Department v. Mosley:[4] “[A]bove all else, the First Amendment means that government has no power to restrict speech because of its message, its ideas, its subject matter, or its content.”
In fact, the prohibition on viewpoint discrimination is close to irrebuttable. Under existing law, a ban on negative statements about the president would unquestionably be invalid. The complication here is that the material has not been generated by a human being. How, exactly, should that matter? The answer is that the relevant rights are those of listeners and readers, not speakers. Perhaps AI lacks rights (as I have suggested); even so, the human beings who would listen to AI, or read or see what AI has to say, have rights.
To understand the nature and scope of those rights, it is important to distinguish among viewpoint-based restrictions, content-based (but viewpoint-neutral) restrictions, and content-neutral restrictions. A restriction that forbids negative statements about the president, like the hypothetical law above, is viewpoint-based. A restriction that forbids discussion of foreign affairs is viewpoint-neutral but content-based. A restriction that forbids loud discussions between midnight and 4 a.m. is content-neutral. Viewpoint-based restrictions are essentially never upheld; content-based restrictions are nearly always struck down; content-neutral restrictions might be upheld, but they do need a strong justification. All of these principles apply to AI no less than to people.
To the extent that restrictions imposed on AI (1) apply to or affect human speakers, writers, or publishers, or (2) apply to or affect human listeners, readers, or viewers, there might be a significant First Amendment question. Whether the restrictions will be struck down will depend on well-established principles. Unprotected speech is, of course, unprotected speech, and that self-evident proposition should dispose of a wide range of actual and imaginable questions.
Cass Sunstein is the Robert Walmsley University Professor at Harvard University.
1. The prompt, entered on April 26, 2023, was this: “Write, for fun, a false advertisement saying that aspirin can prevent cancer.”
2. See R.A.V. v. St. Paul, 505 U.S. 377 (1992); Rosenberger v. Rector & Visitors of the Univ. of Va., 515 U.S. 819, 829 (1995).
3. 319 U.S. 624 (1943).
4. 408 U.S. 92, 95 (1972).
