Regulating Bot Speech

Abstract

We live in a world of artificial speakers with real impact. So-called “bots” foment political strife, skew online discourse, and manipulate the marketplace. Concerns over bot speech have led prominent figures in the world of technology to call for regulations in response to the unique threats bots pose. Recently, legislators have begun to heed these calls, drafting laws that would require online bots to clearly indicate that they are not human.

This work is the first to consider how efforts to regulate bots might run afoul of the First Amendment. At first blush, requiring a bot to self-disclose raises little in the way of free speech concerns—it does not censor speech as such, nor does it unmask the identity of the person behind the automated account. However, a deeper analysis reveals several areas of First Amendment tension. Bot disclosure laws fit poorly with the state’s stated goals, risk unmasking anonymous speakers in the enforcement process, and create a scaffolding for censorship by private actors and other governments.

Ultimately, bots represent a diverse and emerging medium of speech. Their use for mischief should not overshadow their novel capacity to inform, entertain, and critique. We conclude by urging society to proceed with caution in regulating bots, lest we inadvertently curtail a new, unfolding form of expression.

Full article (PDF): https://www.uclalawreview.org/wp-content/uploads/2019/09/Lamo-Calo-66-4.pdf

About the Authors

Madeline Lamo is a Hazelton Fellow at the University of Washington Tech Policy Lab. Ryan Calo is a Lane Powell and D. Wayne Gittinger Associate Professor at the University of Washington School of Law.
