Can AI Standards Have Politics?

Abstract

How to govern a technology like artificial intelligence (AI)? When it comes to designing and deploying fair, ethical, and safe AI systems, standards are a tempting answer. By establishing the best way of doing something, standards might seem to provide plug-and-play guardrails for AI systems that avoid the costs of formal legal intervention. AI standards are all the more tantalizing because they seem to provide a neutral, objective way to proceed in a normatively contested space. But this vision of AI standards blinks a practical reality: Standards do not appear out of thin air. They are constructed. This Essay analyzes three concrete examples from the European Union, China, and the United States to underscore how standards are neither objective nor neutral. It thereby exposes an inconvenient truth for AI governance: Standards have politics. Yet recognizing that standards are crafted by actors who make normative choices in particular institutional contexts, subject to political and economic incentives and constraints, may undermine the functional utility of standards as soft law regulatory instruments that can set forth a single, best formula to disseminate across contexts.

Full Essay PDF: https://www.uclalawreview.org/wp-content/uploads/securepdfs/2024/05/07-Solow-Niederman-No-Bleed.pdf

About the Author

Associate Professor, George Washington University Law School. Thank you to BJ Ard, Ryan Calo, Julie Cohen, Rebecca Crootof, David Freeman Engstrom, and Richard Re for helpful comments and suggestions. I am grateful to the UCLA Law Review Discourse student editors, especially Annabelle Spezia-Lindner and Kitty Young, for their feedback and assistance in preparing this piece for publication. This Essay reflects developments through April 2023, when it was substantively finalized for publication. Any remaining errors or omissions are my own.
