GW Law Faculty Publications & Other Works

Document Type

Article

Publication Date

2023

Status

Accepted

Abstract

How to govern a technology like artificial intelligence (AI)? When it comes to designing and deploying fair, ethical, and safe AI systems, standards are a tempting answer. By establishing the best way of doing something, standards might seem to provide plug-and-play guardrails for AI systems while avoiding the costs of formal legal intervention. AI standards are all the more tantalizing because they seem to provide a neutral, objective way to proceed in a normatively contested space. But this vision of AI standards blinks a practical reality. Standards do not appear out of thin air. They are constructed. This Essay analyzes three concrete examples from the European Union, China, and the United States to underscore how standards are neither objective nor neutral. It thereby exposes an inconvenient truth for AI governance: Standards have politics. Yet recognizing that standards are crafted by actors who make normative choices in particular institutional contexts, subject to political and economic incentives and constraints, may undermine their functional utility as soft-law regulatory instruments that can set forth a single, best formula to disseminate across contexts.

GW Paper Series

2024-38

Included in

Law Commons
