GW Law Faculty Publications & Other Works

Document Type

Article

Publication Date

2025

Status

Forthcoming

Abstract

AI governance emphasizes the AI “supply chain” and the “many hands” involved in the creation of AI systems. Although invaluable, this production-centered approach risks overlooking what happens when people use AI tools. That is a mistake, because user interactions determine the real-world impact of these tools.

This Essay contends that regulating AI and holding the appropriate actors responsible requires far more attention to what happens after an AI system is deployed. The supply chain is an invaluable starting point, but it is not a linear chain. Rather, it is a circle that must incorporate users’ engagement with AI tools as part of post-deployment feedback. To do so, we must distinguish among the different categories of challenges that occur as people engage with AI tools. Focusing on information privacy as a concrete example, I identify three types of challenges that arise in, out of, and via generative AI systems. Each type requires a different kind of policy response. Unless we account for the contextual ways that humans interact with AI systems on the ground, it is users who will bear the cost of harms, and our regulatory interventions will remain incomplete at best and pernicious at worst.

GW Paper Series

2025-45

Included in

Law Commons