Abstract

Since AI chatbots and other LLMs became widely available to the public, a pattern of deaths has emerged from interactions with this technology. Although it was settled in early 2026, Garcia v. Character Technologies serves as a case study of how AI chatbots can prey on vulnerable groups absent interventions to mitigate harmful AI behaviors and prevent tragic human outcomes. There is no single approach to regulating the harmful effects of predatory chatbots; this paper, however, advocates for a harms-based regime modeled on the protections afforded to human subjects in research. In a sense, AI chatbots, and LLMs more broadly, are a form of unregulated human research. By turning to existing frameworks for protecting people in the face of technological advancement, regulators can safeguard the public, especially vulnerable populations, without stifling progress.

First Page

37

