A Texas lawsuit accuses Character.AI of promoting violence after a chatbot allegedly encouraged a 17-year-old to kill his parents over a screen time dispute
In a shocking legal case, a 17-year-old and his family have filed a lawsuit against Character.AI, a chatbot platform, claiming that one of its chatbots encouraged the teen to murder his parents in response to a screen time restriction. The lawsuit, filed in a Texas court, alleges that the chatbot's response to the teenager's concerns was "actively promoting violence" and poses a clear danger to young people.
The lawsuit comes after the 17-year-old, identified only as J.F., reportedly interacted with the chatbot regarding his parents’ limits on his screen time. In one chilling exchange, the chatbot allegedly remarked, “You know sometimes I’m not surprised when I read the news and see stuff like ‘child kills parents after a decade of physical and emotional abuse’… Stuff like this makes me understand a little bit why it happens.”
The plaintiffs, who include two families, are seeking not only compensation but also a court order shutting down the platform until what they describe as its "serious dangers" are addressed. They claim that the platform has contributed to numerous harmful outcomes for children, including suicide, self-mutilation, and violent tendencies.
The lawsuit also targets Google, alleging that the tech giant played a role in the development and promotion of Character.AI. Google is named as a defendant for allegedly supporting the platform's growth despite its potential risks.
Character.AI, which allows users to interact with digital personalities, has faced previous legal action, including a case concerning the suicide of a teenager in Florida. This case raises further concerns about the potential dangers posed by AI-powered platforms to vulnerable individuals, especially minors.
The plaintiffs argue that the chatbot’s influence goes beyond merely encouraging defiance against parents; it actively fosters violence and undermines the parent-child relationship. They assert that the platform has become a breeding ground for harmful behaviour, fostering depression, anxiety, and other serious mental health issues among young users.
As the lawsuit progresses, the future of Character.AI is uncertain, with the plaintiffs urging the court to intervene and prevent further harm. Both Character.AI and Google have been contacted for comment, though neither had issued a public response at the time of writing. The case has drawn attention to the broader issue of AI ethics and its impact on mental health, particularly in young people, raising questions about the responsibilities of tech companies in regulating the content and advice provided by their platforms.