A Norwegian man is demanding OpenAI be fined after ChatGPT falsely claimed he murdered his sons.
A Norwegian man is demanding severe penalties against OpenAI after ChatGPT falsely claimed he had murdered his own children and been sentenced to 21 years in prison. The shocking error, known as an AI “hallucination,” has raised concerns about false information generated by artificial intelligence and the harm it can cause.
Arve Hjalmar Holmen, who has never been accused of any crime, discovered the false claim when he asked ChatGPT “Who is Arve Hjalmar Holmen?” in August 2024. The chatbot responded with a chilling fabrication, alleging that he had killed his two sons, attempted to murder a third, and received Norway’s maximum prison sentence.
Holmen has since filed a complaint with Norway’s Data Protection Authority and is being represented by the digital rights group Noyb. He is demanding that OpenAI face significant fines for violating European data protection law, which requires personal data to be accurate.
AI’s Disturbing “Hallucination”
ChatGPT’s fabricated response went into disturbing detail, falsely claiming that Holmen’s sons had been found dead in a pond in 2020 and that his case had shocked the nation. The AI even invented news coverage of the supposed crime, making the falsehood seem even more credible.
“Some think that there is no smoke without fire,” Holmen said. “The fact that someone could read this output and believe it is true is what scares me the most.”
Holmen fears the AI-generated lie could ruin his reputation. OpenAI displays a disclaimer that ChatGPT “can make mistakes”, but Holmen and his lawyers argue that this is not enough.
“You can’t just spread false information and then hide behind a small disclaimer,” said Joakim Söderberg, a lawyer from Noyb.
OpenAI Responds—but Is It Enough?
In response to the controversy, OpenAI stated that Holmen’s case involved an older version of ChatGPT and that recent updates have improved its accuracy and fact-checking capabilities.
“We continue to research new ways to reduce hallucinations and improve the reliability of our models,” the company said.
However, Holmen’s lawyers argue that OpenAI still lacks transparency about how its AI generates false information. They claim OpenAI has ignored access requests, making it impossible to determine what data influenced ChatGPT’s fabricated story.
AI’s History of False Claims
Holmen’s case is not an isolated incident. AI hallucinations have plagued other major tech firms:
- Apple suspended its AI-powered news summary feature in the UK after it produced false headlines.
- Google’s AI Overviews, powered by its Gemini models, infamously suggested that geologists recommend eating one rock per day.
- Other chatbots have generated false medical advice, historical inaccuracies, and fake crime reports.
AI experts, including Professor Simone Stumpf from the University of Glasgow, warn that even developers struggle to understand why AI makes these errors. “These systems work like a black box, even to those who build them,” she explained.
What Happens Next?
Holmen’s case could set a landmark precedent for AI accountability. If the Norwegian Data Protection Authority rules against OpenAI, the company could face multimillion-euro fines and stricter regulation of AI-generated content.
For now, Holmen is left battling the potential damage to his reputation, all because an AI chatbot fabricated a crime that never happened.