Sunday, September 7, 2025

ChatGPT boss Altman sparks fury after hinting ‘dead internet theory’ may be true

Altman’s remark fuels backlash as critics blame him for the bot-filled web he now warns about

Sam Altman, the chief executive of OpenAI, has set off a storm of controversy after suggesting there might be truth to the so-called “dead internet theory.”

The theory, often dismissed as a conspiracy, claims that much of the internet’s activity is no longer organic but instead generated by bots, algorithms and automated systems. For years, critics mocked it as paranoia. But with the rise of large language models (LLMs) like ChatGPT, its claims are gaining new traction.

Altman, whose company launched ChatGPT in late 2022 and ignited the current AI boom, admitted in a post on X that his view had shifted. “I never took the dead internet theory that seriously,” he wrote, “but it seems like there are really a lot of LLM-run Twitter accounts now.”

LLMs are the same type of technology that powers OpenAI’s ChatGPT and rivals like Anthropic’s Claude. They can rapidly generate text indistinguishable from human writing, and are increasingly being used—legitimately and maliciously—on social media.

Altman’s remark quickly drew fire from users who accused him of fuelling the very trend he was lamenting. Critics pointed out that OpenAI’s decision to release ChatGPT to the public had normalised AI-generated content, making it easier for spammers, propagandists, and scammers to flood platforms with automated posts.

“Isn’t this your fault?” one user asked bluntly, echoing a wave of similar reactions. Another wrote: “You made the monster. Now you’re shocked it’s out of control?”

The backlash underscores a growing tension around Altman’s dual roles: as the head of the company behind the most widely used generative AI system, and as a public voice warning about the dangers of an AI-dominated digital space.


His comments also prompted speculation about their connection to another of his ventures—the World Network, formerly known as Worldcoin. That project, founded in 2019, aims to create a global identity system based on biometric scans of users’ eyes. Proponents claim such technology could help distinguish humans from bots online, tackling the very problem Altman appeared to acknowledge in his tweet.

Sceptics, however, see irony in Altman’s warning about an internet overwhelmed by bots while simultaneously promoting a controversial identity system that critics view as dystopian.

The “dead internet theory” itself dates back more than a decade, but has resurfaced with force in the age of generative AI. Its central claim—that much of what people consume online is created by machines rather than humans—was once ridiculed. But with recent studies showing that fake accounts powered by AI models are multiplying, even high-profile figures like Altman are conceding that the line between conspiracy and reality is blurring.

Social media platforms such as X have repeatedly struggled to curb bot activity, despite promises of stricter moderation. Failed attempts at crackdowns have left users questioning how much of the conversation online is real.

Altman’s admission highlights the paradox at the heart of the AI boom: technologies built to enhance human communication are increasingly making it harder to tell human voices apart from machine ones.

Whether his comment was a casual observation or a veiled warning, it has triggered a reckoning—about responsibility, accountability, and the very future of the internet. For Altman, the backlash shows how difficult it is to play both innovator and watchdog in a digital world that may already be shifting beyond human control.
