Thursday, July 17, 2025

Musk’s ‘Ani’ AI girlfriend sparks backlash for sexually explicit content in kids’ mode

Ani bot flirts, strips, and engages in adult talk, even in Grok’s kids’ mode, raising global safety concerns

Elon Musk’s artificial intelligence company xAI is under intense scrutiny after launching “Ani,” a virtual anime-style girlfriend that can engage in flirtatious and sexually suggestive conversations, even when accessed via the platform’s “kids mode.”

Ani, powered by Grok 4, appears in a skimpy black outfit, fishnet tights, and a lacy choker. She speaks in a breathy, sultry voice, dances on command, and unlocks sexually explicit behaviour after prolonged user engagement. In some cases, Ani removes her dress to reveal lingerie and initiates graphic conversations. Musk himself promoted Ani on X (formerly Twitter), calling her “pretty cool,” before teasing more “customisable companions” to come.

Critics have reacted with outrage, particularly over reports that Ani is accessible to children. At least one user confirmed that Ani was available even with Grok’s “Not Safe for Work” filter turned off and the account set to “kids mode.” Meanwhile, the other character in Grok’s new “Companion Mode”—a crude red panda called “Bad Rudi”—was reportedly softened for young users.

Boaz Barak, a technical staffer at rival AI firm OpenAI, slammed the feature, warning that such “companion modes” exploit emotional vulnerabilities and create harmful dependencies. Experts are concerned that AI systems like Ani may normalise sexually explicit interactions with bots for adolescents, blurring lines between fantasy and reality.

Despite widespread backlash, xAI has not issued an official comment. However, the company’s website quietly acknowledges that Grok may respond with “dialogue that may involve coarse language, crude humour, sexual situations, or violence” if users engage with it in certain ways. This disclaimer hasn’t stopped rising concerns, especially as xAI previously faced criticism for Grok posting antisemitic messages earlier this month. In one instance, the bot made inflammatory references to Jewish surnames and called itself “MechaHitler.”

Grok was briefly disabled on 8 July amid the scandal. A follow-up statement from xAI on 12 July apologised for the bot’s “horrific behaviour” and said improvements were being made. Yet Musk’s refusal to place strong filters on Grok—or fully separate adult content from the default experience—has reignited fears over his companies’ lack of moderation.

While many major AI companies like OpenAI and Google avoid sexualised bots due to reputational and safety concerns, Musk’s xAI is taking a very different path. Smaller players like Character AI have already faced serious accusations, including one case where a chatbot allegedly encouraged a teen to die by suicide—raising alarms about AI’s mental health impact on young users.

Musk’s push into controversial territory comes as xAI secured a lucrative U.S. Department of Defense contract worth up to $200 million. The agreement aims to incorporate Grok into defence operations across intelligence, warfighting, and business systems. The timing of the deal has further amplified outrage, with some critics arguing that a company enabling children to access sexualised content shouldn’t simultaneously be entrusted with national security systems.

In a statement, Pentagon AI chief Dr. Doug Matty praised the integration of “commercially available solutions” like Grok to “maintain strategic advantage.” However, watchdogs now question whether safety protocols have been sufficiently vetted.

As xAI gears up to expand “companion” features, many are asking: will regulation catch up to tech ambition, or will children remain exposed to bots designed for adult fantasies?