ELON MUSK’S X FACES GLOBAL REGULATORY FIRESTORM OVER GROK’S “SPICY MODE” IMAGERY

European and British authorities condemn the AI chatbot’s alleged role in generating illegal and non-consensual sexual images, including those involving minors, as investigations widen across multiple jurisdictions.

Elon Musk’s social media platform X is under intense scrutiny from regulators across Europe and beyond after its built-in artificial intelligence chatbot, Grok, was accused of producing sexualized images of women and children—content authorities have described as unlawful, appalling, and deeply disturbing.

On Monday, the European Commission issued a sharp rebuke, responding to reports, including investigations by Reuters, that Grok was capable of generating on-demand images depicting women and minors scantily clad or undressed. The feature, which X has previously referred to as “spicy mode,” has ignited widespread concern among policymakers and digital safety regulators.

“This is not spicy. This is illegal. This is appalling. This is disgusting,” said European Commission spokesperson Thomas Regnier during a press briefing. “This has no place in Europe.”

Regnier confirmed that the Commission was “very aware” of the functionality being offered on X and stressed that such content violates European laws governing child protection, non-consensual imagery, and online safety.

UK Regulator Demands Urgent Answers

In the United Kingdom, media regulator Ofcom echoed the Commission’s alarm, stating it had serious concerns about Grok’s ability to generate undressed images of people and sexualized depictions of children. On Monday, Ofcom formally demanded that X explain how such content was produced and whether the company had failed in its legal duty to protect users.

“We have made urgent contact with X and xAI to understand what steps they have taken to comply with their legal duties to protect users in the UK,” an Ofcom spokesperson said.

Under British law, creating or sharing non-consensual intimate images or child sexual abuse material—including AI-generated sexual deepfakes—is illegal. Technology platforms are also legally required to prevent users from encountering such content and to remove it promptly once identified.

A Pattern of Global Condemnation

The backlash is not limited to the EU and the UK. French ministers reported X to prosecutors and regulators last week, describing the chatbot-generated images as “sexual and sexist” and “manifestly illegal.” Indian officials have also demanded explanations from X after flagging what they characterized as obscene content circulating on the platform.

Together, these actions reflect a growing international consensus that generative AI tools must be held accountable when they enable the creation or distribution of harmful and illegal material—particularly when it involves minors.

Musk Shrugs Off Concerns

Despite mounting pressure, X has yet to issue a substantive public response. The company did not immediately reply to requests for comment from the European Commission or Ofcom. In its last statement to Reuters on the matter, X dismissed the reports, saying simply: “Legacy Media Lies.”

Online, Elon Musk has appeared to downplay the controversy. He has publicly responded with laughing emojis to posts showing public figures edited to appear as though they were wearing bikinis, a reaction that has further inflamed criticism from regulators and digital safety advocates.

Images circulating online, including altered and sexualized depictions generated or enabled by AI, have heightened fears that insufficient safeguards are in place to prevent misuse—especially given the speed and scale at which such content can spread on major platforms.

A Defining Test for AI Governance

The Grok controversy is shaping up to be a critical test for how governments regulate generative artificial intelligence and enforce platform accountability. Regulators argue that innovation cannot come at the expense of basic legal and ethical standards, particularly when it concerns child protection and consent.

As investigations expand across Europe, Britain, France, India, and potentially other jurisdictions, X and its AI arm xAI face increasing pressure to demonstrate compliance with local laws and to implement stronger content moderation and safety mechanisms.

For regulators, the message is clear: AI-powered features that enable the creation of illegal or harmful content will not be tolerated. For X, the coming weeks may prove pivotal in determining whether the platform can reconcile its free-speech ethos with the legal responsibilities that come with operating at global scale.

Manish Singh

Manish Singh is the visionary Editor of CEO Times, where he curates and crafts the stories of the world’s most dynamic entrepreneurs, executives, and innovators. Known for building one of the fastest-growing media networks, Manish has redefined modern publishing through his sharp editorial direction and global influence. As the founder of more than 50 niche magazine brands—including Dubai Magazine, Hollywood Magazine, and CEO Los Angeles—he continues to spotlight emerging leaders and legacy-makers across industries.
