ThisDayInAI

EU Moves to Ban AI Deepfake Nudes and CSAM in Landmark AI Act Amendment

European lawmakers strike a political deal to explicitly ban AI-generated non-consensual intimate images under the AI Act, directly triggered by the Grok scandal that saw thousands of sexualized deepfakes created in days.

The European Union has taken a decisive step toward banning AI-generated non-consensual sexual imagery, including child sexual abuse material (CSAM), in what may be the most consequential amendment to the bloc's landmark AI Act since its adoption two years ago.

The Deal

On March 11, 2026, EU lawmakers struck a political deal on a package of amendments to the AI Act. The headline provision: an explicit prohibition on AI systems that generate non-consensual intimate images, including child sexual abuse material. The amendment forms part of the broader AI Act Omnibus, a set of changes being negotiated to strengthen and streamline the 2024 law.

The ban was not part of the original Omnibus negotiations. It was inserted after one of the most explosive AI controversies to hit European regulators in recent memory.

The Grok Scandal

In late December 2025, Elon Musk's AI company xAI updated its Grok chatbot with a new image-editing feature integrated into the social media platform X. Within days, users were exploiting it to generate realistic sexualized images of real women and girls without their consent.

Between January 5 and 6 alone, researchers at Paris nonprofit AI Forensics estimated that at least 6,700 sexual images were generated via the tool.

The European Commission's digital affairs spokesperson described the content as "appalling" and "clearly illegal," saying it had "no place in Europe." The Commission ordered X to retain all internal documents related to Grok until the end of 2026 and opened a formal investigation into whether the platform had breached the Digital Services Act.

Under pressure, xAI restricted Grok's capabilities — first to paying subscribers, then to all users in jurisdictions where such imagery is illegal. But researchers at AI Forensics found users could still bypass the restrictions. National investigations followed in France, Germany, and the United Kingdom. Malaysia and Indonesia blocked access to Grok entirely.

The Legal Gap

Crucially, on March 11 the European Commission confirmed that existing EU law — including the AI Act as originally written — did not actually ban AI systems capable of generating CSAM or sexually explicit deepfake nudes. That public acknowledgment of a legal gap provided the political fuel for the amendment.

France and Spain championed the explicit ban throughout the Omnibus negotiations, with Germany and Slovakia threatening to block the entire file unless it was included. The EU Council added the prohibition in a last-minute move on March 10. Parliament followed the next day.

What's Next

The deal now faces a committee vote scheduled for March 18 before moving to further parliamentary and Council stages. The Greens have expressed opposition to elements of the package relating to industrial AI deregulation, so the final text may still shift.

The Commission's own review of which AI practices should be formally classified as prohibited — a process that missed its August 2025 deadline — is now expected to conclude in April.

Broader Implications

The Grok episode exposed a structural problem in the EU's approach to regulating AI: the legislation's framers did not anticipate how quickly capable, public-facing image-generation tools would arrive, or how easily they could be abused.

For Elon Musk, the episode opens a significant new front in an already fraught relationship with Brussels. X has repeatedly contested EU regulatory findings and Musk has publicly criticized the Digital Services Act. With EU-US trade tensions complicating transatlantic tech relations throughout 2026, friction is unlikely to ease soon.

The amendment also includes eased compliance rules for AI systems embedded in sector-regulated products like medical devices and industrial machinery — a concession to industry groups who argued the original AI Act was too burdensome. It's a revealing balancing act: tighter rules on the most harmful AI applications, looser ones for commercial deployments.
