The EU has now singled out X – and this time it's not about politics, misinformation, or some nebulous free-speech argument.
It has to do with porn: specifically, the question of the sexually explicit images that can be created using Grok, the AI tied to Elon Musk's platform, and whether some of those have been used to make "digital undressing" content.
That's the kind of thing that makes your stomach clench when you read it, because the harm isn't abstract. It's targeted, personal, and in some cases may be illegal.
And note the mood, too. This isn't the EU being melodramatic. This is the EU saying, "Enough."
Regulators are concerned about how fast this kind of content spreads across the web, and about the plain fact that once something like an explicit deepfake is out there, it isn't going to vanish.
The damage is done, even if the platform takes it down, even if the account is terminated.
Now here's the kicker. People keep acting surprised when AI gets put to use for the worst things. But, I mean, let's face it – are we really surprised?
You hand a slick image tool to millions of people and the internet does what it always does: grabs the shiny new toy, drags it straight into the gutter, and looks for ways to hurt someone with it.
That's why this investigation isn't merely "EU mad at a chatbot." It's happening under the Digital Services Act, which essentially requires large platforms to behave like responsible adults.
It should always be possible to tell whether X took a reasonable approach to risk assessment and put adequate safety guardrails in place. Not after the damage. Before.
X has apparently taken some measures in response, such as paying more attention to certain features and tightening controls (by, for example, putting some image-generation functions behind a paywall).
That's… something, I guess. But if you're the person whose image was altered and circulated, it probably doesn't feel like a win. It's like locking the front door only after your house has been robbed.
And here's another uncomfortable truth: platforms today don't simply "host content." They amplify it. They recommend it. They push it into feeds.
That's why the EU isn't only concerned about explicit images generated with Grok – it's asking whether X's systems made that content travel faster and further than it ever should have.
What's frightening is that this is about to become the new normal.
AI-generated images aren't going anywhere. In fact, the technology is only getting better, faster, cheaper, and more realistic.
Which means the ugly uses are going to multiply as well. Today it's Grok. Tomorrow it's another model, another platform, another crop of victims.
And it's not just celebrities anymore; it's classmates, colleagues, ex-partners, and random women on the internet who posted one selfie in 2011 and still rue the day they ever existed online.
That's why the EU inquiry is important. And not because it's fun to watch big tech sweat (though, okay, that part is satisfying).
It matters because this is one of the first high-profile tests of whether governments can actually compel platforms to treat AI harm as a real emergency and not just a side quest.
And if X fails this test? Expect regulators to get more aggressive across the board – because the next platform in their crosshairs may not get so many chances.