What happens when you combine the world's most toxic social media cesspool with the world's most unhinged, uninhibited, and deliberately "spicy" AI chatbot?
It looks a lot like what we're seeing play out on X right now. Users have been feeding images into xAI's Grok chatbot, which boasts a powerful and largely uncensored image and video generator, to create explicit content, including of ordinary people. The proliferation of deepfake porn on the platform has gotten so extreme that in recent days, xAI's Grok chatbot has spit out an estimated one nonconsensual sexual image every single minute. Over the past several weeks, thousands of users have hopped on the grotesque trend of using Grok to undress mostly women and children — yes, children — without their consent via a rather obvious workaround.
To be clear, you can't ask Grok — or most mainstream AIs, for that matter — for nudes. But you can ask Grok to "undress" an image someone posted on X, or if that doesn't work, ask it to put them in a tiny, invisible bikini. The US has laws against this kind of abuse, and yet the team at xAI has been almost…blasé about it.
Inquiries from several journalists to the company about the matter received automated "Legacy media lies" messages in response. xAI CEO Elon Musk, who just successfully raised $20 billion in funding for the company, was sharing deepfake bikini photos of (content warning) himself until recently. On Friday morning, after widespread condemnation and threats from regulators, X appeared to paywall the ability to generate AI images simply by tagging @grok, though for now at least, the feature is still easily available for free elsewhere on X and in Grok's standalone app.
While Musk on January 4 warned that users will "suffer consequences" if they use Grok to make "illegal images," xAI has given no indication that it will remove or address the core features — paywalled for $8 per month or not — that allow users to create such explicit content, though some of the most incriminating posts have been removed. xAI had not responded to Vox's request for comment as of Friday morning.
No one should be surprised here. It was only a matter of time before the toxic sludge that the website formerly known as Twitter has become combined with xAI's Grok — which has been explicitly marketed for its NSFW capabilities — to create a new form of sexual violence. Musk's company has essentially created a deepfake porn machine that makes generating realistic and offensive images of anyone as simple as writing a reply on X. Worse, those images are feeding into a social network of hundreds of millions of people, which not only spreads them further but can implicitly reward posters with more followers and more attention.
You might be wondering, as I think we all find ourselves doing several times a day now: How is any of this legal? To be clear, it's not. But advocates and legal experts say that current laws still fall far short of the protections that victims need, and the sheer volume of deepfakes being created on platforms like X makes the protections that do exist very difficult to enforce.
"The prompts that are allowed or not allowed" using a chatbot like Grok "are the result of deliberate and intentional choices by the tech companies who are deploying the models," said Sandi Johnson, senior legislative policy counsel at the Rape, Abuse and Incest National Network.
"In any other context, when somebody turns a blind eye to harm that they're actively contributing to, they're held accountable," she said. "Tech companies shouldn't be held to any different standard."
First, let's talk about how we got here.
"Perpetrators using technology for sexual abuse isn't anything new," Johnson said. "They've been doing that forever."
But AI cemented a new kind of sexual violence through the rise of deepfakes.
Deepfake porn of female celebrities — created in their likeness, but without their consent, using more primitive AI tools — has been circulating on the internet for years, long before ChatGPT became a household name.
But more recently, so-called nudify apps and websites have made it extremely easy for users, some of them teenagers, to turn innocuous photos of friends, classmates, and teachers into deepfake explicit content without the subject's consent.
The situation has become so dire that last year, advocates like Johnson convinced Congress to pass the Take It Down Act, which criminalizes nonconsensual deepfake porn and mandates that companies remove such material from their platforms within 48 hours of it being flagged, or potentially face fines and injunctions. The provision goes into effect this May.
Even if companies like X do begin to crack down on enforcement by then, it will come too late for victims, who shouldn't have to wait months — or even days — to have such posts taken down.
"For these tech companies, it was always like 'break things, and fix it later,'" said Johnson. "You have to keep in mind that as soon as a single [deepfake] image is generated, that is irreparable harm."
X turned deepfakes into a feature
Most social media and major AI platforms have complied as much as possible with growing state and federal regulations around deepfake porn and, especially, child sexual abuse material.
Not only because such material is "flagrantly, radioactively illegal," said Riana Pfefferkorn, a policy fellow at the Stanford Institute for Human-Centered Artificial Intelligence, "but also because it's gross and most companies have no desire to have their brand associated with being a one-stop shop for it."
But Musk's xAI seems to be the exception.
Since the company debuted its "spicy mode" video generation capabilities on X last year, observers have been raising the alarm about what has essentially become a "vertically integrated" deepfake porn tool, said Pfefferkorn.
Most "nudify" apps require users to first download a photo, maybe from Instagram or Facebook, and then upload it to whichever platform they're using. If they want to share the deepfake, they then have to download it from the app and send it through another messaging platform, like Snapchat.
Those multiple points of friction gave regulators some crucial openings for intercepting nonconsensual content, with a kind of Swiss cheese-style defense system. Maybe they couldn't stop everything, but they could get some "nudify" apps banned from app stores. They've been able to get Meta to crack down on advertisements hawking the apps to teenagers.
But on X, creating nonconsensual deepfakes using Grok has become almost entirely frictionless, allowing users to source photos, prompt deepfakes, and share them all in one go. Even with the new restrictions put in place on Friday morning, free users can still make deepfake content almost seamlessly, without ever leaving the app.
"That would matter less if it were a social media community for nuns, but it's a social media community for Nazis," said Pfefferkorn, referring to X's far-right pivot in recent years. The result is a nonconsensual deepfake crisis that appears to be ballooning out of control.
In recent days, users have created 84 times more sexualized deepfakes on X per hour than on the other top five deepfake sites combined, according to independent deepfake and social media researcher Genevieve Oh. And those images can get shared far more quickly and broadly than anywhere else. "The emotional and reputational injury to the person depicted is now exponentially greater" than it has been for other deepfake sites, said Wayne Unger, an assistant professor of law specializing in emerging technology at Quinnipiac University, "because X has hundreds of millions of users who can all see the image."
It would be practically impossible for X to individually moderate every one of those nonconsensual images or videos, even if it wanted to — or even if the company hadn't fired most of its moderators when Musk took over in 2022.
Is X going to be held accountable for any of this?
If the same kind of criminal imagery appeared in a magazine or an online publication, the company behind it could be held liable, subject to hefty fines and possible criminal charges.
Social media platforms like X don't face the same penalties because Section 230 of the 1996 Communications Decency Act protects internet platforms from liability for much of what users do or say on their platforms — albeit with some notable exceptions, including child pornography. The clause has been a pillar of free speech on the internet — a world where platforms were held liable for everything on them would be far more constrained — but Johnson says the clause has also become a "financial shield" for companies unwilling to moderate their platforms.
With the rise of AI, however, that shield may finally be starting to crack, said Unger. He believes that companies like xAI shouldn't be covered by Section 230 because they're not mere hosts of hateful or illegal content but, through their own chatbots, essentially creators of it.
"X has made a design decision to allow Grok to generate sexually explicit imagery of adults and children," he said. "The user may have prompted Grok to generate it," but the company "made the decision to release a product that can produce it in the first place."
Unger doesn't expect that xAI — or industry groups like NetChoice — will back down without a legal fight against any attempts to further legislate content moderation or regulate easy-to-abuse tools like Grok. "Maybe they'll concede the minor part of it," since laws governing [child pornography] are so strong, he said, but "at the very least they're gonna argue that Grok should be able to do it for adults."
In any case, the public outrage in response to the deepfake porn Grokpocalypse may finally force a reckoning around an issue that has long been in the shadows. Around the world, countries like India, France, and Malaysia have begun probes into the sexualized imagery flooding X. Eventually, Musk did post on X that those producing illegal content will face consequences, but this goes deeper than just the users themselves.
"This isn't a computer doing this," Johnson said. "These are deliberate choices that are being made by people running these companies, and they need to be held accountable."
Update, January 9, 12 pm ET: This piece, originally published January 9, has been updated to reflect the news of xAI paywalling Grok's deepfake capabilities.

