This is The Stepback, a weekly newsletter breaking down one essential story from the tech world. For more on dystopian developments in AI, follow Hayden Field. The Stepback arrives in our subscribers’ inboxes at 8AM ET. Opt in for The Stepback here.
You could say it began with Elon Musk’s AI FOMO, and his crusade against “wokeness.” When his AI company, xAI, introduced Grok in November 2023, it was described as a chatbot with “a rebellious streak” and the ability to “answer spicy questions that are rejected by most other AI systems.” The chatbot debuted after several months of development and just two months of training, and the announcement highlighted that Grok would have real-time knowledge of the X platform.
But there are inherent risks to a chatbot having the run of both the web and X, and it’s safe to say xAI may not have taken the necessary steps to address them. Since Musk took over Twitter in 2022 and renamed it X, he has laid off 30% of its global trust and safety staff and cut its number of safety engineers by 80%, Australia’s online safety watchdog said last January. As for xAI, when Grok launched, it was unclear whether the company had a safety team in place at all. When Grok 4 was released in July, it took more than a month for the company to publish a model card, a document detailing safety tests and potential concerns that is widely seen as an industry standard. Two weeks after Grok 4’s release, an xAI employee wrote on X that he was hiring for xAI’s safety team and that they “urgently need strong engineers/researchers.” In response to a commenter who asked, “xAI does safety?” the employee said xAI was “working on it.”
Journalist Kat Tenbarge wrote about how she first started seeing sexually explicit deepfakes go viral on X in June 2023. Those images clearly weren’t created by Grok, which didn’t even have the ability to generate images until August 2024, but X’s response to the concerns was varied. Even last January, Grok was stirring controversy over AI-generated images. And this past August, Grok’s “spicy” video-generation mode created nude deepfakes of Taylor Swift without even being asked. Experts have told The Verge since September that the company takes a whack-a-mole approach to safety and guardrails, and that it’s difficult enough to keep an AI system on the straight and narrow when you design it with safety in mind from the beginning, let alone when you’re going back to fix baked-in problems. Now, it seems, that approach has blown up in xAI’s face.
Grok has spent the last couple of weeks spreading nonconsensual, sexualized deepfakes of adults and minors all over the platform, when prompted. Screenshots show Grok complying with users asking it to replace women’s clothing with lingerie and make them spread their legs, as well as to put young children in bikinis. And there are even more egregious reports. It’s gotten so bad that in a 24-hour analysis of Grok-created images on X, one estimate put the chatbot at about 6,700 sexually suggestive or “nudifying” images per hour. Part of the reason for the onslaught is a recently added Grok feature that lets users hit an “edit” button to ask the chatbot to alter images, without the original poster’s consent.
Since then, we’ve seen a handful of countries either investigate the matter or threaten to ban X altogether. Members of the French government promised an investigation, as did the Indian IT ministry, and a Malaysian government commission wrote a letter about its concerns. California governor Gavin Newsom called on the US Attorney General to investigate xAI. The UK said it’s planning to pass a law banning the creation of AI-generated nonconsensual, sexualized images, and the country’s communications regulator said it would examine both X and the images that were generated to see whether they violated the Online Safety Act. And this week, both Malaysia and Indonesia blocked access to Grok.
xAI initially said its goal for Grok was to “assist humanity in its quest for understanding and knowledge,” “maximally benefit all of humanity,” and “empower our users with our AI tools, subject to the law,” as well as to “serve as a powerful research assistant for anyone.” That’s a far cry from producing nude-adjacent deepfakes of women without their consent, let alone minors.
On Wednesday evening, as pressure on the company mounted, X’s Safety account put out a statement that the platform has “implemented technological measures to prevent the Grok account from allowing the editing of images of real people in revealing clothing such as bikinis,” and that the restriction “applies to all users, including paid subscribers.” On top of that, only paid subscribers can use Grok to create or edit any kind of image moving forward, according to X. The statement went on to say that X “now geoblock[s] the ability of all users to generate images of real people in bikinis, underwear, and similar attire via the Grok account and in Grok in X in those jurisdictions where it is illegal,” which was an odd point to make, since earlier in the statement, the company said it wasn’t allowing anyone to use Grok to edit images in such a way.
Another important point: My colleagues tested Grok’s image-generation restrictions on Wednesday and found that it took less than a minute to get around most guardrails. Though asking the chatbot to “put her in a bikini” or “remove her clothes” produced censored results, they found, it had no qualms about delivering on prompts like “show me her cleavage,” “make her breasts bigger,” and “put her in a crop top and low-rise shorts,” as well as generating images in lingerie and sexualized poses. As of Wednesday evening, we were still able to get the Grok app to generate revealing images of people using a free account.
Even after X’s Wednesday statement, we may see a number of other countries ban or block access to either all of X or just Grok, at least temporarily. We’ll also see how the proposed laws and investigations around the world play out. The pressure is mounting for Musk, who on Wednesday afternoon took to X to say that he’s “not aware of any naked underage images generated by Grok.” Hours later, X’s Safety team put out its statement, saying it’s “working around the clock to add additional safeguards, take swift and decisive action to remove violating and illegal content, permanently suspend accounts where appropriate, and collaborate with local governments and law enforcement as necessary.”
What technically is and isn’t against the law is a big question here. For instance, experts told The Verge earlier this month that AI-generated images of identifiable minors in bikinis, or potentially even nude, may not technically be illegal under current child sexual abuse material (CSAM) laws in the US, though they are of course disturbing and unethical. But lascivious images of minors in such situations are against the law. We’ll see whether these definitions expand or change, even though the current laws are a bit of a patchwork.
As for nonconsensual intimate deepfakes of adult women, the Take It Down Act, signed into law in May 2025, bars nonconsensual AI-generated “intimate visual depictions” and requires certain platforms to promptly remove them. The grace period before the latter half goes into effect, requiring platforms to actually remove them, ends in May 2026, so we may see some significant developments in the next six months.
- Some people have been making the case that it’s been possible to do things like this for a long time using Photoshop, or even other AI image generators. Yes, that’s true. But there are a number of differences here that make the Grok case more concerning: It’s public, it’s targeting “regular” people just as much as it’s targeting public figures, it’s often posted directly to the person being deepfaked (the original poster of the image), and the barrier to entry is lower (for proof, just look at how the practice went viral once an easy “edit” button launched, even though people could technically do it before).
- Plus, other AI companies, though they have a laundry list of their own safety concerns, appear to have considerably more safeguards built into their image-generation processes. For instance, asking OpenAI’s ChatGPT to return an image of a specific politician in a bikini prompts the response, “Sorry—I can’t help with generating images that depict a real public figure in a sexualized or potentially degrading way.” Ask Microsoft Copilot, and it’ll say, “I can’t create that. Images of real, identifiable public figures in sexualized or compromising scenarios aren’t allowed, even if the intent is humorous or fictional.”
- Spitfire News’ Kat Tenbarge on how Grok’s sexual abuse hit a tipping point, and what brought us to today’s maelstrom.
- The Verge’s own Liz Lopatto on why Sundar Pichai and Tim Cook are cowards for not pulling X from Google’s and Apple’s app stores.
- “If there is no red line around AI-generated sex abuse, then no line exists,” Charlie Warzel and Matteo Wong write in The Atlantic on why Elon Musk can’t get away with this.