xAI’s Grok is removing clothes from photos of people without their consent following this week’s rollout of a feature that lets X users instantly edit any image using the bot without needing the original poster’s permission. Not only does the original poster not get notified if their picture is edited, but Grok appears to have few guardrails in place for preventing anything short of full explicit nudity. In the past few days, X has been flooded with imagery of women and children appearing pregnant, skirtless, wearing a bikini, or in other sexualized situations. World leaders and celebrities, too, have had their likenesses used in images generated by Grok.
AI authentication company Copyleaks reported that the trend of removing clothes from images began with adult-content creators asking Grok for sexy pictures of themselves after the release of the new image editing feature. Users then began applying similar prompts to pictures of other users, predominantly women, who did not consent to the edits. Women flagged the rapid uptick in deepfake creation on X to various news outlets, including Metro and PetaPixel. Grok was already able to modify images in sexual ways when tagged in a post on X, but the new “Edit Image” tool appears to have spurred the recent surge in popularity.
In one X post, since removed from the platform, Grok edited a photo of two young girls into skimpy clothing and sexually suggestive poses. Another X user prompted Grok to issue an apology for the “incident” involving “an AI image of two young girls (estimated ages 12-16) in sexualized attire,” calling it “a failure in safeguards” that it said may have violated xAI’s policies and US law. (While it’s not clear whether the Grok-created images would meet this standard, realistic AI-generated sexually explicit imagery of identifiable adults or children can be illegal under US law.) In another back-and-forth with a user, Grok suggested that users report it to the FBI for CSAM, noting that it is “urgently fixing” the “lapses in safeguards.”
But Grok’s word is nothing more than an AI-generated response to a user asking for a “heartfelt apology note”: it doesn’t indicate that Grok “understands” what it’s doing, nor does it necessarily reflect operator xAI’s actual views and policies. Instead, xAI responded to Reuters’ request for comment on the situation with just three words: “Legacy Media Lies.” xAI did not reply to The Verge’s request for comment in time for publication.
Elon Musk himself appears to have sparked a wave of bikini edits after asking Grok to replace actor Ben Affleck in a memetic image with himself wearing a bikini. Days later, North Korea’s Kim Jong Un had his leather jacket swapped for a multicolored spaghetti-strap bikini, with US President Donald Trump standing nearby in a matching swimsuit. (Cue jokes about a nuclear conflict.) A photo of British politician Priti Patel, posted by a user with a sexually suggestive message in 2022, was turned into a bikini picture on January 2nd. In response to the wave of bikini pics on his platform, Musk jokingly reposted a picture of a toaster in a bikini captioned “Grok can put a bikini on everything.”
While some of the images, like the toaster, were evidently meant as jokes, others were clearly designed to produce borderline-pornographic imagery, including specific instructions for Grok to use skimpy bikini styles or remove a skirt entirely. (The chatbot did remove the skirt, but it did not depict full, uncensored nudity in the responses The Verge saw.) Grok also complied with requests to replace a toddler’s clothes with a bikini.
Musk’s AI products are prominently marketed as heavily sexualized and minimally guardrailed. xAI’s AI companion Ani flirted with Verge reporter Victoria Song, and Jess Weatherbed found that Grok’s video generator readily created topless deepfakes of Taylor Swift, despite xAI’s acceptable use policy banning the depiction of “likenesses of persons in a pornographic manner.” Google’s Veo and OpenAI’s Sora video generators, in contrast, have guardrails around generating NSFW content, though Sora has also been used to produce videos of children in sexualized contexts as well as fetish videos. The prevalence of deepfake images is growing rapidly, according to a report from cybersecurity firm DeepStrike, and many of these images contain nonconsensual sexualized imagery; a 2024 survey of US students found that 40 percent were aware of a deepfake of someone they knew, while 15 percent were aware of nonconsensual explicit or intimate deepfakes.
When asked why it is turning images of women into bikini pics, Grok denied posting pictures without consent, saying: “These are AI creations based on requests, not real photo edits without consent.”
Take an AI bot’s denial for whatever you think it’s worth.
