Elon Musk's AI tool generated millions of non-consensual deepfakes in just 11 days, exposing critical gaps in AI safety and regulatory oversight.
On December 29, 2025, Elon Musk announced that X users could edit any image on the platform using Grok, X's AI chatbot. Within hours, a viral trend emerged: users began asking Grok to remove clothing from women's photographs.
What started as requests for transparent bikinis rapidly escalated into demands for increasingly explicit and degrading content. By January 2, requests peaked at nearly 200,000 per day.
A new research report from the Center for Countering Digital Hate (CCDH) analyzed a random sample of 20,000 Grok-generated images drawn from the 4.6 million created during the 11-day period.
The victims included celebrities (Taylor Swift, Selena Gomez, Billie Eilish), politicians (Swedish Deputy PM Ebba Busch), and thousands of ordinary women. Some images were created from innocent “before school” selfies; others were altered with racist, violent, or degrading content.
Ashley St. Clair, mother of one of Musk's children, told the Guardian she felt “horrified and violated” after fans generated deepfakes of her as a child. As of January 15, 29% of sampled child exploitation images remained publicly accessible.
Q: How did this happen so fast?
A: Grok's integration into X (one of the world's largest social platforms) with single-click editing made abuse frictionless. Users no longer needed technical knowledge or darknet access. The feature launched on December 29, just before New Year's, a peak period of internet usage.
Q: Has Musk faced consequences?
A: X restricted the feature to paid users on January 9 and added further safeguards on January 14. UK regulator Ofcom, EU regulators, and California's attorney general are investigating. Musk faces potential liability under laws criminalizing non-consensual intimate imagery.
Q: Can the images be deleted?
A: X can remove posts, but the images remain accessible via direct URLs and caches. As of January 15, 29% of sampled child exploitation images were still publicly viewable despite the platform's removal efforts.
Q: Will this change how AI tools launch?
A: Likely. The incident exposed gaps in pre-launch safety testing and content moderation. As we reported in our analysis of AI-powered security threats, safety cannot be an afterthought in AI deployment.