A coalition of women’s rights organizations, child advocacy groups, and tech watchdogs is pressing Apple and Google to remove the Elon Musk-owned social platform X and its AI chatbot Grok from their app stores, citing concerns over the generation of sexually explicit and non-consensual images involving women and minors.
In open letters published this week, the groups accused X and its parent AI company xAI of enabling the creation and distribution of content that they argue violates app store policies and, in some jurisdictions, local laws. The campaign includes organizations such as UltraViolet, the National Organization for Women, MoveOn, and ParentsTogether Action, which say recent incidents involving Grok have intensified risks of online sexual abuse.
The calls come after Grok was linked to the creation of hyper-realistic, sexually suggestive images of women and children that circulated widely on X around the start of the year. While X has since adjusted the chatbot’s behavior to limit the public posting of AI-generated images, advocacy groups say the changes do not go far enough. Tests conducted this week showed Grok could still generate edited images of individuals in sexualized contexts when prompted privately.
“This is about whether app store operators are willing to enforce their own rules when harm is clearly documented,” said campaign representatives, arguing that continued availability of the apps undermines stated commitments to child safety.
Regulatory pressure has also increased outside the United States. Authorities in Malaysia and Indonesia have already banned Grok over explicit content concerns, while regulators in Europe and the UK have opened inquiries or requested explanations from X and xAI. Separately, some organizations are distancing themselves from the platform; the American Federation of Teachers announced it would leave X, citing concerns over AI-generated imagery involving children.
Amid the backlash, Musk said on Wednesday that he was not aware of any instances in which Grok generated “naked underage images.” In a post on X, he stated that Grok is designed to refuse illegal requests and must comply with local laws, adding that the system only produces images in response to user prompts rather than generating content autonomously.
Apple and Google have not publicly commented on the demands to remove X and Grok from their app stores. Advocacy groups argue that the companies’ response, or lack thereof, will serve as a test of how rigorously major app marketplaces enforce safety standards as generative AI tools become more widely distributed.