App store algorithms under scrutiny for promoting AI ‘nudify’ apps

A new investigation has raised concerns that Apple's and Google's app marketplaces may be surfacing and promoting apps capable of generating AI-based sexualized imagery, despite both platforms' policies prohibiting such content.

According to findings from the Tech Transparency Project, search functions and advertising placements within both companies’ app stores have directed users toward so-called “nudify” apps. These tools use artificial intelligence to alter images—often of real individuals—to create nude or sexually explicit content, raising concerns around consent, safety, and moderation standards.

The report indicates that queries using terms such as “nudify,” “undress,” and “deepnude” returned a significant number of apps with such capabilities. In testing, roughly 40% of the top search results across both platforms were apps found to generate or simulate nudity. Some of these apps were also rated as suitable for younger audiences, intensifying scrutiny amid growing concerns about the impact of deepfake technology on minors.

In addition to organic search results, paid placements appear to play a role in visibility. Sponsored listings for apps with face-swapping or image-editing features were found at the top of search results in several cases. These ads, controlled and distributed through the app stores’ internal advertising systems, sometimes linked to apps that could generate explicit outputs without triggering content restrictions during testing.

Autocomplete suggestions further amplified discoverability. Partial search inputs led to recommended queries that surfaced additional apps with similar capabilities. Investigators noted that this mechanism effectively guided users toward more explicit tools, even when initial queries were incomplete.

The report also highlights the scale of the ecosystem. Apps identified in the investigation have collectively recorded hundreds of millions of downloads and generated substantial revenue through subscriptions and in-app purchases. This underscores the commercial dimension of the category, as well as the financial incentives tied to distribution and promotion within app marketplaces.

Developers of some of the apps named in the report acknowledged the issues when contacted, with at least one stating that moderation settings had since been tightened. Others did not respond to requests for comment. In certain instances, apps appeared to adjust their outputs following earlier scrutiny, shifting from explicit nudity to suggestive imagery such as bikini renderings.

Platform responses have been limited. Apple removed a number of apps identified in the report after being notified, while Google said it had taken enforcement actions and continues to review policy violations. However, neither company provided detailed explanations regarding how such apps passed initial review processes or why their discovery systems surfaced them.

The findings add to ongoing debates around platform accountability in the age of generative AI. With the proliferation of tools capable of producing realistic synthetic media, regulatory attention has increasingly focused on how distribution channels manage harmful or non-consensual content.

Written by Maya Robertson
