Instagram will begin alerting parents if their teen repeatedly attempts to search for suicide- or self-harm-related terms within a short period of time, the company announced Thursday. The feature will roll out in the coming weeks to families enrolled in the platform’s parental supervision tools.
The notifications are designed to flag patterns of concerning search activity rather than single queries. According to the company, alerts may be triggered by repeated attempts to search for phrases that promote suicide or self-harm, indicate a desire to engage in self-injury, or include direct terms such as “suicide” or “self-harm.”
Parents who have activated supervision will receive an in-app alert, along with a notification via email, text message or WhatsApp, depending on the contact details they have provided. The notification will inform them that their teen has made multiple related search attempts in a short timeframe and will include access to expert-backed resources intended to guide conversations about mental health.
Instagram said it will notify both parents and teens in advance that the system is being introduced. The company emphasized that it has set a threshold requiring several searches in quick succession before an alert is triggered, in an effort to avoid a flood of notifications that could lead parents to tune the alerts out.
The feature will initially launch in the United States, United Kingdom, Australia and Canada before expanding to additional regions later this year.
Instagram already blocks searches that clearly violate its policies on suicide and self-harm content. When users attempt such searches, the platform does not display results and instead directs them to crisis resources and helplines. Content depicting personal recovery experiences may remain on the platform but is hidden from teen users.
The company said it will extend similar parental alerts to certain artificial intelligence interactions in the future. Teens are increasingly turning to AI tools for support, and Meta plans to notify parents if a teen attempts to engage the platform’s AI in conversations about suicide or self-harm.
The rollout comes as Meta faces ongoing legal scrutiny over youth safety across its platforms. The company is currently involved in multiple lawsuits alleging that its products contribute to harm among teens.
In recent court proceedings in California, executives have been questioned about the pace at which safety features were introduced, including tools designed to filter explicit content in direct messages. Separate testimony in another case revealed internal research suggesting that parental controls may have limited impact on compulsive social media use, particularly among teens experiencing stressful life events.
Against that backdrop, the new alert system represents an expansion of parental oversight tools within Instagram’s teen account framework. The company said it will continue to evaluate the feature and adjust thresholds based on feedback and observed outcomes.