Instagram has announced that in the coming weeks, it will begin sending alerts to parents if teenagers repeatedly search for content related to suicide or self-harm within a short period.
The new feature will be available to parents enrolled in the app’s Parental Supervision system. According to the company, the goal is to enable early intervention and allow parents to provide support to their children.
When Is the Alert Triggered?
The Meta-owned platform notes that it already blocks direct searches for content that encourages suicide or self-harm, according to a report by TechCrunch. The new system, however, will be triggered when it detects a pattern of repeated searches, including:
Keywords encouraging suicide or self-harm.
Phrases indicating a teenager may be at risk.
General terms such as "suicide" or "self-harm."
Parents will be notified via email, SMS, or WhatsApp, in addition to an in-app alert; the notifications will include educational resources to help them start a supportive conversation with their children.
A Move Amid Legal Pressure
This announcement comes as Meta and other tech giants face lawsuits accusing them of failing to protect teenagers from the psychological harms associated with social media use.
During hearings this week at the U.S. District Court for the Northern District of California, Instagram head Adam Mosseri was questioned about delays in launching essential safety features, including a nudity filter for minors' private messages. In addition, internal Meta studies disclosed in a separate case suggested that parental controls had not significantly reduced compulsive app use among children facing high life stress.
Balancing Protection and Privacy
Instagram emphasized its effort to avoid "notification fatigue," saying that too many alerts could diminish their effectiveness. The system therefore requires a specific threshold of repeated searches within a short timeframe, a standard developed in consultation with suicide-prevention experts.
The feature will roll out next week to users in the United States, United Kingdom, Australia, and Canada, with plans for global expansion later this year. Looking ahead, the platform intends to extend these alerts to cover interactions between teenagers and in-app AI tools regarding self-harm topics.