Meta announced on Wednesday the launch of additional protective measures for Instagram accounts managed by adults that frequently feature children.
These accounts will automatically be placed under the app’s most restrictive messaging settings to prevent unwanted messages, and the platform’s “Hidden Words” feature will be activated to filter abusive comments.
The company is also introducing new safety features for teen accounts, according to a report published by TechCrunch.
Accounts subject to these new restrictions include those run by adults who regularly share photos and videos of their children, as well as accounts managed by parents or talent agents representing minors.
In a blog post, Meta wrote:
“While these accounts are often used in harmless ways, unfortunately, there are individuals who may attempt to exploit them—by leaving sexual comments on their posts or soliciting explicit images via direct messages, which is a clear violation of our policies.”
It added:
“Today, we’re announcing steps to help prevent this type of abuse.”
Meta stated that it will work to prevent suspicious adults—such as those already blocked by teens—from discovering or interacting with accounts that primarily feature children.
Suspicious users will not be recommended to such accounts, and vice versa, making it harder for them to find each other through Instagram’s search and discovery tools.
The announcement follows a year of efforts by Meta and Instagram to address mental health concerns related to social media.
The new changes will significantly impact family vloggers, creators, and parents managing “kid influencer” accounts—groups that have faced growing criticism over the risks of sharing children’s lives online.
An investigation published last year by The New York Times found that parents were often aware of, or even complicit in, the exploitation of their children, in some cases selling photos of them or the clothes they wore.
In a review of 5,000 parent-run accounts, the Times found 32 million interactions from male followers.
Meta says that accounts affected by these stricter safety settings will receive notifications at the top of their Instagram feed informing them of the updates, and will be prompted to review their privacy settings.
Meta also noted that it has removed nearly 135,000 Instagram accounts primarily used to exploit children, in addition to 500,000 Instagram and Facebook accounts linked to the original removed accounts.
Alongside today’s announcement, Meta is also introducing new built-in protections for teens’ direct messages that will apply automatically.
Teens will now see new prompts offering safety tips, reminding them to carefully review their profiles and be mindful of what they share.
In addition, the month and year an account joined Instagram will now appear at the top of new chat threads.
Instagram has also added a new feature that allows users to block and report an account simultaneously.
These new tools aim to provide teens with more context about the accounts they’re communicating with and help them detect potential scammers, according to Meta.
In its blog post, Meta wrote:
“These new features complement the safety notifications we already display, which remind users to stay alert in private messages and to block or report anything that makes them uncomfortable—we’re encouraged to see teens responding to them.”
Meta also updated its nudity protection filter, stating that 99% of users, including teens, have kept the feature enabled.
Last month, the company reported that over 40% of blurred images received via direct messages remained blurred.