Instagram Enhances Teen Protection with New Restrictions on Direct Messages

Instagram has recently introduced stronger safeguards aimed at protecting teenagers, with a particular focus on limiting unwanted direct messages and interactions. According to Meta, Instagram’s parent company, the update is designed to make the platform “safer by default” for teens. While similar features have existed before, this update brings several notable changes to how messaging works on the platform.
Stricter Privacy by Default
Under the new rules, if you're under 18 and don’t follow someone, that person won’t be able to send you a message—regardless of whether the account is verified, belongs to a brand, or is run by a public figure. This policy applies universally, even if the sender is an adult.
Previously, users could receive a single text-only message from someone they didn’t follow, as long as it contained no images or videos. That loophole has now been closed: no messages can be sent unless both parties follow each other.
Messaging Restrictions for Adults, Too
The new rules don’t just apply to teens—they affect adults as well. If an adult attempts to message a teenager, Instagram now checks whether the adult is among the teen’s approved contacts. If not, the message won't be delivered. Additionally, if an account exhibits what Meta calls “potentially suspicious behavior,” it will be completely restricted from messaging any teens—especially if it has previously been reported.
Strengthening Privacy Settings
Instagram has long set private accounts as the default for new teen users, but the company is now pushing even further. Teenagers will receive regular prompts to review their privacy settings—specifically, who can tag or mention them, and who can view their stories. Meta also announced it would send alerts if a teen’s settings are considered too risky or expose them to potential harm.
These changes come as Meta and other major tech companies face mounting pressure from lawmakers in the U.S., U.K., and Europe to make online platforms safer for children and adolescents. Issues such as teen mental health, unsolicited messages, and exposure to inappropriate content have been recurring concerns driving regulatory and platform-level action.