Meta will introduce a new system on Instagram to notify parents when teenagers repeatedly search for suicide or self-harm content. The alerts will trigger when teens enter related terms multiple times in a short period. The feature is integrated into Meta's Teen Account supervision tools, and the company says the move strengthens online safety for young users.
Previously, Instagram blocked dangerous search terms and directed users to external support services. Meta is now adding direct parental notifications to provide additional oversight. Families enrolled in Teen Accounts in the UK, US, Australia, and Canada will start receiving alerts next week, and the company plans to expand the feature worldwide over the coming months.
Charity Raises Concerns About Risks
The Molly Rose Foundation has strongly criticized the alert system. Chief executive Andy Burrows says the approach could have unintended consequences, arguing that automatic notifications may cause panic rather than offer constructive help.
The family of Molly Russell created the foundation after she died by suicide in 2017 at age 14. She had viewed self-harm and suicide material on several online platforms, including Instagram. Burrows says parents naturally want to know if their child is struggling. However, he believes abrupt alerts could leave families shocked and unsure how to respond.
Meta says it will attach expert-backed resources to every alert, aiming to guide parents through sensitive conversations. Ian Russell, who chairs the foundation, doubts the effectiveness of these resources. A parent receiving a notification at work, he says, could react with panic, and written guidance alone may not ease that immediate shock.
Experts Call for System-Wide Prevention
Several charities say the announcement highlights deeper platform flaws. Ged Flynn, chief executive of Papyrus Prevention of Young Suicide, welcomes the alert system but says more must be done, noting that young people still encounter harmful content online.
Flynn adds that parents contact his organization daily, worried about their children's online activity. He says families want platforms to block dangerous content before teens encounter it, rather than alerting parents after the fact.
Leanda Barrington-Leach, executive director of the 5Rights Foundation, urges Meta to redesign its systems so that child safety is the default. Burrows also cites research from his foundation, claiming Instagram still recommends harmful material about depression, suicide, and self-harm to vulnerable teens.
He insists platforms must tackle systemic risks instead of passing responsibility to parents. Meta disputes the foundation’s September report, saying it misrepresents the company’s safety and parental support efforts.
Global Pressure Grows on Social Media
Instagram designed the Teen Account alerts to detect sudden changes in search behavior. Meta says the feature builds on existing safety protections. The platform already hides self-harm and suicide material and blocks related searches.
Parents will receive alerts by email, text message, WhatsApp, or directly in the app, depending on the contact information families provide. The company acknowledges the system may occasionally generate unnecessary alerts, saying it prefers to err on the side of caution.
Sameer Hinduja, co-director of the Cyberbullying Research Center, says such alerts will naturally alarm parents. He stresses that guidance must immediately follow each alert: companies, he says, must not leave families alone with their fear. Hinduja believes Meta understands this responsibility.
Instagram plans to extend similar alerts to conversations with its AI chatbot. The company notes that many teens increasingly seek support through artificial intelligence. Governments worldwide continue pressuring social media companies to improve child safety.
Australia has banned social media for children under 16, and Spain, France, and the UK are considering similar restrictions. Regulators are closely monitoring how tech firms engage younger users. Meta chief executive Mark Zuckerberg and Instagram head Adam Mosseri recently appeared in a US court to defend the company against claims that it targeted underage users.
