A US senator has launched an investigation into Meta after a leaked internal document reportedly revealed that the company’s artificial intelligence chatbots were permitted to hold “sensual” and “romantic” conversations with children.
Leaked document sparks outrage
Reuters reported that the document was titled “GenAI: Content Risk Standards.” Republican Senator Josh Hawley condemned its contents as “reprehensible and outrageous” and demanded to see the entire document and a list of the products affected.
Meta denied the accusations. A spokesperson said: “The examples and notes in question were erroneous and inconsistent with our policies.” The company stressed that it had “clear rules” for chatbot responses, which “prohibit content that sexualizes children and sexualized role play between adults and minors.”
Meta added that the document contained “hundreds of notes and examples” reflecting hypothetical scenarios tested by its teams.
Senator pressures Big Tech
Hawley, who represents Missouri, confirmed the probe on 15 August in a post on X. “Is there anything Big Tech won’t do for a quick buck?” he asked. He continued: “Now we learn Meta’s chatbots were programmed to carry on explicit and ‘sensual’ talk with 8-year-olds. It’s sick. I am launching a full investigation to get answers. Big Tech: leave our kids alone.”
Meta owns Facebook, WhatsApp and Instagram.
Parents demand protection
The leaked document raised wider concerns. It reportedly showed that Meta’s chatbots could provide false medical information and engage in sensitive discussions about sex, race and celebrities. The document was intended to define standards for Meta AI and other chatbot assistants across Meta’s platforms.
“Parents deserve the truth, and kids deserve protection,” Hawley wrote in a letter to Meta and chief executive Mark Zuckerberg. He pointed to one disturbing example: the rules allegedly allowed a chatbot to tell an eight-year-old that their body was “a work of art” and “a masterpiece – a treasure I cherish deeply.”
Reuters also reported that Meta’s legal department had approved controversial measures, including one decision that allowed Meta AI to generate false information about celebrities, provided a disclaimer acknowledged the inaccuracy.
