Whistleblowers Accuse Meta of Prioritizing VR Profits Over Child Safety

Two former Meta researchers told the U.S. Senate Judiciary Subcommittee on Privacy, Technology, and the Law that Meta Platforms knowingly ignored harms to children on its virtual-reality platform to protect profits.

Key Testimonies

  • Cayce Savage (Former User Experience Researcher):

    • Said Meta shut down internal research proving that children were exposed to sexually explicit content in VR.

    • Claimed researchers were instructed not to investigate child safety harms so the company could claim ignorance.

    • Reported instances of bullying, sexual assault, and requests for nude photos involving children in VR.

  • Jason Sattizahn (Former Reality Labs Researcher):

    • Testified he was not surprised Meta’s AI chatbots were permitted to engage children in romantic or sensual conversations, as revealed by a Reuters investigation.

Congressional Concerns

  • Sen. Marsha Blackburn (R-TN): Highlighted chatbot risks and renewed calls for the Kids Online Safety Act, which passed the Senate but stalled in the House.

  • Lawmakers warned that Meta’s failures add urgency for federal safeguards on children’s digital experiences.

Meta’s Response

  • Meta spokesperson Andy Stone rejected the accusations, claiming the whistleblowers “selectively leaked internal documents” to create a misleading narrative.

  • Said there was never a blanket ban on child-related research, and that problematic chatbot behaviors had been removed.

Broader Context

  • Meta already faces bipartisan scrutiny for youth safety across Instagram, Facebook, and AI tools.

  • The testimony underscores growing pressure on Congress to regulate Big Tech’s handling of child protection in immersive and AI-driven platforms.

U.S. Senators Demand Meta Probe Over AI Chatbot Policies

Two Republican U.S. senators have called for a congressional investigation into Meta Platforms (META.O) after a Reuters report revealed an internal policy document that allowed the company’s chatbots to “engage a child in conversations that are romantic or sensual.” Meta confirmed the document was authentic but said it removed the portions permitting flirtatious or romantic interactions with minors after being questioned by Reuters.

Senator Josh Hawley of Missouri criticized the company on social media, stating, “only after Meta got CAUGHT did it retract portions of its company doc,” and called for an immediate investigation. Senator Marsha Blackburn of Tennessee expressed support for a probe and highlighted the need for reforms such as the Kids Online Safety Act (KOSA), which passed the Senate last year but stalled in the House. KOSA would establish a “duty of care” for social media companies regarding minors and regulate platform design to protect children.

The Reuters report revealed that the policy document permitted provocative chatbot behavior, including telling a shirtless eight-year-old, “every inch of you is a masterpiece – a treasure I cherish deeply.” Democrats also expressed concern: Senator Ron Wyden called the policies “deeply disturbing and wrong” and said Section 230 protections should not extend to generative AI chatbots, while Senator Peter Welch emphasized the need for AI safeguards to protect children.

With no comprehensive federal AI regulations yet in place, several U.S. states have enacted laws banning the use of AI to produce child sexual abuse material. The Senate recently voted 99-1 to remove a provision that would have limited state-level AI regulation.