Whistleblowers Accuse Meta of Prioritizing VR Profits Over Child Safety
Two former Meta researchers told the U.S. Senate Judiciary Subcommittee on Privacy, Technology, and the Law that Meta Platforms knowingly ignored harms to children on its virtual-reality platform in order to protect profits.
Key Testimonies
- Cayce Savage (Former User Experience Researcher):
  - Said Meta shut down internal research showing that children were exposed to sexually explicit content in VR.
  - Claimed researchers were instructed not to investigate child-safety harms so the company could claim ignorance.
  - Reported instances of bullying, sexual assault, and requests for nude photos involving children in VR.
- Jason Sattizahn (Former Reality Labs Researcher):
  - Testified he was not surprised Meta's AI chatbots were permitted to engage children in romantic or sensual conversations, as revealed by a Reuters investigation.
Congressional Concerns
- Sen. Marsha Blackburn (R-TN): Highlighted chatbot risks and renewed calls for the Kids Online Safety Act, which passed the Senate but stalled in the House.
- Lawmakers warned that Meta's failures add urgency to the push for federal safeguards on children's digital experiences.
Meta’s Response
- Meta spokesperson Andy Stone rejected the accusations, claiming the whistleblowers "selectively leaked internal documents" to create a misleading narrative.
- Stone said there was never a blanket ban on child-related research and that the problematic chatbot behaviors had been removed.
Broader Context
- Meta already faces bipartisan scrutiny over youth safety across Instagram, Facebook, and its AI tools.
- The testimony underscores growing pressure on Congress to regulate Big Tech's handling of child protection on immersive and AI-driven platforms.