Articles

Whistleblowers Accuse Meta of Prioritizing VR Profits Over Child Safety

Two former Meta researchers told the U.S. Senate Judiciary Subcommittee on Privacy, Technology, and the Law that Meta Platforms knowingly ignored harms to children on its virtual-reality platform in order to protect profits.

Key Testimonies

  • Cayce Savage (Former User Experience Researcher):

    • Said Meta shut down internal research proving that children were exposed to sexually explicit content in VR.

    • Claimed researchers were instructed not to investigate child safety harms so the company could claim ignorance.

    • Reported instances of bullying, sexual assault, and requests for nude photos involving children in VR.

  • Jason Sattizahn (Former Reality Labs Researcher):

    • Testified he was not surprised Meta’s AI chatbots were permitted to engage children in romantic or sensual conversations, as revealed by a Reuters investigation.

Congressional Concerns

  • Sen. Marsha Blackburn (R-TN): Highlighted chatbot risks and renewed calls for the Kids Online Safety Act, which passed the Senate but stalled in the House.

  • Lawmakers warned that Meta’s failures add urgency for federal safeguards on children’s digital experiences.

Meta’s Response

  • Meta spokesperson Andy Stone rejected the accusations, claiming the whistleblowers “selectively leaked internal documents” to create a misleading narrative.

  • Stone said there was never a blanket ban on child-related research, and that the problematic chatbot behaviors had been removed.

Broader Context

  • Meta already faces bipartisan scrutiny for youth safety across Instagram, Facebook, and AI tools.

  • The testimony underscores growing pressure on Congress to regulate Big Tech’s handling of child protection in immersive and AI-driven platforms.

U.S. Senators Demand Meta Probe Over AI Chatbot Policies

Two Republican U.S. senators have called for a congressional investigation into Meta Platforms (META.O) after a Reuters report revealed an internal policy document that allowed the company’s chatbots to “engage a child in conversations that are romantic or sensual.” Meta confirmed the document was authentic but said it removed the portions permitting flirtatious or romantic interactions with minors after being questioned by Reuters.

Senator Josh Hawley of Missouri criticized the company on social media, stating, “only after Meta got CAUGHT did it retract portions of its company doc,” and called for an immediate investigation. Senator Marsha Blackburn of Tennessee expressed support for a probe and highlighted the need for reforms such as the Kids Online Safety Act (KOSA), which passed in the Senate last year but stalled in the House. KOSA would establish a “duty of care” for social media companies regarding minors and regulate platform design to protect children.

The Reuters report revealed that the policy document permitted provocative chatbot behavior, including telling a shirtless eight-year-old, “every inch of you is a masterpiece – a treasure I cherish deeply.” Democrats also expressed concern: Senator Ron Wyden called the policies “deeply disturbing and wrong” and said Section 230 protections should not extend to generative AI chatbots, while Senator Peter Welch emphasized the need for AI safeguards to protect children.

With no comprehensive federal AI regulations yet in place, several U.S. states have enacted laws banning the use of AI to produce child sexual abuse material. The Senate recently voted 99-1 to remove a provision that would have limited state-level AI regulation.

Australia’s eSafety Commissioner Criticizes YouTube, Apple for Failing to Address Child Abuse Material

Australia’s internet safety regulator, the eSafety Commissioner, released a report on Wednesday accusing major technology platforms, notably YouTube and Apple, of “turning a blind eye” to online child sexual abuse material (CSAM). The watchdog highlighted YouTube’s unresponsiveness to inquiries and its failure to track user reports and response times related to CSAM.

The report found that neither YouTube nor Apple could provide data on the number of user reports about child abuse content or the speed of their responses. Acting on the Commissioner’s advice, the Australian government recently reversed an earlier exemption and included YouTube in its groundbreaking ban on social media use by those under 16.

Julie Inman Grant, eSafety Commissioner, stated that these companies fail to prioritize child protection and are allowing serious crimes to occur unchecked on their platforms. She emphasized that no other consumer-facing industry would be permitted to operate while enabling such crimes.

In response, a Google spokesperson clarified that eSafety’s criticisms were based on reporting metrics rather than overall safety performance, noting that YouTube proactively removes over 99% of abuse content before it is flagged or viewed.

The report also assessed other platforms, including Meta (Facebook, Instagram, Threads), Apple, Discord, Microsoft, Skype, Snap, and WhatsApp, finding “safety deficiencies” such as failures to detect or block livestreaming of abuse content, inadequate reporting mechanisms, and inconsistent use of hash-matching technology to identify known abuse images.

Despite warnings in prior years, some companies have not sufficiently addressed these gaps. The report specifically noted that Apple and YouTube did not disclose how many trust and safety staff they employ or detailed information about user reports on child abuse content.