Articles

Whistleblowers Accuse Meta of Prioritizing VR Profits Over Child Safety

Two former Meta researchers told the U.S. Senate Judiciary Subcommittee on Privacy, Technology, and the Law that Meta Platforms knowingly ignored harms to children on its virtual-reality platform to protect profits.

Key Testimonies

  • Cayce Savage (Former User Experience Researcher):

    • Said Meta shut down internal research proving that children were exposed to sexually explicit content in VR.

    • Claimed researchers were instructed not to investigate child safety harms so the company could claim ignorance.

    • Reported instances of bullying, sexual assault, and requests for nude photos involving children in VR.

  • Jason Sattizahn (Former Reality Labs Researcher):

    • Testified he was not surprised Meta’s AI chatbots were permitted to engage children in romantic or sensual conversations, as revealed by a Reuters investigation.

Congressional Concerns

  • Sen. Marsha Blackburn (R-TN): Highlighted chatbot risks and renewed calls for the Kids Online Safety Act, which passed the Senate but stalled in the House.

  • Lawmakers warned that Meta’s failures add urgency for federal safeguards on children’s digital experiences.

Meta’s Response

  • Meta spokesperson Andy Stone rejected the accusations, claiming the whistleblowers “selectively leaked internal documents” to create a misleading narrative.

  • Stone said there was never a blanket ban on child-related research, and that problematic chatbot behaviors had been removed.

Broader Context

  • Meta already faces bipartisan scrutiny for youth safety across Instagram, Facebook, and AI tools.

  • The testimony underscores growing pressure on Congress to regulate Big Tech’s handling of child protection in immersive and AI-driven platforms.

Meta’s TBD Lab: Small, Talent-Dense Team Driving Next-Gen AI Models

Meta’s TBD Lab, a research group within its Superintelligence Labs, consists of only “a few dozen” researchers and engineers, CFO Susan Li told investors at the Goldman Sachs Communacopia + Technology conference on Tuesday.

Key Details

  • Team size: “A few dozen” researchers and engineers, highly talent-dense.

  • Focus: Developing next-generation foundation models at the AI frontier over the next 1–2 years.

  • Name origin: “TBD” began as a placeholder (“to be determined”) but stuck, reflecting the exploratory nature of the group.

Meta’s AI Reorganization

  • Earlier this year, Meta split its AI efforts under Superintelligence Labs into four groups:

    1. TBD Lab – new, frontier-focused models.

    2. Products team – including the Meta AI assistant.

    3. Infrastructure team – scaling compute and systems.

    4. FAIR (Fundamental AI Research) – long-term research.

  • This restructuring followed senior staff exits and lukewarm reception for Meta’s Llama 4 model.

Leadership & Talent Push

  • CEO Mark Zuckerberg has been personally driving talent acquisition, reportedly reaching out to startup founders and top researchers directly — even via WhatsApp — with million-dollar offers.

  • The company’s AI ambitions are positioned as a long-term bet, combining frontier R&D, consumer AI products, and infrastructure scaling.

Strategic Significance

  • The compact size of TBD Lab emphasizes high-leverage innovation rather than large-scale manpower.

  • Its work will likely feed into both open-source and proprietary models, shaping Meta’s response to OpenAI, Google DeepMind, and Anthropic in the race for AI dominance.

  • If successful, TBD Lab could be key in restoring Meta’s competitive credibility in foundation models.

Apple Hit With Lawsuit Over Use of Books in AI Training

Apple was sued Friday in federal court in Northern California by authors who accuse the company of illegally using copyrighted books to train its “OpenELM” large language models. The proposed class action, filed by writers Grady Hendrix and Jennifer Roberson, claims Apple copied protected works without consent, credit, or compensation.

“Apple has not attempted to pay these authors for their contributions to this potentially lucrative venture,” the lawsuit alleges. Neither Apple nor the plaintiffs’ lawyers immediately commented.

The case adds Apple to the growing list of tech giants—Microsoft, Meta, and OpenAI among them—facing litigation over whether training AI on copyrighted material constitutes infringement or fair use. On the same day, Anthropic agreed to a $1.5 billion settlement with authors who accused it of training its Claude chatbot on pirated books, a deal hailed as the largest copyright recovery in history.

According to the lawsuit, Apple’s models were trained on a dataset known to contain pirated books, allegedly including works by Hendrix and Roberson. The case seeks damages and a legal ruling that Apple must compensate authors when their intellectual property is used to build AI systems.

The dispute underscores the escalating clash between AI developers and creators, as courts weigh how copyright law applies to massive datasets powering generative AI. With multiple cases now moving forward in U.S. courts, the outcome could reshape both the AI industry and protections for authors in the digital era.