Articles

FDA to Review AI-Powered Mental Health Devices in November Advisory Panel

The U.S. Food and Drug Administration (FDA) announced it will convene its Digital Health Advisory Committee (DHAC) on November 6 to evaluate the growing category of AI-enabled digital mental health tools.

The meeting will explore how technologies such as chatbots, virtual therapists, and digital therapeutics could help bridge the nation’s mental health care gap, while also weighing questions of safety, efficacy, and oversight.

Why It Matters

The U.S. faces a shortage of mental health professionals, and AI-driven platforms promise scalability, accessibility, and rapid intervention. But the speed of innovation has left regulators searching for frameworks to ensure these devices are trustworthy and clinically sound.

FDA’s Approach

  • The DHAC will advise the agency on regulatory pathways for AI/ML tools, remote monitoring, digital therapeutics, and medical device software.

  • The panel discussion is expected to help the FDA identify key areas of concern such as data privacy, bias in algorithms, and standards for clinical validation.

  • The FDA has already begun experimenting with AI in its review processes, reflecting its broader shift toward digital oversight.

Next Steps

  • The FDA has opened a public docket for comments ahead of the session.

  • Supporting materials will be made available at least two business days before the meeting.

The November discussion could shape how future AI mental health devices are classified, monitored, and approved in the U.S., setting an early precedent for regulation in this rapidly expanding sector.

Meta and TikTok Win EU Court Challenge on Tech Fees; Regulators Must Recalculate

Meta Platforms and TikTok secured a legal victory on Wednesday against the European Commission over the way EU regulators calculated supervisory fees under the Digital Services Act (DSA). The General Court in Luxembourg ruled that the methodology used to determine the fees was flawed and must be reworked.

Both companies had challenged the 0.05% levy on annual worldwide net income, arguing the system unfairly imposed disproportionate costs. The fee is intended to fund the EU’s monitoring of large platforms’ compliance with the DSA, which requires them to better police harmful and illegal online content.

Court Ruling

The judges said the fee calculation method should have been set under a delegated act, rather than through implementing decisions, giving regulators 12 months to fix the legal framework. Importantly, the court said fees already paid for 2023 will not be reimbursed.

Reactions

  • The European Commission said the ruling requires only a “formal correction” and that it will adopt a delegated act to formalize the methodology.

  • TikTok welcomed the decision, pledging to monitor the new process.

  • Meta emphasized that the current system unfairly burdens profitable companies while large loss-making platforms avoid payment, despite imposing heavy regulatory costs.

Wider Context

The DSA, which came into effect in November 2022, gives the EU sweeping oversight powers and allows fines of up to 6% of global turnover for non-compliance. Other major platforms subject to supervisory fees include Amazon, Apple, Google, Microsoft, Booking.com, X (formerly Twitter), Snapchat, and Pinterest.

The cases were filed under references T-55/24 (Meta Platforms Ireland v Commission) and T-58/24 (TikTok Technology v Commission).

Senator Ted Cruz Proposes AI ‘Sandbox’ to Ease Federal Regulations

U.S. Senator Ted Cruz on Wednesday introduced a bill that would create a regulatory “AI sandbox” allowing artificial intelligence companies to apply for temporary exemptions from certain federal rules while developing new technologies.

Cruz, who chairs the Senate Commerce Committee, described the proposal as a way to help U.S. firms stay competitive with China by lowering regulatory barriers. “A regulatory sandbox is not a free pass. People creating or using AI still have to follow the same laws as everyone else,” Cruz said during a subcommittee hearing.

Key Details

  • The bill would let federal agencies grant two-year exemptions to companies that apply, provided they outline safety and financial risks and how they would mitigate them.

  • The Office of Science and Technology Policy (OSTP) would be given authority to override agency denials of waivers.

  • The sandbox would apply only at the federal level — Cruz’s proposal does not preempt state-level AI regulations, despite pressure from the tech industry.

Industry Push and Opposition

Major AI developers including OpenAI, Google, and Meta have urged the Trump administration to reduce regulatory barriers. The White House OSTP has also begun seeking public input on which regulations hinder AI growth.

Consumer advocacy group Public Citizen sharply criticized Cruz’s bill, arguing it “treats Americans as test subjects” and warning against OSTP’s ability to override regulators. “The sob stories of AI companies being ‘held back’ by regulation are simply not true,” said J.B. Branch, the group’s Big Tech accountability advocate, pointing to record-high valuations of AI firms.

State-Level Rules

While Cruz’s bill avoids limiting state laws, AI regulation is already expanding at the state level:

  • California bans unauthorized political deepfakes and requires patient disclosure when AI is used in healthcare.

  • Colorado passed a law to curb AI discrimination in hiring, housing, banking, and other areas — its enforcement was pushed to mid-2026 after lobbying by the tech sector.

  • Several states have criminalized AI-generated explicit imagery without consent.

OSTP director Michael Kratsios told the committee that such state measures risk stifling innovation, suggesting Congress revisit preemption in the future.

The proposal is likely to fuel debate between those who see regulation as a barrier to U.S. innovation and those who warn of the risks of treating AI experimentation as a public trial.