Articles

OpenAI Faces Criticism After Revealing Methods for Assessing ChatGPT Users’ Mental Health Concerns

OpenAI has drawn mixed reactions after publishing new details about how it evaluates and responds to potential mental health concerns among ChatGPT users. In a blog post released on Monday, the company explained that it has built a structured “safety evaluation mechanism” to detect signs of distress, suicidal ideation, or unhealthy emotional reliance on the chatbot. As part of this system, OpenAI has developed extensive “taxonomies” — internal guides that define sensitive conversation types and outline how the model should respond. While the company says the framework was created in consultation with clinicians and mental health professionals, critics argue that the initiative raises ethical and privacy concerns.

According to OpenAI, the new safety system is designed to help ChatGPT identify users who might be in emotional crisis and steer them toward professional support rather than attempting to intervene directly. The company stated that its large language models (LLMs) are now trained to recognize emotional distress, de-escalate tense conversations, and offer crisis hotline information when needed. Additionally, OpenAI said that sensitive chats can be “re-routed” to specialized, safer versions of the model to minimize potential harm or miscommunication during vulnerable moments.

The backbone of this effort lies in the newly created taxonomies — detailed classification systems that guide the AI in distinguishing between different types of sensitive interactions. These taxonomies also define what constitutes undesired or risky behavior from the model, such as giving inappropriate advice in response to a mental health query. OpenAI emphasized that detection accuracy is still a major challenge, and that the system is tested rigorously before being rolled out. It also clarified that it does not monitor users’ conversations continuously but relies on structured testing environments to assess safety performance.
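The blog post stays at this high level, but the taxonomy idea can be pictured as a routing table: each category of sensitive conversation maps to a definition of risky model behavior and to the model variant that should handle it. The sketch below is purely illustrative and is not OpenAI's implementation; the category names, keyword triggers, and model identifiers are hypothetical, and a real system would rely on learned classifiers rather than keyword matching:

    # Illustrative toy sketch of taxonomy-based routing; all names are hypothetical.
    from __future__ import annotations
    from dataclasses import dataclass

    @dataclass
    class TaxonomyEntry:
        category: str          # e.g. "self_harm", "emotional_reliance"
        triggers: list[str]    # crude keyword stand-ins for a learned classifier
        target_model: str      # model variant that should handle the conversation

    TAXONOMY = [
        TaxonomyEntry("self_harm", ["hurt myself", "end it all"], "safety-tuned-model"),
        TaxonomyEntry("emotional_reliance", ["you're my only friend"], "safety-tuned-model"),
    ]

    DEFAULT_MODEL = "general-model"

    def route_message(message: str) -> tuple[str, str | None]:
        """Return (model_to_use, matched_category) for an incoming user message."""
        lowered = message.lower()
        for entry in TAXONOMY:
            if any(trigger in lowered for trigger in entry.triggers):
                return entry.target_model, entry.category
        return DEFAULT_MODEL, None

    if __name__ == "__main__":
        model, category = route_message("Lately I feel like you're my only friend.")
        print(model, category)  # safety-tuned-model emotional_reliance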

However, the update has sparked backlash among some users and privacy advocates, who see the move as intrusive and potentially paternalistic. Critics worry that labeling and rerouting conversations based on perceived emotional content could lead to overreach, false positives, or a chilling effect on users who seek open, judgment-free discussions. Others argue that while the goal of improving safety is commendable, mental health support should remain firmly in the hands of trained professionals — not automated systems. As OpenAI continues refining its approach, the debate underscores a growing tension between AI safety innovation and user autonomy in emotionally sensitive spaces.

OpenAI to Offer UK Data Residency Through Government Partnership

OpenAI is introducing a new UK data residency option, allowing businesses and government bodies to store their data locally. The initiative, officially announced by Deputy Prime Minister David Lammy, stems from a partnership between OpenAI and the UK Ministry of Justice (MoJ). It aims to enhance privacy, cybersecurity, and national resilience while unlocking greater potential for AI innovation across the public sector.

Lammy highlighted how AI is already transforming operations within the MoJ. Over 1,000 probation officers will use “Justice Transcribe,” an AI-powered tool that records and transcribes conversations, cutting administrative time and improving efficiency. “By adopting AI, we’re freeing up staff to focus on what truly matters—protecting the public,” Lammy said.

OpenAI CEO Sam Altman noted a fourfold increase in UK users over the past year and expressed excitement about how local businesses are leveraging AI for productivity gains. The UK data residency option will be available for customers using OpenAI’s API Platform, ChatGPT Enterprise, and ChatGPT Edu. The move comes as OpenAI continues to expand its product ecosystem, recently launching ChatGPT Atlas, an AI-driven browser designed to transform online search.

Meta Strikes $27 Billion Financing Deal With Blue Owl for Massive Louisiana AI Data Center

Meta (META.O) has finalized a $27 billion financing partnership with Blue Owl Capital (OWL.N) to fund its largest data center project to date — a massive AI computing hub in Louisiana designed to supercharge the company’s artificial intelligence ambitions.

The agreement, Meta’s biggest-ever private capital deal, gives Blue Owl-managed funds a majority ownership stake in the joint venture, while Meta retains 20% equity. Blue Owl contributed about $7 billion in cash, and Meta will receive a $3 billion one-time payout, according to Tuesday’s announcement.

The planned Hyperion Data Center in Richland Parish, Louisiana, will deliver over 2 gigawatts of computing capacity, a figure that underscores the escalating global demand for infrastructure to train the large language models behind systems such as ChatGPT and Google Gemini. Blue Owl co-CEOs Doug Ostrover and Marc Lipschultz called the project “an ambitious step toward powering the next generation of AI infrastructure.”

The move comes amid a historic wave of investment in AI-related data centers. According to Morgan Stanley, leading tech giants — including Alphabet, Amazon, Meta, Microsoft, and CoreWeave — are collectively set to spend $400 billion this year building AI infrastructure.

Meta CFO Susan Li described the partnership as “a bold step forward,” noting that the project will create more than 500 jobs and help the company diversify its financing strategy while reducing exposure to debt.

Industry analysts say the deal enables Meta to offload capital risk while maintaining operational control of a strategic AI asset. “It allows Meta to finance expansion without taking on heavy debt — a smart hedge if the AI market overheats,” said Alvin Nguyen, senior analyst at Forrester.

The Hyperion facility is expected to go online within four years, with Meta holding options to extend its lease. Once operational, it will stand among the largest data centers in the world, symbolizing the scale of investment driving the AI revolution.