Articles

OpenAI Faces Criticism After Revealing Methods for Assessing ChatGPT Users’ Mental Health Concerns

OpenAI has drawn mixed reactions after publishing new details about how it evaluates and responds to potential mental health concerns among ChatGPT users. In a blog post released on Monday, the company explained that it has built a structured “safety evaluation mechanism” to detect signs of distress, suicidal ideation, or unhealthy emotional reliance on the chatbot. As part of this system, OpenAI has developed extensive “taxonomies” — internal guides that define sensitive conversation types and outline how the model should respond. While the company says the framework was created in consultation with clinicians and mental health professionals, critics argue that the initiative raises ethical and privacy concerns.

According to OpenAI, the new safety system is designed to help ChatGPT identify users who might be in emotional crisis and steer them toward professional support rather than attempting to intervene directly. The company stated that its large language models (LLMs) are now trained to recognize emotional distress, de-escalate tense conversations, and offer crisis hotline information when needed. Additionally, OpenAI said that sensitive chats can be “re-routed” to specialized, safer versions of the model to minimize potential harm or miscommunication during vulnerable moments.

The backbone of this effort lies in the newly created taxonomies — detailed classification systems that guide the AI in distinguishing between different types of sensitive interactions. These taxonomies also define what constitutes undesired or risky behavior from the model, such as giving inappropriate advice in response to a mental health query. OpenAI emphasized that detection accuracy remains a major challenge and that the system is tested rigorously before being rolled out. The company also clarified that it does not continuously monitor users' conversations but instead relies on structured testing environments to assess safety performance.

However, the update has sparked backlash among some users and privacy advocates, who see the move as intrusive and potentially paternalistic. Critics worry that labeling and rerouting conversations based on perceived emotional content could lead to overreach, false positives, or a chilling effect on users who seek open, judgment-free discussions. Others argue that while the goal of improving safety is commendable, mental health support should remain firmly in the hands of trained professionals — not automated systems. As OpenAI continues refining its approach, the debate underscores a growing tension between AI safety innovation and user autonomy in emotionally sensitive spaces.

OpenAI to Offer UK Data Residency Through Government Partnership

OpenAI is introducing a new UK data residency option, allowing businesses and government bodies to store their data locally. The initiative, officially announced by Deputy Prime Minister David Lammy, stems from a partnership between OpenAI and the UK Ministry of Justice (MoJ). It aims to enhance privacy, cybersecurity, and national resilience while unlocking greater potential for AI innovation across the public sector.

Lammy highlighted how AI is already transforming operations within the MoJ. Over 1,000 probation officers will use “Justice Transcribe,” an AI-powered tool that records and transcribes conversations, cutting administrative time and improving efficiency. “By adopting AI, we’re freeing up staff to focus on what truly matters—protecting the public,” Lammy said.

OpenAI CEO Sam Altman noted a fourfold increase in UK users over the past year and expressed excitement about how local businesses are leveraging AI for productivity gains. The UK data residency option will be available for customers using OpenAI’s API Platform, ChatGPT Enterprise, and ChatGPT Edu. The move comes as OpenAI continues to expand its product ecosystem, recently launching ChatGPT Atlas, an AI-driven browser designed to transform online search.

OpenAI, Oracle and Vantage to build $15B Stargate data center in Wisconsin

OpenAI, Oracle (ORCL.N), and Vantage Data Centers announced plans to develop a massive new data center campus in Port Washington, Wisconsin, as part of the multibillion-dollar Stargate initiative designed to keep the U.S. at the forefront of artificial intelligence infrastructure.

The Wisconsin site, named Lighthouse, is set for completion in 2028 and will create more than 4,000 skilled construction jobs, most of them union-based. Backed by Vantage’s $15 billion investment, the facility will be a core component of OpenAI and Oracle’s plan to deliver over 4.5 gigawatts of IT capacity nationwide.

Stargate—envisioned as a $500 billion, 10-gigawatt project—also includes Japan’s SoftBank Group (9984.T) and recently began work on its first AI data center in Abilene, Texas. The initiative aligns with President Donald Trump’s broader strategy to maintain U.S. dominance in advanced computing amid growing competition from China.

OpenAI and its primary backer Microsoft (MSFT.O) are among the major tech firms investing heavily in data centers to power generative AI systems such as ChatGPT and Copilot, both of which demand vast computing resources.

Once operational, the Lighthouse campus will anchor a growing network of Stargate sites being developed with Oracle, generating more than 1,000 long-term jobs and thousands of additional indirect roles in the region.

Vantage, supported by private equity firm Silver Lake and asset manager DigitalBridge (DBRG.N), will oversee the Port Washington buildout as part of its ongoing U.S. data center expansion. The companies said the project marks a crucial step toward meeting the exploding global demand for AI infrastructure.