Articles

OpenAI Forms For-Profit Arm, Updates Microsoft Partnership With New AGI Agreement

OpenAI has taken a major step in its long-running restructuring process by completing the formation of its new for-profit entity — OpenAI Group Public Benefit Corporation (PBC). The entity will operate under the control of the non-profit OpenAI Foundation, marking a significant evolution in the company’s governance and capital structure. The move aims to streamline OpenAI’s ability to raise funds while maintaining its public-benefit mission amid growing demands for advanced artificial intelligence (AI) development.

OpenAI Group PBC Officially Established

In a detailed announcement, OpenAI confirmed that its recapitalisation process has concluded, allowing for a clearer separation between its non-profit oversight and for-profit operations. The OpenAI Foundation now directly holds equity in the newly established PBC, giving it access to a portion of the company’s capital while ensuring that the foundation maintains strategic control. This structure enables OpenAI to pursue high-impact projects without being bound by traditional non-profit fundraising limitations.

New Agreement With Microsoft Includes AGI Clause

Alongside this structural shift, OpenAI has signed a new agreement with Microsoft, its largest investor and cloud partner. The deal explicitly outlines conditions for their collaboration in the event OpenAI achieves artificial general intelligence (AGI) — a milestone that could fundamentally change the AI landscape. The revised terms aim to ensure continued cooperation between the two companies while clarifying ownership, control, and ethical responsibilities tied to AGI development.

Foundation to Invest in Health and Cybersecurity

With its new structure in place, the OpenAI Foundation will direct approximately $25 billion (around Rs. 2.2 lakh crore) toward two primary focus areas — health and cybersecurity. The organisation intends to leverage AI to improve diagnostics and treatment capabilities, while also developing stronger security frameworks to protect global AI infrastructure. This dual focus underscores OpenAI’s stated intent to balance innovation with societal benefit as it works toward AGI.

OpenAI Faces Criticism After Revealing Methods for Assessing ChatGPT Users’ Mental Health Concerns

OpenAI has drawn mixed reactions after publishing new details about how it evaluates and responds to potential mental health concerns among ChatGPT users. In a blog post released on Monday, the company explained that it has built a structured “safety evaluation mechanism” to detect signs of distress, suicidal ideation, or unhealthy emotional reliance on the chatbot. As part of this system, OpenAI has developed extensive “taxonomies” — internal guides that define sensitive conversation types and outline how the model should respond. While the company says the framework was created in consultation with clinicians and mental health professionals, critics argue that the initiative raises ethical and privacy concerns.

According to OpenAI, the new safety system is designed to help ChatGPT identify users who might be in emotional crisis and steer them toward professional support rather than attempting to intervene directly. The company stated that its large language models (LLMs) are now trained to recognize emotional distress, de-escalate tense conversations, and offer crisis hotline information when needed. Additionally, OpenAI said that sensitive chats can be “re-routed” to specialized, safer versions of the model to minimize potential harm or miscommunication during vulnerable moments.
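The rerouting mechanism described above can be pictured as a classifier sitting in front of the model dispatcher. The sketch below is purely illustrative; the function and model names (`classify_distress`, `SAFE_MODEL`, `DEFAULT_MODEL`) and the keyword heuristic are assumptions for demonstration, not OpenAI's actual implementation, which relies on trained models rather than keyword matching.

```python
# Illustrative sketch: routing flagged conversations to a safer model variant.
# All names and the keyword heuristic are hypothetical stand-ins.

DEFAULT_MODEL = "general-model"
SAFE_MODEL = "safety-tuned-model"

# Toy signal list; a real system would use a trained classifier instead.
CRISIS_KEYWORDS = {"hopeless", "self-harm", "suicide"}

def classify_distress(message: str) -> bool:
    """Toy stand-in for a distress classifier."""
    text = message.lower()
    return any(keyword in text for keyword in CRISIS_KEYWORDS)

def route(message: str) -> str:
    """Pick which model variant should handle the message."""
    return SAFE_MODEL if classify_distress(message) else DEFAULT_MODEL

print(route("What's the weather today?"))  # general-model
print(route("I feel hopeless lately"))     # safety-tuned-model
```

The design point is that the routing decision is made before generation, so a vulnerable conversation never reaches the general-purpose model in the first place.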

The backbone of this effort lies in the newly created taxonomies — detailed classification systems that guide the AI in distinguishing between different types of sensitive interactions. These taxonomies also define what constitutes undesired or risky behavior from the model, such as giving inappropriate advice in response to a mental health query. OpenAI emphasized that detection accuracy is still a major challenge, and that the system is tested rigorously before being rolled out. It also clarified that it does not monitor users’ conversations continuously but relies on structured testing environments to assess safety performance.
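A taxonomy of this kind can be thought of as a structured map from sensitive conversation categories to response policies, checked against model behaviour during offline evaluation. The category names and policy entries below are illustrative assumptions, not OpenAI's internal guides.

```python
# Hypothetical sketch of a safety "taxonomy": categories of sensitive
# conversation mapped to desired and undesired model behaviours.
from dataclasses import dataclass, field

@dataclass
class Policy:
    desired: list[str] = field(default_factory=list)    # behaviours to encourage
    undesired: list[str] = field(default_factory=list)  # behaviours to flag in testing

TAXONOMY = {
    "suicidal_ideation": Policy(
        desired=["acknowledge feelings", "share crisis hotline information"],
        undesired=["give clinical advice", "dismiss the user"],
    ),
    "emotional_reliance": Policy(
        desired=["encourage offline support networks"],
        undesired=["reinforce exclusive reliance on the chatbot"],
    ),
}

def is_undesired(category: str, behaviour: str) -> bool:
    """Check a model behaviour against the taxonomy during offline evaluation."""
    policy = TAXONOMY.get(category)
    return policy is not None and behaviour in policy.undesired

print(is_undesired("suicidal_ideation", "give clinical advice"))  # True
```

This matches the structured-testing framing in the article: the taxonomy is applied to evaluation transcripts rather than to live monitoring of user conversations.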

However, the update has sparked backlash among some users and privacy advocates, who see the move as intrusive and potentially paternalistic. Critics worry that labeling and rerouting conversations based on perceived emotional content could lead to overreach, false positives, or a chilling effect on users who seek open, judgment-free discussions. Others argue that while the goal of improving safety is commendable, mental health support should remain firmly in the hands of trained professionals — not automated systems. As OpenAI continues refining its approach, the debate underscores a growing tension between AI safety innovation and user autonomy in emotionally sensitive spaces.

Google Unveils ‘Quantum Echoes’ Algorithm, Marking Leap Toward Practical Quantum Computing

Google has announced the creation of a groundbreaking quantum computing algorithm that could pave the way for real-world applications — from drug discovery to new materials research — and generate unique datasets for artificial intelligence.

The algorithm, dubbed Quantum Echoes, runs on Google’s quantum chip and performs calculations 13,000 times faster than the most advanced classical computing algorithms running on today’s supercomputers, the company said.

Executives from Alphabet’s (GOOGL.O) Google shared during a briefing that Quantum Echoes could one day help measure molecular structures with unprecedented precision, potentially revolutionizing chemistry, medicine, and materials science. “If I can’t tell you the data is correct, if I can’t prove to you the data is correct, how can I do anything with it?” said Google research scientist Tom O’Brien, highlighting that the algorithm’s outputs can be verified by other quantum computers or experiments — a key step toward real-world usability.

Quantum Echoes builds on Google’s Willow quantum chip, unveiled last year, which overcame one of the central challenges of quantum computing: maintaining stable and reliable “qubits,” the fragile quantum bits that store and process information. Company executives described the significance of the new algorithm as “roughly equivalent” to the chip itself.

Google joins a growing list of major tech firms — including Amazon (AMZN.O) and Microsoft (MSFT.O) — investing heavily in quantum computing as the technology races from theoretical promise toward commercial reality.

For artificial intelligence, Google engineers said Quantum Echoes could be used to create new, high-quality datasets for fields like life sciences, where usable data is scarce. The company detailed the breakthrough in the journal Nature on Wednesday, marking another milestone in the emerging era of quantum-enhanced computation.