Elon Musk to Proceed with Lawsuit Against OpenAI Despite Nonprofit Control Assurance

Elon Musk will continue pursuing his lawsuit against OpenAI, his attorney Marc Toberoff confirmed on Monday, despite the company reaffirming that its nonprofit parent will retain control over its for-profit arm.

OpenAI, co-founded by Musk, had recently proposed a governance plan that maintains its nonprofit entity’s control over its for-profit operations and grants it a significant equity stake. However, Musk’s legal team claims the move is superficial and insufficient.

“Nothing in today’s announcement changes the fact that OpenAI will still be developing closed-source AI for the benefit of [CEO Sam] Altman, his investors, and Microsoft,” said Toberoff. He criticized the plan for lacking transparency, particularly regarding the nonprofit’s diluted stake in the for-profit venture.

Musk, who has grown increasingly critical of OpenAI, accuses the company of abandoning its original mission of open-source development for public benefit. His lawsuit aims to block what he describes as a corporate shift toward private enrichment, particularly in favor of Microsoft, a key investor and partner.

OpenAI dismissed Musk’s lawsuit as meritless, with a company spokesperson stating, “Elon continuing with his baseless lawsuit only proves that it was always a bad-faith attempt to slow us down.”

The case is expected to proceed to jury trial in March 2026. It has drawn wide attention across the tech industry, with companies such as Meta and AI researchers including Geoffrey Hinton, the “godfather of AI,” raising concerns about the implications of powerful AI being developed under private control without sufficient regulatory oversight.

Cyberattacks on M&S and Co-op Originated from Help Desk Deception, Says Report

Cybercriminals launched recent attacks on British retailers Marks & Spencer (M&S) and Co-op Group by impersonating employees to trick IT help desks into resetting passwords, according to a report by BleepingComputer. This social engineering tactic allowed hackers to gain initial access to internal systems.

The UK’s National Cyber Security Centre (NCSC) responded by urging all organisations to re-evaluate their help desk protocols, warning that online criminal activity like ransomware and data extortion is on the rise and that even large enterprises are vulnerable to such basic forms of manipulation.

While both M&S and Co-op declined to comment, the consequences of the M&S breach are already being felt. Shares dropped 4% on Tuesday and are down 12% since the cyber incident was disclosed on April 22. The company halted online orders for clothing and home products via its website and app on April 25, with no timeline for resumption. Some food product availability has also been disrupted.

Deutsche Bank analysts estimate the incident has cost M&S around £30 million ($40 million) so far, with an ongoing weekly impact of approximately £15 million. Though cyber insurance may offset part of the loss, it typically covers a limited time period. The broader risks include loss of consumer trust, data breach fines, and long-term reputational damage.

Ciaran Martin, former CEO of the NCSC, noted that the recovery time for such attacks is often lengthy due to the need to completely rebuild compromised IT networks.

Meanwhile, a group identifying itself as DragonForce claimed responsibility for attacking both M&S and Co-op, as well as stealing staff data and potentially customer data from the latter. The group also claims responsibility for attacking Harrods. The report further links the cyberattack on M&S to the “Scattered Spider” hacking collective, known for using DragonForce ransomware, although the NCSC said it could not confirm the connection.

India Forms Expert Panel to Review Copyright Law in Wake of AI Legal Battles

India has convened an eight-member expert panel to review the Copyright Act of 1957 and assess whether it adequately addresses artificial intelligence-related disputes, amid ongoing litigation against OpenAI by major Indian news publishers.

A confidential memo reviewed by Reuters outlines how the Ministry of Commerce has tasked intellectual property lawyers, government officials, and tech executives with examining legal and policy challenges related to the use of copyrighted content by AI models like ChatGPT.

The move comes in response to a pending high court case in New Delhi filed by prominent media entities such as NDTV, Indian Express, Hindustan Times, and members of the Digital News Publishers Association. The plaintiffs accuse OpenAI of using their content without authorization to train ChatGPT, which they argue constitutes copyright infringement.

OpenAI has denied any wrongdoing, asserting that it uses publicly available data and offers an opt-out mechanism for websites. The company maintains that its practices do not breach Indian copyright law.

The panel’s mandate includes reviewing the scope and interpretation of existing laws, evaluating how global copyright trends intersect with AI, and delivering recommendations for legal updates or clarifications to the government.

India joins a growing list of countries — including the U.S., EU members, and Japan — grappling with how to regulate AI training data in a way that balances innovation, creator rights, and fair use.