Articles

Musk says he missed OpenAI for-profit details

Elon Musk testified in court that he did not read the “fine print” of a 2017 term sheet discussing OpenAI’s potential shift to a for-profit structure. The testimony came during ongoing litigation over the company’s restructuring.

Under cross-examination, Musk said he focused only on headline-level information and believed assurances from Sam Altman and others that OpenAI would remain fundamentally nonprofit. OpenAI’s legal team presented emails suggesting Musk had earlier exposure to internal discussions around commercialization.

Musk’s lawsuit seeks governance changes, a return to nonprofit principles, and $150 billion in damages, arguing that OpenAI abandoned its founding mission. OpenAI counters that restructuring was necessary to secure capital for computing power and talent.

The trial could significantly influence OpenAI’s governance, public perception and future IPO trajectory.

Sam Altman’s sister loses legal team in abuse case

The two law firms representing Annie Altman in her sexual abuse lawsuit against OpenAI CEO Sam Altman have filed to withdraw from the case, citing a breakdown in the attorney-client relationship.

Court filings say the firms consider continued representation impracticable due to confidential and professional concerns. Annie Altman is now seeking new legal counsel, pending court approval.

Sam Altman has denied allegations that he sexually abused his sister during their childhood and has filed a defamation countersuit, arguing the claims are false and financially motivated.

The case is separate from ongoing corporate litigation involving Elon Musk and OpenAI. As this remains an active legal matter with disputed allegations, no court ruling has established liability.

Grok faces lawsuit over images

Elon Musk’s artificial intelligence company xAI is facing a lawsuit in the United States alleging its Grok image generator enabled the creation of explicit content using real photos of individuals.

The complaint was filed in federal court by three plaintiffs, including two minors, who claim the system allowed explicit images based on their likenesses to be produced and circulated online.

The case seeks class-action status for individuals in the United States who may have been identifiable in AI-generated explicit imagery.

According to the filing, the plaintiffs argue the technology lacked sufficient safeguards to prevent misuse involving real people.

The lawsuit is requesting damages and court orders that would require the company to halt the alleged practices.

The case adds to a growing global debate over safeguards and accountability for generative artificial intelligence tools.