Las Vegas Cybertruck Explosion Linked to ChatGPT, Authorities Say

The driver of the Tesla Cybertruck that exploded outside the Trump International Hotel in Las Vegas on New Year’s Day allegedly used the AI chatbot ChatGPT to plan the attack, according to law enforcement officials. Authorities revealed on Tuesday that the suspect used the platform to help determine how much explosive material was required to trigger the blast.

The driver, identified as Matthew Livelsberger, 37, an active-duty Army soldier from Colorado Springs, was found dead inside the vehicle. The FBI has stated that the death appears to be a suicide and that Livelsberger acted alone in the incident. No connection has been established between the Las Vegas explosion and another truck attack in New Orleans that killed more than a dozen people.

This incident marks the first known case in the U.S. where ChatGPT was used to plan and facilitate the creation of an explosive device, raising alarms about the potential misuse of AI technologies. Las Vegas Metropolitan Police Department Sheriff Kevin McMahill highlighted the significance of the case, noting, “Of particular note, we also have clear evidence in this case now that the suspect used ChatGPT artificial intelligence to help plan his attack.”

The explosion left seven individuals with minor injuries, and the use of ChatGPT in this context adds a new layer of concern regarding AI’s role in enabling harmful activities. OpenAI, the company behind ChatGPT, emphasized that the tool is designed to refuse harmful requests, and said the chatbot provided only publicly available information and included warnings against illegal actions in its responses.

The FBI’s investigation continues, with Livelsberger’s phone revealing a six-page manifesto that authorities are actively reviewing for additional clues about his motives and state of mind.

 

OpenAI Whistleblower Suchir Balaji Found Dead in San Francisco Apartment

Suchir Balaji, a former researcher at OpenAI, was found dead in his San Francisco apartment on November 26, according to a report by CNBC. The 26-year-old, who had spent four years at the AI company, had raised significant concerns earlier this year regarding OpenAI’s practices, particularly in relation to copyright law violations.

The San Francisco Medical Examiner’s Office ruled Balaji’s death a suicide, and the police investigation found no evidence of foul play. Officers were called to perform a “wellbeing check” at his residence on Buchanan Street, where they discovered his body. Balaji’s next of kin have been notified.

Balaji had publicly spoken out against OpenAI, particularly in an October interview with The New York Times, where he voiced concerns about the company’s use of copyrighted material. He stated, “If you believe what I believe, you have to just leave the company,” referring to his belief that AI models like ChatGPT were exploiting content created by others without fair compensation. He argued that because AI systems are trained on massive datasets of content scraped from the internet, they could threaten the financial viability of content creators such as journalists, artists, and writers.

OpenAI confirmed Balaji’s death, with a spokesperson expressing the company’s deep sorrow. “We are devastated to learn of this incredibly sad news today and our hearts go out to Suchir’s loved ones during this difficult time,” the spokesperson said in an email.

This tragic event comes amid growing concerns within the tech and creative industries about the impact of AI models that use vast amounts of data from publicly available sources without proper compensation. OpenAI is currently involved in multiple legal disputes related to the alleged misuse of copyrighted material, a matter that Balaji had highlighted in his warnings.

 

Museum’s Use of “Unalived” to Describe Kurt Cobain’s Death Sparks Controversy, Reflects Shift in Language

The recent controversy surrounding a museum’s use of the term “unalived” to describe Kurt Cobain’s death highlights evolving attitudes toward discussing sensitive topics. The placard, which appeared in an exhibit at the Museum of Pop Culture, referred to the Nirvana frontman’s death by suicide as him having “unalived himself at 27.” This term, popularized on TikTok as a euphemism for death, was used to bypass content moderation on the platform. Its appearance in a museum setting drew criticism from visitors who felt it disrespected Cobain’s legacy and avoided the direct discussion of suicide. Critics likened the term’s use to Newspeak from George Orwell’s “1984,” suggesting it sanitized the harsh reality of Cobain’s death.

Linguists and experts suggest that “unalived” reflects a broader trend of adapting language to approach difficult subjects with increased sensitivity. While originally a product of TikTok’s censorship workarounds, the term has gained traction in offline discussions, particularly among younger generations. The shift from digital slang to formal usage underscores a generational change in how suicide and mental health are addressed. Though the museum reportedly updated the placard to a more conventional term following the backlash, “unalived” remains a fixture in discussions around mental health, illustrating how new euphemisms can persist in the lexicon despite initial controversy.