Articles

Adobe Launches Firefly AI Image Generation App for Smartphones, Expands Partner Integrations

Adobe Inc. unveiled its first dedicated AI-powered smartphone app, Firefly, on Tuesday, combining Adobe’s own AI model with those from partners including OpenAI and Google. The app is available for both iOS and Android devices and aims to capitalize on the rising popularity of AI-generated images shared on social media.

Beyond Adobe’s internal AI models, Firefly integrates additional capabilities from new collaborators such as Ideogram, Luma AI, Pika, and Runway, accessible through Firefly Boards within the Firefly web app ecosystem.

The mobile app offers unlimited basic image generation from Adobe’s models for subscribers, with premium access to partner models available for an extra fee. Subscription pricing matches Adobe’s existing Firefly web plans, starting at $10 per month.

This move follows Adobe’s earlier rollout of AI features integrated into the mobile version of its flagship image-editing tool, Photoshop.

Adobe emphasized its commitment to ethical AI training practices, assuring users that Firefly’s AI is trained exclusively on legally licensed material, thus providing protection against copyright infringement claims.

Ely Greenfield, Adobe’s CTO for digital media, highlighted that this responsible approach has resonated well with consumers and remains a key differentiator in the competitive AI market.

Google and Character.AI Must Face Lawsuit Over Teen Suicide, U.S. Judge Rules

Google and AI startup Character.AI must face a lawsuit brought by a Florida mother who alleges that a chatbot interaction led to her 14-year-old son’s suicide, a U.S. federal judge ruled on Wednesday.

U.S. District Judge Anne Conway rejected the companies’ efforts to dismiss the case, stating they had failed to prove at this early stage that free speech protections shield them from liability. The decision allows one of the first U.S. lawsuits targeting an AI company for alleged psychological harm to move forward.

“This historic decision sets a new precedent for legal accountability across the AI and tech ecosystem,” said Meetali Jain, attorney for plaintiff Megan Garcia.

Background: The Case

  • Garcia’s son, Sewell Setzer, died by suicide in February 2024.

  • The lawsuit alleges that he had become deeply obsessed with an AI chatbot created by Character.AI, which represented itself as a real person, a licensed therapist, and an adult romantic partner.

  • The complaint cites one chilling interaction where Setzer told a chatbot imitating “Daenerys Targaryen” from Game of Thrones that he would “come home right now,” shortly before taking his own life.

Legal and Corporate Response

  • Character.AI argued its chatbots were protected by the First Amendment, and that it had built-in safety features to block conversations around self-harm.

  • Google, which was also named in the suit, argued it should not be held liable, saying it “did not create, design, or manage” the Character.AI app. A spokesperson emphasized that Google and Character.AI are entirely separate entities.

  • However, the court noted that Google had licensed Character.AI’s technology and re-hired the startup’s founders, facts the plaintiffs cite to argue that Google was effectively a co-creator of the technology.

Judge Conway dismissed the free speech argument, saying the companies failed to explain “why words strung together by an LLM (large language model) are speech” under constitutional protections. She also denied Google’s request to be cleared of aiding in any alleged misconduct by Character.AI.

What This Means

This ruling opens the door for a landmark case examining:

  • The legal accountability of AI firms for harm caused by chatbot interactions

  • The limits of free speech when applied to AI-generated content

  • Tech platform liability for emerging technologies not fully governed by existing law

With rapidly expanding deployment of LLM-powered chatbots, particularly among youth, this lawsuit is likely to set important legal precedents for AI safety, responsibility, and regulatory oversight in the U.S. and beyond.

Critics Say OpenAI’s Revised Restructuring Still Prioritizes Profit Over Public Good

A group of former OpenAI employees and AI experts, including renowned computer scientist Geoffrey Hinton, has submitted a new letter to the California and Delaware attorneys general, warning that OpenAI’s latest organizational restructuring still fails to uphold its founding mission of developing artificial intelligence for the benefit of humanity.

The group, calling itself Not For Private Gain, first criticized OpenAI earlier this year when the company proposed a plan to reduce control by its nonprofit parent entity. Facing backlash, OpenAI scaled back the changes and announced a revised plan in May: to convert its for-profit unit into a Public Benefit Corporation (PBC), with the nonprofit retaining major shareholder status.

Despite this shift, the group argues in its May 12 letter that the revised structure still allows for investor interests to outweigh ethical safeguards:

  • Under the current setup, OpenAI’s nonprofit has full operational control over the for-profit entity, including executive hiring and firing. The group says this control would be weakened under the new PBC, undermining accountability.

  • They also note that while the current for-profit entity is legally bound to prioritize OpenAI’s mission and charter over profits, a PBC has no such explicit legal obligation.

OpenAI responded, stating: “The nonprofit would continue to have control over the PBC, full stop. Any suggestion otherwise is not accurate.”

Broader Concerns and Musk’s Involvement

The restructuring debate has sparked wider criticism, including from Elon Musk, OpenAI’s co-founder and now rival via his company xAI. Musk is currently suing OpenAI for allegedly breaching its founding agreement by prioritizing commercial goals over public benefit. A lawyer representing Musk backed the letter, dismissing OpenAI’s revised structure as “nothing but window dressing.”

OpenAI’s main corporate backer, Microsoft, has invested more than $13 billion, and the restructuring is viewed as a means to attract additional capital needed to remain competitive in the rapidly evolving and costly AI sector.

While a Public Benefit Corporation is designed to balance profits and public interest, critics remain skeptical that the new model will provide the necessary governance and enforcement mechanisms to prevent the misuse of powerful AI technologies.