Articles

DeepSeek Researcher Voices Pessimism About AI’s Future Impact Despite Company’s Global Success

In its first major public appearance since becoming a global AI sensation, Chinese developer DeepSeek struck a surprisingly cautious tone about the technology’s long-term impact on society.

At the World Internet Conference in Wuzhen, Chen Deli, a senior researcher at DeepSeek, warned that artificial intelligence could create major social disruptions within the next two decades. “In the next 10–20 years, AI could take over the rest of work humans perform and society could face a massive challenge,” Chen said. “I’m extremely positive about the technology, but I view the impact it could have on society negatively.”

Chen shared the stage with executives from five other Chinese AI companies—Unitree, BrainCo, and others—collectively referred to as the country’s “six little dragons” of AI innovation. While praising AI’s potential in the short term, Chen stressed that companies like DeepSeek must act as “defenders” of social stability as automation accelerates.

DeepSeek rose to global prominence in January after releasing a low-cost open-source AI model that outperformed several leading U.S. systems. The company’s meteoric rise has since made it a symbol of China’s technological resilience amid intensifying competition with the United States.

Despite its success, DeepSeek has kept a low public profile. Its only major appearance this year came when founder and CEO Liang Wenfeng met President Xi Jinping in February. The company has since skipped several major tech events, adding to its enigmatic reputation.

DeepSeek has continued developing its technology quietly, unveiling in September a new V3 model that it described as “experimental,” optimized for efficiency and longer text processing. Its work has also boosted China’s domestic chip ecosystem: hardware makers Cambricon and Huawei now build processors compatible with DeepSeek’s models.

In August, DeepSeek’s announcement of an upgraded model optimized for Chinese-made chips caused local semiconductor stocks to surge—underlining how the company remains both a technical pioneer and a national symbol of self-reliance in AI.

Salesforce Faces Lawsuit From Authors Over AI Model Training Data

Salesforce (CRM) is facing a proposed class action lawsuit accusing it of using copyrighted books without permission to train its xGen artificial intelligence models. The complaint, filed Wednesday in a U.S. court, was brought by authors Molly Tanzer and Jennifer Gilmore, who allege that the cloud-computing firm infringed their copyrights by using their works to develop language-processing AI.

The lawsuit claims Salesforce used “thousands of pirated books” written by the plaintiffs and other authors to train its AI systems, echoing similar suits filed against other tech giants like OpenAI, Microsoft, and Meta over the use of copyrighted material in AI training datasets.

“It’s important that companies that use copyrighted material for AI products are transparent,” said Joseph Saveri, the authors’ attorney, who has led several high-profile copyright cases against AI companies. “Our clients deserve fair compensation when their creative work is used.”

Salesforce has declined to comment on the lawsuit.

In an ironic twist, the complaint notes that Salesforce CEO Marc Benioff has previously criticized other AI firms for using “stolen” training data, arguing that compensating creators would be “very easy to do.” The lawsuit quotes that statement, suggesting Salesforce failed to follow its own advice.

The case adds to a growing list of legal battles testing how intellectual property laws apply in the age of AI model training, with potentially wide-ranging implications for the industry.

OpenAI to Give Content Owners Control Over Sora AI Videos, Plans Revenue Sharing Model

OpenAI is rolling out new tools to give content owners greater control over how their intellectual property is used in Sora, its recently launched AI video-generation app, and plans to introduce a revenue-sharing system for creators who opt in.

In a blog post on Friday, CEO Sam Altman said OpenAI will soon provide “more granular control over the generation of characters” within Sora, enabling rights holders such as film and television studios to decide how their characters can appear—or to block them entirely.

The move comes amid intensifying scrutiny of AI-generated content and growing concern across Hollywood and the creative industries about copyright infringement and the unauthorized replication of proprietary characters and likenesses.

Sora, launched this week as a standalone app in the United States and Canada, allows users to generate and share AI-created videos up to 10 seconds long. Its social-media-style interface quickly gained traction, with users producing clips based on both original and copyrighted material.

Altman acknowledged that the app’s rapid popularity—and the sheer volume of video creation—has outpaced expectations, creating a need for clear rules and compensation mechanisms. “We’ll experiment with different approaches,” he wrote, adding that the revenue-sharing model would evolve through “trial and error” as OpenAI tests various systems within Sora before applying them to its broader suite of AI tools.

At least one major studio, Disney, has already opted out of allowing its characters to appear in Sora-generated videos, sources familiar with the matter told Reuters. Other studios are reportedly reviewing whether to participate under OpenAI’s forthcoming licensing framework.

The company’s initiative could mark a turning point in the relationship between AI firms and content owners, shifting from conflict to collaboration—if a viable monetization model can be found.

OpenAI, backed by Microsoft, is expanding into multimodal AI via Sora, placing it in direct competition with Meta’s Vibes and Google’s text-to-video tools as major tech firms race to define the future of synthetic media creation.

Still, the effort to give rights holders control over how their creations are used—and to share revenue from those uses—reflects a broader recognition that AI’s creative power must coexist with creator compensation and consent.