John Schulman Departs AI Startup Anthropic

John Schulman, a co-founder of OpenAI, has left his position at the AI startup Anthropic, the company confirmed late Wednesday. Schulman had joined Anthropic in August after departing OpenAI, saying he wanted to focus more intensively on AI alignment and return to hands-on technical work. His exit, only months after he arrived, is a notable loss for the company, one of OpenAI’s primary competitors in the AI foundation model market.

In a statement, Anthropic’s Chief Science Officer, Jared Kaplan, expressed support for Schulman’s decision, stating, “We are sad to see John go but fully support his decision to pursue new opportunities and wish him all the very best.”

Despite the departure, Anthropic remains a key player in the AI industry. The company has achieved annualized revenue of approximately $875 million and offers access to its models through direct sales and third-party cloud services, including Amazon Web Services. The news of Schulman’s exit was initially reported by The Information.

 

US Investigates Whether DeepSeek Used Restricted AI Chips

The U.S. Commerce Department is investigating whether DeepSeek, the Chinese AI company behind a disruptive new model, has been using U.S.-made AI chips that are barred from shipment to China, according to a source familiar with the situation. DeepSeek’s free assistant, launched last week, has been widely praised for its cost-effective performance, reportedly achieved with less data and computing power than comparable U.S. models. It quickly became the most downloaded app on Apple’s App Store, raising concerns about America’s competitive edge in AI and helping trigger a sell-off that wiped roughly $1 trillion off U.S. tech stocks.

The current restrictions on advanced AI processors, particularly from Nvidia (NVDA.O), are designed to prevent China from accessing the most sophisticated chips that could enhance its AI capabilities. The U.S. has been tracking organized smuggling operations of these chips into China from countries such as Malaysia, Singapore, and the United Arab Emirates.

DeepSeek has reportedly used Nvidia’s H800 chips, which it legally purchased in 2023. However, whether DeepSeek legally obtained its other U.S. chips remains unclear. It is also known to have Nvidia’s H20 chips, which can still be legally sold to China, though both the Biden administration and incoming Trump officials have weighed placing tighter restrictions on them.

In response to these allegations, Nvidia emphasized that it requires its partners to comply with U.S. export laws, noting that many of its Singapore-based customers use the country as a billing or shipping intermediary for products ultimately destined for the U.S. and the West. Singapore’s trade ministry said there was no indication that DeepSeek obtained export-controlled chips through the country, but added that it would continue to uphold the rule of law and cooperate with U.S. authorities.

DeepSeek has also been linked to chips that, while not banned, have raised concerns among AI industry experts. Dario Amodei, CEO of Anthropic, questioned the provenance of some of DeepSeek’s chips, suggesting they could include smuggled processors or ones acquired before export bans took effect.

The U.S. has imposed a range of restrictions on AI chip exports to China and is planning to extend these limits to other countries.

 

Anthropic Unveils Citations Feature to Enhance Claude’s Response Accuracy

On Thursday, Anthropic introduced a new feature designed to make the responses generated by its Claude AI models more reliable and accurate. Named Citations, the feature allows developers to constrain AI output to responses grounded in, and cited from, specific source documents. It targets one of the most significant challenges facing generative AI models: ensuring the accuracy of the information they provide. Anthropic has already rolled the feature out to companies such as Thomson Reuters (for its CoCounsel platform) and Endex, and, notably, it comes at no extra cost.

Improving Response Accuracy with Grounding

Generative AI models like Claude sometimes produce incorrect or “hallucinated” information, because their answers are synthesized from patterns in vast and varied training data rather than retrieved from a verified source. The problem becomes more pronounced when AI systems incorporate web searches, which force models to sift through even more material and increase the chance of inaccuracies. With Citations, Anthropic aims to address these challenges by grounding responses in a set of predefined documents, minimizing the risk of generating unreliable or false information.

A Solution for Developers Seeking More Control

While many AI companies offer specialized tools that restrict data access to improve accuracy—such as Google’s Gemini for Google Docs or PDF analysis tools in Adobe Acrobat—these solutions are often built into specific applications or platforms. For developers working in more open environments, like those creating various API-driven tools, it can be difficult to integrate such controls. Anthropic’s Citations feature helps bridge this gap, giving developers the ability to apply source restrictions without compromising the flexibility required for their projects.
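As a rough illustration, a Citations-enabled request might be shaped like the following. This is a sketch based on Anthropic’s published Messages API: the model name is illustrative, and the exact field layout should be checked against the current API reference before use.

```python
# Sketch of a Citations-style request body for Anthropic's Messages API.
# Field names follow Anthropic's published documentation, but verify them
# against the current API reference before relying on this shape.

def build_citations_request(document_text: str, question: str) -> dict:
    """Build a Messages API payload that grounds the answer in one document."""
    return {
        "model": "claude-3-5-sonnet-20241022",  # model name is illustrative
        "max_tokens": 1024,
        "messages": [
            {
                "role": "user",
                "content": [
                    {
                        # The source document Claude may cite from.
                        "type": "document",
                        "source": {
                            "type": "text",
                            "media_type": "text/plain",
                            "data": document_text,
                        },
                        # Opt this document into the Citations feature.
                        "citations": {"enabled": True},
                    },
                    # The question, to be answered from the document above.
                    {"type": "text", "text": question},
                ],
            }
        ],
    }

payload = build_citations_request(
    "Annualized revenue reached approximately $875 million.",
    "What is the company's annualized revenue?",
)
```

Because the payload is an ordinary JSON-style body, the same shape can be sent through Anthropic’s Python SDK or any HTTP client; the response then carries citation blocks pointing back to the supplied document.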

No Extra Cost for Enhanced Reliability

One of the standout aspects of the Citations feature is that it is available at no additional cost. This is a significant advantage for developers and companies looking to integrate more reliable AI responses into their tools without worrying about escalating expenses. By offering this feature for free, Anthropic not only makes it easier for businesses to adopt more dependable AI but also sets a new standard for how AI models can be utilized in real-world applications with a focus on accuracy. As AI continues to evolve, features like Citations could play a key role in ensuring these models are used responsibly and effectively.