DeepSeek claims AI model trained for just $294,000, challenging U.S. rivals
Chinese AI developer DeepSeek has disclosed that its reasoning-focused R1 model cost just $294,000 to train—dramatically below the hundreds of millions reportedly spent by U.S. leaders such as OpenAI. The figure, revealed in a Nature article co-authored by founder Liang Wenfeng, is the company’s first public estimate of training costs and is likely to reignite debate over China’s position in the global AI race.
According to the paper, R1 was trained on a cluster of 512 Nvidia H800 chips over 80 hours. DeepSeek acknowledged for the first time that it also owns Nvidia A100 GPUs, which were used in preparatory phases before training shifted to the China-specific H800s. The H800 was designed to comply with U.S. export restrictions that bar Nvidia from selling its more powerful H100 and A100 chips to China.
The cost revelation is striking: OpenAI CEO Sam Altman has said foundational models cost “much more” than $100 million to train, though OpenAI has never published detailed figures. DeepSeek’s earlier claims of drastically lower development costs fueled January’s investor selloff in global tech stocks, amid fears the company could disrupt the market dominance of Nvidia and other AI giants.
Skepticism remains. U.S. officials have suggested DeepSeek may have obtained H100 chips despite restrictions, while U.S. companies have questioned whether its development relied on model distillation—a technique in which one AI model learns from the outputs of another. DeepSeek has acknowledged using Meta’s open-source Llama models and said its training data may have included content generated by OpenAI systems, though it insists this was incidental.
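For readers unfamiliar with the technique at the center of the dispute, the core of distillation can be sketched in a few lines: a student model is trained to match a teacher model’s temperature-softened output distribution rather than raw labels. This is a minimal, illustrative NumPy sketch of the general idea, not a description of DeepSeek’s actual pipeline; the function names and temperature value are arbitrary choices.

```python
import numpy as np

def softmax(logits, T=1.0):
    # Temperature-scaled softmax; higher T softens the distribution,
    # exposing more of the teacher's "dark knowledge" about wrong classes.
    z = np.asarray(logits, dtype=float) / T
    z -= z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(teacher_logits, student_logits, T=2.0):
    # KL divergence between softened teacher and student distributions.
    # Minimizing this trains the student to mimic the teacher's behavior.
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    # T^2 factor is the standard scaling from Hinton et al.'s formulation.
    return float(np.sum(p * (np.log(p) - np.log(q)))) * T * T

# A student whose logits match the teacher's incurs zero loss;
# any mismatch yields a positive penalty to train against.
teacher = [3.0, 1.0, -2.0]
print(distillation_loss(teacher, teacher))          # → 0.0
print(distillation_loss(teacher, [0.0, 0.0, 0.0]))  # → positive
```

The appeal for cost-conscious labs is that the student can be far smaller than the teacher: most of the compute was already spent training the teacher, and the student only needs to fit its output distribution.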
DeepSeek defends distillation as an efficient way to cut costs and expand access to AI by reducing the enormous energy and resource demands of large-scale training. Analysts note this could accelerate the spread of competitive AI models outside the U.S., though questions about intellectual property and national security will remain central to the debate.