Articles

OpenAI to spend $100B on backup servers in five-year cloud push

OpenAI plans to spend $100 billion over the next five years renting backup servers from cloud providers, according to The Information. The investment comes on top of the $350 billion the company has already projected for server rentals between now and 2030, underscoring the massive infrastructure costs of training and deploying advanced AI systems.

The spending spree reflects the global race for scarce computing capacity, benefiting cloud giants and chipmakers as AI developers scramble to secure the hardware needed to train and run ever-larger models. With backup capacity included, OpenAI expects its server-rental spending to average $85 billion annually over the next five years.

Executives told shareholders the servers are “monetizable,” meaning they could generate additional revenue not yet included in forecasts—either by enabling new research breakthroughs or by handling spikes in product demand. Even so, OpenAI is projected to burn about $115 billion in cash through 2029 as it scales up infrastructure to support ChatGPT and future AI models.

The enormous outlays highlight both the intensity of the AI arms race and its risks: investors are wagering that today’s infrastructure spending will translate into tomorrow’s breakthroughs and revenue streams.

OpenAI teams with Apple supplier Luxshare to build consumer AI device

OpenAI has struck a deal with Luxshare, a major Apple supplier, to manufacture a prototype consumer AI device, according to The Information. The pocket-sized gadget is being designed to work natively with OpenAI’s AI models and adapt to user context, potentially offering an alternative to smartphones and PCs as the main way people interact with artificial intelligence.

The project represents one of the boldest pushes yet by an AI firm into dedicated hardware, rather than layering AI onto existing devices. Analysts say an “AI-native” product could open entirely new markets while challenging the dominance of established consumer electronics leaders such as Apple, Samsung, and Google.

OpenAI earlier this year acquired io Products, a hardware startup founded by former Apple designer Jony Ive, in a $6.5 billion deal to accelerate its hardware ambitions. Luxshare—best known for assembling iPhones and AirPods—will provide the large-scale manufacturing muscle. OpenAI has also reached out to Goertek, another Apple supplier, for components such as speaker modules.

Neither Luxshare nor OpenAI has commented publicly on the report. But the move underscores OpenAI’s effort to expand beyond software like ChatGPT into consumer electronics, a sector where hardware-software integration is often the key to success.

If successful, the device could mount a new kind of challenge to smartphones by offering a lightweight, AI-first alternative—part personal assistant, part communications tool—that reimagines how users connect to digital ecosystems.

DeepSeek claims AI model trained for just $294,000, challenging U.S. rivals

Chinese AI developer DeepSeek has disclosed that its reasoning-focused R1 model cost just $294,000 to train—dramatically below the hundreds of millions reportedly spent by U.S. leaders such as OpenAI. The figure, revealed in a Nature article co-authored by founder Liang Wenfeng, is the company’s first public estimate of training costs and is likely to reignite debate over China’s position in the global AI race.

According to the paper, R1 was trained on a cluster of 512 Nvidia H800 chips over 80 hours. DeepSeek acknowledged for the first time that it also owns Nvidia A100 GPUs, which were used in preparatory phases before training shifted to the China-specific H800s. The H800 was designed to comply with U.S. export restrictions that bar Nvidia from selling its more powerful H100 and A100 chips to China.
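Taking the reported figures at face value, a rough back-of-envelope check helps put the headline number in context. This is an illustration only: the assumption that the $294,000 covers just the single 512-GPU, 80-hour run is a labeled simplification, since the paper’s accounting may bundle in other phases.

```python
# Back-of-envelope arithmetic on DeepSeek's reported R1 figures.
# Assumption (not from the paper): the $294,000 covers only the
# single 512-GPU, 80-hour run described above.
gpus = 512
hours = 80
reported_cost_usd = 294_000

gpu_hours = gpus * hours                      # 40,960 GPU-hours
implied_rate = reported_cost_usd / gpu_hours  # ~$7.18 per GPU-hour
print(f"{gpu_hours:,} GPU-hours -> ~${implied_rate:.2f}/GPU-hour implied")
```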

The cost revelation is striking: OpenAI CEO Sam Altman has said foundational models cost “much more” than $100 million to train, though OpenAI has never published detailed figures. DeepSeek’s claim of drastically lower costs fueled January’s investor selloff in global tech stocks, amid fears it could disrupt the market dominance of Nvidia and other AI giants.

Skepticism remains. U.S. officials have suggested DeepSeek may have obtained H100 chips despite restrictions, while U.S. companies have questioned whether its development relied on model distillation—a technique where one AI model learns from another. DeepSeek has admitted using Meta’s open-source Llama models and said its training data may have included content generated by OpenAI systems, though it insists this was incidental.
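For context on the technique itself: distillation trains a “student” model to imitate the output distribution of a “teacher” model rather than learning from raw data alone. A minimal sketch, assuming a PyTorch-style setup (the shapes, temperature, and loss form here are illustrative of standard soft-label distillation, not details of DeepSeek’s pipeline):

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Soft-label distillation: the student matches the teacher's
    softened output distribution via KL divergence."""
    t = temperature
    teacher_probs = F.softmax(teacher_logits / t, dim=-1)
    student_log_probs = F.log_softmax(student_logits / t, dim=-1)
    # Scale by t^2 so gradient magnitudes stay comparable across temperatures.
    return F.kl_div(student_log_probs, teacher_probs,
                    reduction="batchmean") * (t * t)

# Toy usage: random logits stand in for real model outputs.
teacher_logits = torch.randn(4, 32_000)  # batch of 4, hypothetical vocab size
student_logits = torch.randn(4, 32_000, requires_grad=True)
loss = distillation_loss(student_logits, teacher_logits)
loss.backward()
```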

DeepSeek defends distillation as an efficient way to cut costs and expand access to AI by reducing the enormous energy and resource demands of large-scale training. Analysts note this could accelerate the spread of competitive AI models outside the U.S., though questions about intellectual property and national security will remain central to the debate.