
Analysts weigh in on Nvidia’s $100B OpenAI investment and strategic compute pact

Nvidia’s decision to invest up to $100 billion in OpenAI — securing at least 10 gigawatts of compute capacity — is being hailed as a power play that cements its dominance in AI infrastructure. But analysts caution the partnership also carries risks of overexposure and market concentration.

Matt Britzman, Hargreaves Lansdown:
Britzman called the deal a “huge prize” for Nvidia, estimating each gigawatt of AI data center capacity could equate to $50 billion in revenue. By tying OpenAI closely to its hardware and software ecosystem, Nvidia raises the stakes for rivals, ensuring GPUs remain the foundation of next-gen AI.

Jacob Bourne, eMarketer:
Bourne said the move reassures investors about Nvidia’s long-term demand pipeline while fending off competitive threats from rival chipmakers or Big Tech’s in-house chips. For OpenAI, the deal signals growing independence from Microsoft as it diversifies funding and resources.

Anshel Sag, Moor Insights & Strategy:
Sag highlighted the long-standing relationship between the firms, saying this validates Nvidia’s growth targets while giving OpenAI the scale to serve even larger customers.

Ben Bajarin, Creative Strategies:
Bajarin described the partnership as practical: Nvidia is simply enabling OpenAI to meet surging demand for GPUs, which remain its core compute backbone.

Kim Forrest, Bokeh Capital:
Forrest was more skeptical, warning that “being totally linked with each other” risks short-sightedness and could open doors for competitors to court other AI companies. She also questioned whether large language models (LLMs) will ultimately deliver the sweeping productivity gains many expect.

Gil Luria, D.A. Davidson:
Luria suggested Nvidia may be acting as the “investor of last resort,” propping up OpenAI’s heavy spending commitments rather than purely chasing opportunity.

David Wagner, Aptus Capital Advisors:
Wagner said the investment reflects CEO Jensen Huang’s long-term vision of building out “AI factories,” though the timing came earlier than many anticipated.

Stacy Rasgon, Bernstein:
Rasgon noted the partnership helps OpenAI pursue its ambitious compute goals while ensuring Nvidia hardware powers the expansion. But he flagged “circular” concerns about whether Nvidia is essentially financing its own demand, a critique that could intensify.

The mixed reactions underscore the scale of Nvidia’s gamble: a bet that doubling down on OpenAI — while fending off rivals — will extend its dominance in the AI era, even as questions linger over long-term sustainability.

OpenAI’s GPT-5 Model Nears Release Amid High Expectations

OpenAI is on the brink of releasing GPT-5, the next-generation language model succeeding GPT-4 and building on the ChatGPT phenomenon that began in 2022. Industry insiders and early testers express cautious optimism, praising its enhanced coding and scientific problem-solving capabilities, though some say the leap from GPT-4 to GPT-5 feels less dramatic than the jump from GPT-3 to GPT-4.

OpenAI, backed by Microsoft and currently valued at around $300 billion, has faced challenges scaling GPT-5 due to limitations in available training data and increased complexity in training runs that can last months and are prone to hardware failures. Unlike GPT-4, which saw significant gains through increased compute power and data, GPT-5 incorporates a novel approach called “test-time compute,” directing extra processing power dynamically to solve complex reasoning and decision-making tasks.

Since the debut of ChatGPT nearly three years ago, generative AI has rapidly advanced. GPT-4 notably outperformed its predecessor by passing the simulated bar exam in the top 10%, setting a new standard in AI capabilities. Meanwhile, competitors like Google and Anthropic have developed rival models, and open-source initiatives such as Meta’s Llama 3 have narrowed the performance gap.

OpenAI CEO Sam Altman noted earlier in 2025 that GPT-5 would blend traditional large model training with test-time compute techniques, reflecting the company’s increasingly sophisticated and multifaceted AI portfolio. The broader AI industry awaits the release with anticipation, expecting GPT-5 to unlock new applications beyond conversational AI toward fully autonomous task execution.

Huawei’s AI Lab Denies Copying Alibaba’s Qwen Model Amid Copyright Claims

Huawei’s AI research division, Noah’s Ark Lab, has denied allegations that its Pangu Pro Moe (Mixture of Experts) large language model copied from Alibaba’s Qwen 2.5 14B model. The lab insisted on Saturday that Pangu Pro was independently developed and trained, refuting claims made in a report by an entity named HonestAGI.

HonestAGI published a paper on GitHub claiming “extraordinary correlation” between Huawei’s Pangu Pro Moe and Alibaba’s Qwen model, suggesting that Huawei’s model might have been “upcycled” rather than trained from scratch. The report also raised concerns about potential copyright violations and false claims regarding Huawei’s investment in the model’s training.

In response, Noah’s Ark Lab stated that its model is not based on incremental training from other manufacturers’ models but instead includes key innovations in architecture and technical features. It highlighted that Pangu Pro is the first large-scale model built entirely on Huawei’s Ascend chips and confirmed adherence to open-source licensing rules for any third-party code used—though it did not specify which open-source models influenced the work.

Alibaba has yet to comment on the allegations, and the identity of HonestAGI remains unknown. The controversy comes amid rising competition in China’s AI sector, which has been accelerated by the release of open-source models like DeepSeek’s R1 and Alibaba’s Qwen family, designed for consumer and chatbot applications. In contrast, Huawei’s Pangu models are primarily applied in government, finance, and manufacturing sectors.