Articles

Intel to unveil Panther Lake chip details, its first built entirely on 18A process

Intel plans to reveal the technical architecture of its upcoming laptop chip, Panther Lake, on Thursday, according to sources cited by Reuters. The disclosure aims to reassure investors about Intel’s progress on its long-awaited 18A manufacturing process, the company’s next-generation technology platform developed after years of costly setbacks.

The Panther Lake chips will serve as Intel’s high-end mobile processors, featured in premium laptops. They are the first large-scale products built entirely using 18A — a key milestone as Intel seeks to reclaim market share lost to AMD and TSMC. The chipmaker conducted in-depth technical briefings and factory tours last week in Arizona, showcasing the redesigned architecture, including the AI engine, graphics cores, and media processing unit optimized for 18A.

According to those briefed, Panther Lake offers 30% better energy efficiency and up to 50% higher processing performance than its predecessor, Lunar Lake — a chip largely manufactured by TSMC. Intel executives said the new processors are expected to debut in early 2026.

The Arizona event underscored how vital Panther Lake is to Intel’s turnaround. The company reported a $2.9 billion loss in the second quarter and warned that future investments in its 14A process depend on finding new customers. Following political and financial turbulence — including President Trump’s call for CEO Lip-Bu Tan’s resignation and subsequent investments from SoftBank and Nvidia — Intel is under pressure to deliver results.

The Fab 52 facility in Arizona, built under former CEO Pat Gelsinger’s global expansion strategy, now houses the 18A process, featuring a new transistor design and more efficient power delivery. Intel did not disclose yield rates for Panther Lake, though previous reports indicate the success rate has improved from 5% to about 10% this year.

AI Startup Modular Raises $250 Million to Take On Nvidia’s Software Dominance

AI startup Modular announced Wednesday it has raised $250 million in fresh funding, giving the company a valuation of $1.6 billion as it looks to loosen Nvidia’s grip on the AI computing ecosystem.

The round, which nearly tripled Modular’s valuation from two years ago, was led by the U.S. Innovative Technology Fund with participation from DFJ Growth and existing backers GV, General Catalyst, and Greylock.

Founded in 2022 by former Apple and Google engineers, Modular has built a platform that lets developers run AI applications across multiple types of chips without rewriting code for each one. Its clients include cloud providers such as Oracle and Amazon, as well as chipmakers Nvidia and AMD.

Nvidia’s dominance—holding more than 80% of the high-end AI chip market—is reinforced by its proprietary CUDA software, which locks in over 4 million developers worldwide. Modular positions itself as a neutral alternative, branding its approach the “Switzerland strategy.”

Co-founder and CEO Chris Lattner emphasized that Modular isn’t aiming to topple Nvidia directly. “What we’re focused on is not like pushing down Nvidia or crushing them. It’s more about enabling a level playing field so that other people can compete,” he said.

The company plans to sell its software directly to enterprises on a usage-based model and through revenue-sharing deals with cloud providers. Investors are betting that a multi-vendor AI hardware future is inevitable. DFJ Growth partner Sam Fort described Modular as “VMware for the AI era,” enabling workloads to move seamlessly across different chip vendors.

With around 130 employees, Modular plans to use the new capital to grow its engineering and sales teams and to expand beyond AI inference into the more demanding AI training market.

Nvidia’s $100B OpenAI deal sparks funding, valuation, and competition questions

Nvidia’s plan to invest up to $100 billion in OpenAI — while also supplying millions of its GPUs to the ChatGPT maker — is unprecedented in the tech sector and raises major uncertainties about finance, competition, and market impact.

Key open questions:

1. Where does the rest of the money come from?

  • Nvidia has pledged $10B per gigawatt for 10 GW of compute, but CEO Jensen Huang estimates $50B is needed per gigawatt (with $35B of that spent on Nvidia hardware).

  • That leaves a massive $40B funding gap per GW. OpenAI has not disclosed how it will raise the remainder.
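The arithmetic behind that gap can be checked directly. The sketch below uses only the figures reported above (all in billions of USD); the variable names are illustrative, not from any official filing:

```python
# Back-of-the-envelope check of the funding gap described above.
# All figures are the article's reported estimates, in billions of USD.
COST_PER_GW = 50            # Huang's estimated total build cost per gigawatt
NVIDIA_PLEDGE_PER_GW = 10   # Nvidia's pledged investment per gigawatt
TOTAL_GW = 10               # planned compute capacity

gap_per_gw = COST_PER_GW - NVIDIA_PLEDGE_PER_GW  # unfunded portion per GW
total_gap = gap_per_gw * TOTAL_GW                # unfunded portion overall

print(f"Gap per GW: ${gap_per_gw}B")       # $40B, matching the figure above
print(f"Total unfunded: ${total_gap}B")    # $400B across the full 10 GW
```

Scaled across the full 10 GW, the per-gigawatt gap implies roughly $400B that OpenAI would need to raise from other sources.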

2. How does this fit OpenAI’s shift to for-profit?

  • OpenAI is transitioning from a nonprofit into a public benefit corporation overseen by its nonprofit parent.

  • Nvidia’s investment may hinge on this structure, but it’s unclear if funding flows to the nonprofit entity or the restructured PBC.

  • Regulatory approval in Delaware and California is still pending.

3. What does it mean for OpenAI’s valuation?

  • Nvidia’s initial $10B tranche is pegged to OpenAI’s current $500B valuation.

  • But there’s no timeline for deploying the full 10 GW or committing the entire $100B. Future investments may depend on OpenAI’s valuation at the time, raising uncertainty about dilution and pricing.

4. How will competition be affected?

  • Nvidia’s chips remain the most coveted resource in AI. By tying up vast capacity with OpenAI, rivals like Anthropic, Google, or even Microsoft could face constraints in access.

  • Competitors like AMD may find it harder to gain traction if Nvidia prioritizes OpenAI, despite Nvidia’s public pledge to “make every customer a top priority.”

5. What does it mean for Oracle?

  • Oracle has signed hundreds of billions in cloud contracts with OpenAI, but analysts question whether OpenAI has the liquidity to pay.

  • Nvidia’s cash infusion could strengthen Oracle’s revenue outlook, reassuring investors and credit agencies like Moody’s, which flagged funding risks.

Big picture:

The deal deepens the interdependence of AI’s leading players — Nvidia for chips, OpenAI for models, Microsoft for software integration, and Oracle for cloud. But it also amplifies antitrust concerns, as U.S. regulators eye whether such alliances foreclose competition in the AI stack.