Articles

TSMC Proposes Joint Venture with Intel’s Foundry Division to Nvidia, AMD, and Broadcom

TSMC (2330.TW) has pitched the idea of a joint venture involving Intel’s (INTC.O) foundry division to major U.S. chip designers, including Nvidia (NVDA.O), Advanced Micro Devices (AMD.O), and Broadcom (AVGO.O), according to sources familiar with the discussions. Under the proposal, TSMC, the world’s leading contract chipmaker, would operate Intel’s foundry division, which manufactures chips tailored to customers’ designs, while holding no more than a 50% stake.

The proposal has been discussed with several other firms as well, including Qualcomm (QCOM.O), as part of TSMC’s efforts to partner with chip designers. The discussions are still in their early stages, and any potential deal would require approval from the U.S. government, particularly under the administration of President Donald Trump, who has shown interest in helping Intel recover from its financial struggles. Trump is especially invested in boosting American manufacturing and in keeping companies like Intel U.S.-owned.

Intel, which reported an $18.8 billion net loss for 2024, has seen a drastic decline in its stock price over the past year. As of December 31, the book value of the foundry division’s property, plant and equipment stood at $108 billion. The company’s recent struggles have pushed its board members to consider various strategic moves, including partnering with TSMC for its foundry operations.

Despite some internal opposition, Intel’s board members have expressed support for exploring a joint venture with TSMC, though the company’s executives remain divided on the matter. Intel’s foundry division, a cornerstone of the company’s strategy under former CEO Pat Gelsinger, remains central to its efforts to return to profitability even after Gelsinger was replaced by interim co-CEOs in December.

TSMC’s push for a joint venture is complicated by the significant differences in manufacturing processes and technologies between the two companies. Intel and TSMC currently employ distinct chipmaking methods, which could pose challenges in aligning operations. Intel has previously partnered with Taiwan’s UMC (2303.TW) and Israel’s Tower Semiconductor (TSEM.TA), offering some precedent for potential collaboration, but the specifics of how such a partnership could function remain uncertain, especially regarding the sharing of trade secrets.

While TSMC wants Intel’s advanced-manufacturing customers to take part in the venture, the discussions have also centered on Intel’s 18A manufacturing process, a key point of contention in the negotiations. Intel executives have claimed that 18A surpasses TSMC’s 2-nanometer process; Nvidia and Broadcom are already testing Intel’s manufacturing capabilities, while AMD is evaluating whether Intel’s processes are suitable for its chips.

Meta Tests Its First In-House AI Training Chip

Meta, the parent company of Facebook, has initiated testing of its first in-house chip designed specifically for training artificial intelligence (AI) systems. This development marks a significant step in Meta’s plan to reduce its reliance on external chip suppliers like Nvidia and move toward producing its own custom silicon. Sources told Reuters that Meta has begun a small deployment of the chip and plans to expand production if the test proves successful.

Meta’s push to develop in-house chips is part of a broader strategy to reduce the high infrastructure costs associated with its AI projects. The company has forecast total 2025 expenses between $114 billion and $119 billion, including up to $65 billion in capital expenditure largely driven by investments in AI infrastructure.

The new chip is a dedicated accelerator, meaning it is built specifically for AI tasks, which makes it more power-efficient than the graphics processing units (GPUs) typically used for AI workloads. Meta is collaborating with Taiwan-based TSMC to produce the chip, which recently completed its first “tape-out,” the stage at which a finished design is sent to the factory for manufacturing. Tape-out is a crucial and costly milestone in chip development, running to tens of millions of dollars, and is essential for testing the chip’s functionality.

Meta has experienced setbacks in its Meta Training and Inference Accelerator (MTIA) series in the past, even scrapping one chip after its initial tests failed. Last year, however, Meta began using an MTIA inference chip for content recommendation systems on platforms like Facebook and Instagram. That progress has encouraged Meta to pursue further development of custom chips, which it aims to use for both training and inference of AI models, including generative AI products like Meta AI.

Meta plans to start using its own chips by 2026 for training purposes, aiming to reduce costs associated with AI model training. Chris Cox, Meta’s Chief Product Officer, discussed the company’s phased approach, noting that while progress has been slow, the success of the first-generation inference chip for recommendations has been a significant achievement. Despite the setbacks in developing custom chips, Meta continues to rely heavily on Nvidia’s GPUs for its AI needs, making it one of Nvidia’s largest customers.

The broader AI industry has begun questioning whether scaling up large language models with ever more data and computing power will keep paying off. Chinese startup DeepSeek has introduced new, more efficient AI models that lean more heavily on inference than on the computationally expensive training process, sparking concerns about the future value of GPUs like Nvidia’s, which have faced significant market volatility this year.

Celestial AI Secures $250 Million to Enhance AI Chip Connectivity

Silicon Valley-based startup Celestial AI has raised an additional $250 million in venture capital, bringing its total funding to $515 million. The company aims to accelerate AI computing by leveraging photonics—a technology that uses light instead of electrical signals—to enhance the speed of data transfer between AI processing and memory chips.

Memory bandwidth, a key determinant of AI system efficiency, is a crucial factor in chip performance and a focus of U.S. government export controls aimed at limiting China’s AI capabilities. Currently, Nvidia dominates this space with its proprietary NVLink and NVSwitch technologies, prompting a surge in investments to develop alternative solutions. Celestial AI’s competitors, Lightmatter and Ayar Labs, have raised $850 million and $370 million, respectively, in similar efforts.

Celestial AI is backed by AMD Ventures, the investment arm of Nvidia’s competitor Advanced Micro Devices (AMD). The company is working on a “photonic fabric” that acts as a high-speed bridge between multiple chips. According to CEO Dave Lazovsky, the technology improves efficiency by reducing energy consumption and latency while saving valuable chip space.

“There are no good answers outside of Nvidia,” Lazovsky said in an interview at Celestial AI’s headquarters in Santa Clara, California. “What we’ve created with photonic fabric achieves similar functionality but with superior energy efficiency and lower latency.”

The funding round was led by Fidelity Management & Research and included BlackRock, Maverick Capital, Tiger Global Management, and former Cadence Design Systems CEO Lip-Bu Tan. Existing investors such as AMD Ventures, Koch Disruptive Technologies, Singapore’s state investor Temasek, and Porsche Automobil Holding also participated.