Meta Tests Its First In-House AI Training Chip
Meta, the parent company of Facebook, has initiated testing of its first in-house chip designed specifically for training artificial intelligence (AI) systems. This development marks a significant step in Meta’s plan to reduce its reliance on external chip suppliers like Nvidia and move toward producing its own custom silicon. Sources told Reuters that Meta has begun a small deployment of the chip and plans to expand production if the test proves successful.
Meta’s push to develop in-house chips is part of a broader strategy to reduce the high infrastructure costs associated with its AI projects. The company has forecast total 2025 expenses between $114 billion and $119 billion, including up to $65 billion in capital expenditure largely driven by investments in AI infrastructure.
The new chip is a dedicated accelerator, built specifically for AI tasks, which can make it more power-efficient than the general-purpose graphics processing units (GPUs) typically used for AI workloads. Meta is working with Taiwan-based TSMC to manufacture the chip, and it recently completed the chip’s first “tape-out,” the point at which a finished design is sent to the fabrication plant, a crucial milestone in chip development. Tape-out is expensive, costing tens of millions of dollars, but it is an essential step in the process, since only fabricated chips can be tested for functionality.
Meta has experienced setbacks with its Meta Training and Inference Accelerator (MTIA) series in the past, at one point scrapping a chip after it failed initial tests. Last year, however, Meta began using an MTIA inference chip to power content recommendation systems on platforms like Facebook and Instagram. That progress has encouraged the company to press ahead with custom silicon, which it aims to use for both training and inference of AI models, including generative AI products such as Meta AI.
Meta plans to start using its own chips for training by 2026, aiming to cut the costs associated with developing AI models. Chris Cox, Meta’s Chief Product Officer, has described the company’s phased approach: progress has been gradual, but the first-generation inference chip for recommendations has proved a significant success. In the meantime, Meta continues to rely heavily on Nvidia’s GPUs for its AI needs and remains one of Nvidia’s largest customers.
Across the broader AI industry, questions have emerged about whether scaling up large language models with ever more data and computing power will keep paying off. Chinese startup DeepSeek has introduced cheaper, more efficient AI models that shift more of the computational work toward inference rather than the costly training process. This has raised doubts about future demand for GPUs like Nvidia’s and contributed to significant volatility in Nvidia’s share price this year.