Meta has reportedly started testing its first in-house chipsets designed for training artificial intelligence (AI) models. These processors, developed under the Meta Training and Inference Accelerator (MTIA) program, mark a significant step in the company’s effort to reduce reliance on third-party chip suppliers. A limited number of these custom chips have been deployed for initial testing to evaluate their performance and efficiency. If the tests yield positive results, Meta is expected to scale up production and integrate these chipsets into its AI infrastructure.
According to a Reuters report, Meta has collaborated with Taiwan Semiconductor Manufacturing Company (TSMC) to develop these AI-focused processors. The company has reportedly completed the tape-out stage—one of the final steps in chip design—indicating that the project is moving closer to full-scale deployment. While testing is still in its early stages, Meta’s move highlights its commitment to developing proprietary AI hardware, potentially giving it more control over performance optimization and cost management.
This is not Meta’s first venture into AI chip development. The company previously introduced custom accelerators built specifically for AI inference tasks. However, until now, Meta lacked in-house chipsets dedicated to training large-scale AI models such as its Llama family of large language models (LLMs). With these new processors, the company aims to enhance its AI capabilities while reducing dependence on external chip manufacturers like Nvidia and AMD.
If Meta successfully scales up production of its custom AI chipsets, it could lead to more efficient AI training, improved model performance, and lower operational costs. The move aligns with a broader industry trend where major tech firms, including Google and Amazon, are investing in custom AI chips to stay competitive in the rapidly evolving AI landscape. As Meta continues its AI hardware push, further details about its chip performance and deployment strategy are expected to emerge in the coming months.