Meta is testing its first in-house AI chips: report

Meta, the parent company of Facebook, Instagram, and WhatsApp, is testing its first in-house chip designed to train artificial intelligence (AI) systems, according to sources cited by Reuters. The move marks a significant step in Meta's efforts to reduce its dependence on external suppliers like Nvidia and lower its massive infrastructure costs.
The new chip, part of Meta's Meta Training and Inference Accelerator (MTIA) series, is a dedicated accelerator tailored for AI-specific tasks. This makes it potentially more power-efficient than the general-purpose graphics processing units (GPUs) currently used for AI workloads.
Meta is working with Taiwan Semiconductor Manufacturing Company (TSMC) to produce the chip, with a small-scale deployment already underway. If successful, the company plans to ramp up production for wider use.
Developing in-house chips is a key part of Meta's strategy to manage its soaring expenses, which are projected to reach $114 billion to $119 billion in 2025, including up to $65 billion in capital expenditure driven by AI infrastructure.
The company aims to use its own chips for AI training by 2026, starting with recommendation systems for Facebook and Instagram feeds before expanding to generative AI products like its Meta AI chatbot.
Meta's chip development journey has faced challenges. The company previously scrapped an in-house inference chip after a failed test deployment in 2022, leading to a shift back to Nvidia GPUs. Despite this, Meta remains one of Nvidia's largest customers, relying heavily on its GPUs for training AI models, including its Llama foundation model series, and for inference tasks across its apps, which are used by over 3 billion people daily.
The push for custom silicon comes as doubts grow within the AI research community about the sustainability of scaling up large language models simply by adding more data and computing power. These concerns were underscored by the recent launch of low-cost, computationally efficient models from Chinese startup DeepSeek, which achieve their efficiency by leaning more heavily on inference than on ever-larger training runs.
According to the Reuters report, Meta's Chief Product Officer, Chris Cox, described the company's chip development as a "walk, crawl, run" process, noting that the first-generation inference chip for recommendations has been a "big success". The new training chip's prospects are less certain, however: the current test phase follows a costly and time-consuming "tape-out" process that carries no guarantee of working silicon.
If the chip performs well, it could help Meta reduce costs and gain more control over its AI infrastructure. For now, the company continues to balance its in-house efforts with its reliance on Nvidia's dominant GPU technology. Meta has not yet publicly commented on the testing of these in-house AI chips.