Articles

Meta Tests Its First In-House AI Training Chip

Meta, the parent company of Facebook, has initiated testing of its first in-house chip designed specifically for training artificial intelligence (AI) systems. This development marks a significant step in Meta’s plan to reduce its reliance on external chip suppliers like Nvidia and move toward producing its own custom silicon. Sources told Reuters that Meta has begun a small deployment of the chip and plans to expand production if the test proves successful.

Meta’s push to develop in-house chips is part of a broader strategy to reduce the high infrastructure costs associated with its AI projects. The company has forecast total 2025 expenses between $114 billion and $119 billion, including up to $65 billion in capital expenditure largely driven by investments in AI infrastructure.

The new chip is a dedicated accelerator, meaning it is built specifically for AI tasks, which can make it more power-efficient than the graphics processing units (GPUs) typically used for AI workloads. Meta is working with Taiwan-based TSMC to manufacture the chip, which has completed its first “tape-out” — the milestone at which a finished design is sent to the manufacturer for fabrication. While a tape-out is expensive, costing tens of millions of dollars, it is an essential step in testing the chip’s functionality.

Meta has experienced setbacks in its Meta Training and Inference Accelerator (MTIA) series in the past, even scrapping one chip after its initial tests failed. However, last year, Meta began using an MTIA inference chip for content recommendation systems on platforms like Facebook and Instagram. This progress has encouraged Meta to pursue further development of custom chips, aiming to use them for both training and inference of AI models, including generative AI products like Meta AI.

Meta aims to begin using its own chips for AI training by 2026 in order to cut the cost of training its models. Chris Cox, Meta’s Chief Product Officer, described the company’s phased approach, noting that while progress has been slow, the success of the first-generation inference chip for recommendations has been a significant achievement. Despite the setbacks in developing custom chips, Meta continues to rely heavily on Nvidia’s GPUs for its AI needs, making it one of Nvidia’s largest customers.

Across the broader AI industry, questions have been raised about the effectiveness of scaling up large language models with ever more data and computing power. Chinese startup DeepSeek has introduced lower-cost AI models that lean more heavily on inference-time computation than on the computationally expensive training process. This has sparked concerns about the future value of GPUs like those from Nvidia, whose shares have faced significant market volatility this year.

Meta AI Introduces New Memory Feature and Personalized Recommendations

Meta has announced new upgrades to its AI chatbot, aiming to provide users with a more personalized experience. The latest enhancements include a memory feature that allows Meta AI to retain specific details shared in individual chats and a personalized recommendation system that tailors suggestions based on users’ social media activity. These features are designed to make interactions with Meta AI more intuitive and relevant, aligning with users’ preferences over time.

In a newsroom post, Meta highlighted how these new features will enhance the chatbot’s capabilities. The company has been testing a memory feature that enables Meta AI to remember details from previous conversations. This means users won’t have to repeatedly provide the same information, making interactions more seamless. The feature is expected to improve over time as the AI learns user preferences and adapts accordingly.

Memory in Meta AI is designed to work only in one-on-one conversations, not group chats. Users can explicitly instruct the AI to remember certain details, or it can automatically retain relevant information from interactions. For example, if a user asks Meta AI for breakfast suggestions and specifies that they are vegetarian, the chatbot will remember this preference and suggest only vegetarian meals in future conversations.
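As a rough illustration of the behavior described above — not Meta’s actual implementation, and with all names hypothetical — a per-user preference memory of this kind can be sketched as a simple keyed store whose remembered facts filter later suggestions:

```python
# Illustrative sketch only: a minimal per-user chat memory of the kind
# described above. Class and function names here are hypothetical.

class ChatMemory:
    """Retains user-stated preferences and recalls them in later turns."""

    def __init__(self):
        self._facts = {}  # user_id -> {preference_key: value}

    def remember(self, user_id, key, value):
        # Store a detail the user shared (or asked the assistant to keep).
        self._facts.setdefault(user_id, {})[key] = value

    def recall(self, user_id, key, default=None):
        return self._facts.get(user_id, {}).get(key, default)


def suggest_breakfast(memory, user_id):
    # A remembered dietary preference narrows future suggestions,
    # mirroring the vegetarian example in the text.
    options = {
        "vegetarian": ["oatmeal with berries", "veggie omelette"],
        "any": ["bacon and eggs", "oatmeal with berries"],
    }
    diet = memory.recall(user_id, "diet", "any")
    return options.get(diet, options["any"])


memory = ChatMemory()
memory.remember("user-1", "diet", "vegetarian")
print(suggest_breakfast(memory, "user-1"))  # only vegetarian options
```

The point of the sketch is simply that once a detail is stored, every later request is answered against it, so the user never has to restate the preference.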

The second feature, personalized recommendations, takes this customization further by analyzing user activity across Meta’s social media platforms. By leveraging data from interactions on apps like Facebook and Instagram, Meta AI will be able to provide tailored content suggestions, event recommendations, and product discoveries. With these enhancements, Meta AI is set to offer a smarter, more user-centric chatbot experience.

WhatsApp for Android Reportedly Developing New Widget for Meta AI Integration

WhatsApp for Android is reportedly working on a new widget designed to provide users with quick access to Meta AI, the company’s in-house artificial intelligence chatbot. According to a feature tracker, this new widget will allow users to interact with Meta AI directly from their home screen, offering a more convenient and seamless way to access the chatbot. Meta AI was first introduced to WhatsApp in 2024, initially rolled out in select regions, and provides a native method for users to engage with Meta’s AI technology. The widget, which is still under development, is not yet available for beta testers to try out.

A screenshot shared in the report reveals that the new widget will be a 4 x 1-sized element, similar to the Google Search widget commonly found on Android devices. It features a light grey background with a white text field, accompanied by a camera icon. This minimalist design makes it easy for users to interact with Meta AI while keeping their home screens uncluttered. The text box includes the ring-shaped Meta AI logo, along with the prompt “Ask Meta AI,” prompting users to engage with the chatbot right from the home screen.

Upon tapping the text field, users will be able to type in a question or prompt; once it is sent, WhatsApp’s full-screen interface will open to the Meta AI chat, where users can read the chatbot’s response. According to the feature tracker, the widget will not display the chatbot’s replies within the widget itself — users will need to open the full interface to read them, keeping the widget lightweight while still providing access to the chatbot’s capabilities.

The addition of this new widget would further integrate Meta AI into WhatsApp, making it more accessible to users who wish to leverage artificial intelligence for everyday tasks and inquiries. This move also highlights Meta’s ongoing efforts to enhance its messaging platform with AI-powered features, positioning WhatsApp as a more versatile tool in the world of instant messaging. As the widget continues to be developed, it could represent an exciting new feature for WhatsApp users, offering them an innovative way to interact with AI technology.