Articles

Google Set to Collaborate with MediaTek on Upcoming AI Chip, Report Says

Google is reportedly preparing to partner with MediaTek to develop the next generation of its AI chips, the Tensor Processing Units (TPUs), which are expected to enter production next year. According to a report by The Information, the collaboration will allow Google to diversify its AI chip development efforts, with MediaTek playing a key role in the new iteration of its cutting-edge technology. Despite the new partnership, Google has not severed its long-standing relationship with Broadcom, the chip designer that has served as its sole design partner on AI chips for several years.

The decision to team up with MediaTek does not signal the end of Google’s collaboration with Broadcom. According to sources familiar with the matter, Google is continuing to work with Broadcom on its existing chip technologies. As a result, the company appears to be taking a dual approach to AI chip development, utilizing the expertise of both MediaTek and Broadcom to create chips that will enhance its cloud offerings and internal capabilities.

Google has also made significant strides in designing its own AI server chips, which it uses for internal research and development, as well as offering them to cloud customers. This approach, which mirrors that of competitors like Nvidia, gives Google a competitive edge in the rapidly growing AI market. By reducing its reliance on Nvidia chips, which have seen surging demand from companies like Microsoft-backed OpenAI and Meta Platforms, Google is positioning itself to better compete in the evolving AI space.

In late 2024, Google introduced its sixth-generation TPU, aimed at providing both internal and cloud-based alternatives to Nvidia’s chips, which remain the industry’s most sought-after processors. This move signifies Google’s commitment to offering powerful AI solutions that are not only competitive with Nvidia but also provide more options for cloud customers who are increasingly turning to AI-driven technologies. With the upcoming collaboration with MediaTek, Google is taking a step toward solidifying its position in the AI hardware landscape.

Google Cloud Integrates Chirp 3 Audio Generation Model into Vertex AI Platform

Google Cloud has expanded its AI offerings by bringing the Chirp 3 audio generation model to its Vertex AI platform, marking a significant step in enhancing the platform’s capabilities. Initially available in private preview, Chirp 3 is now accessible to all Vertex AI users. This cutting-edge model is designed to create human-like audio with a variety of custom voices, providing a more natural and expressive listening experience. The latest version of Chirp 3 introduces eight new voices and supports 31 different languages, further expanding its versatility and global reach.

The official announcement was made during the “Gemini for the United Kingdom” event held at Google DeepMind’s headquarters in London, where Google Cloud unveiled several notable updates and advancements related to artificial intelligence. Chirp 3’s integration into Vertex AI is poised to add significant value to the platform by enabling users to generate high-quality audio with nuanced and dynamic voice inflections, which can be useful across various applications, from virtual assistants to content creation.

Starting next week, Chirp 3 will be fully integrated into Vertex AI, joining other notable AI models such as Gemini, Imagen, and Veo. The addition of Chirp 3 will enhance the platform’s offerings, providing users with the ability to create realistic and expressive speech. Through its HD Voices feature, Chirp 3 will offer eight distinct speaker options across 31 languages, for a total of 248 voice variants, catering to a wide range of preferences and needs.
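As a rough sketch of how a developer might request one of these voices, the snippet below builds the JSON body for Google Cloud’s Text-to-Speech REST endpoint (`POST https://texttospeech.googleapis.com/v1/text:synthesize`). The specific Chirp 3 HD voice name used here is an assumption based on the eight announced speaker options; the published voice list should be checked before use, and no network call is made.

```python
import json

# Hypothetical Chirp 3 HD voice name -- an illustrative assumption,
# not confirmed against the official voice list.
DEFAULT_VOICE = "en-US-Chirp3-HD-Aoede"

def build_synthesis_request(text: str, voice_name: str = DEFAULT_VOICE) -> str:
    """Return the JSON body for a text:synthesize call using an HD voice."""
    payload = {
        "input": {"text": text},
        "voice": {
            # The language code is the first two segments of the voice name,
            # e.g. "en-US" from "en-US-Chirp3-HD-Aoede".
            "languageCode": "-".join(voice_name.split("-")[:2]),
            "name": voice_name,
        },
        "audioConfig": {"audioEncoding": "MP3"},
    }
    return json.dumps(payload, indent=2)

print(build_synthesis_request("Hello from Chirp 3."))
```

In practice the returned body would be sent with an authenticated HTTP request, and the response’s base64-encoded audio decoded to an MP3 file.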

One of the standout features of Chirp 3 is its ability to generate speech with human-like intonation and emotional depth, making it a powerful tool for creating immersive and lifelike audio experiences. Google Cloud’s continuous innovation in AI models like Chirp 3 signals the company’s commitment to advancing the field of artificial intelligence and empowering users with sophisticated tools for a wide range of applications.

Google Unveils Gemma 3 Open-Source AI Models, Optimized to Run on a Single GPU

Google has officially launched the Gemma 3 family of open-source artificial intelligence (AI) models, marking a significant advancement over the previous Gemma 2 series introduced in August 2024. The new models come with enhanced text and visual reasoning capabilities, offering the ability to process and analyze images, text, and short videos. One of the key selling points of the Gemma 3 series is its support for over 35 languages, with the ability to be fine-tuned to support up to 140 languages. This makes it an incredibly versatile tool for developers and organizations looking to integrate AI into multilingual applications. Additionally, these models are optimized to run on a single GPU or Google’s custom Tensor Processing Unit (TPU), making them more accessible and easier to deploy.

The Gemma 3 models are part of Google’s broader initiative to provide small language models (SLMs) that maintain high performance while being resource-efficient. Built using the same underlying technology as Google’s Gemini 2.0 models, Gemma models have already seen impressive uptake, with over 100 million downloads and more than 60,000 variants created by developers. By making these models open-source, Google continues its push to democratize AI, allowing a wide range of developers to leverage the power of advanced AI models without needing extensive computational resources.

In terms of performance, the Gemma 3 series has proven competitive with other industry-leading models: according to Google, it outperforms Meta’s Llama-405B, DeepSeek-V3, and OpenAI’s o3-mini on LMArena’s leaderboard. Available in four sizes (1B, 4B, 12B, and 27B parameters), the models can be tailored to different use cases, whether text processing or image and video analysis. They also come with a context window of 128,000 tokens, enabling them to handle large inputs efficiently, and support function calling, which lets developers build agentic capabilities into their applications and software.
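To make the function-calling point concrete, the sketch below shows the general pattern such support enables: the application advertises a tool, the model replies with a structured JSON tool call, and the application parses it and dispatches to the real function. The schema layout and the simulated model reply are illustrative assumptions, not Gemma 3’s exact wire format.

```python
import json

# Registry of tools the application exposes to the model.
# The lambda stands in for a real API call.
TOOLS = {
    "get_weather": lambda city: f"Sunny in {city}",
}

def dispatch(model_reply: str) -> str:
    """Parse a JSON tool call emitted by the model and invoke the named tool."""
    call = json.loads(model_reply)
    func = TOOLS[call["name"]]
    return func(**call["arguments"])

# Simulated model output requesting a tool invocation; in a real agent loop,
# the returned result would be fed back to the model as the next turn.
reply = '{"name": "get_weather", "arguments": {"city": "Ankara"}}'
print(dispatch(reply))
```

The essential design point is that the model never executes anything itself; it only emits structured requests, and the application keeps control over which functions actually run.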

Google has emphasized that these models were developed with careful attention to safety and risk management. The company has incorporated internal safety protocols through fine-tuning and benchmark evaluations to ensure that the models function responsibly. Additionally, the Gemma 3 models underwent testing with more capable AI models to ensure that they performed reliably while maintaining a low risk profile. By focusing on both performance and safety, Google aims to provide powerful AI tools that are not only effective but also secure and responsible in their deployment.