Articles

Google Unveils Gemma 3 Open-Source AI Models, Optimized to Run on a Single GPU

Google has officially launched the Gemma 3 family of open-source artificial intelligence (AI) models, marking a significant advancement over the previous Gemma 2 series introduced in August 2024. The new models come with enhanced text and visual reasoning capabilities, offering the ability to process and analyze images, text, and short videos. One of the key selling points of the Gemma 3 series is its support for over 35 languages, with the ability to be fine-tuned to support up to 140 languages. This makes it an incredibly versatile tool for developers and organizations looking to integrate AI into multilingual applications. Additionally, these models are optimized to run on a single GPU or Google’s custom Tensor Processing Unit (TPU), making them more accessible and easier to deploy.

The Gemma 3 models are part of Google’s broader initiative to provide small language models (SLMs) that maintain high performance while being resource-efficient. Built using the same underlying technology as Google’s Gemini 2.0 models, Gemma models have already seen impressive uptake, with over 100 million downloads and more than 60,000 variants created by developers. By making these models open-source, Google continues its push to democratize AI, allowing a wide range of developers to leverage the power of advanced AI models without needing extensive computational resources.

In terms of performance, the Gemma 3 series has proven competitive with other industry-leading models. According to Google, it outperforms Meta's Llama-405B, DeepSeek-V3, and OpenAI's o3-mini on the LMArena leaderboard. Available in four sizes (1B, 4B, 12B, and 27B parameters), the models can be tailored to different use cases, whether text processing or image and video analysis. The Gemma 3 models also come with a context window of 128,000 tokens, enabling them to handle larger inputs efficiently, and they support function calling, which lets developers build agentic capabilities into their applications and software.
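Function calling generally works by having the model emit a structured request that the application parses and routes to real code. The sketch below is a minimal, model-agnostic illustration of that loop; the tool name `get_weather`, the JSON shape, and the simulated model output are assumptions for illustration, not Gemma 3's actual API:

```python
import json

# Hypothetical tool registry in the JSON-schema style commonly used for
# LLM function calling. Names here are illustrative, not Gemma 3's API.
TOOLS = {
    "get_weather": {
        "description": "Return the current weather for a city.",
        "parameters": {"city": "string"},
    }
}

def get_weather(city: str) -> str:
    # Stub implementation; a real application would call a weather service.
    return f"Sunny in {city}"

def dispatch(model_output: str) -> str:
    """Parse a function call emitted by the model as JSON and execute it."""
    call = json.loads(model_output)
    name, args = call["name"], call.get("arguments", {})
    if name not in TOOLS:
        raise ValueError(f"Unknown tool: {name}")
    return globals()[name](**args)

# Simulated model turn: the model decides to invoke the registered tool.
model_output = '{"name": "get_weather", "arguments": {"city": "Ankara"}}'
print(dispatch(model_output))  # Sunny in Ankara
```

In a real agentic loop, the tool's return value would be fed back to the model as the next turn so it can compose a final answer.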

Google has emphasized that these models were developed with careful attention to safety and risk management. The company incorporated internal safety protocols through fine-tuning and benchmark evaluations to ensure the models behave responsibly. The Gemma 3 models were also evaluated against more capable AI models to confirm they perform reliably while maintaining a low risk profile. By focusing on both performance and safety, Google aims to provide AI tools that are not only powerful but also secure and responsible in their deployment.

Meta Reportedly Testing In-House AI Training Chipsets for the First Time

Meta has reportedly started testing its first in-house chipsets designed for training artificial intelligence (AI) models. These processors, developed under the Meta Training and Inference Accelerator (MTIA) program, mark a significant step in the company’s effort to reduce reliance on third-party chip suppliers. A limited number of these custom chips have been deployed for initial testing to evaluate their performance and efficiency. If the tests yield positive results, Meta is expected to scale up production and integrate these chipsets into its AI infrastructure.

According to a Reuters report, Meta has collaborated with Taiwan Semiconductor Manufacturing Company (TSMC) to develop these AI-focused processors. The company has reportedly completed the tape-out stage—one of the final steps in chip design—indicating that the project is moving closer to full-scale deployment. While testing is still in its early stages, Meta’s move highlights its commitment to developing proprietary AI hardware, potentially giving it more control over performance optimization and cost management.

This is not Meta's first venture into AI chip development. The company previously introduced MTIA accelerators designed specifically for inference tasks. Until now, however, Meta lacked in-house chipsets dedicated to training large-scale AI models such as its Llama family of large language models (LLMs). With these new processors, the company aims to enhance its AI capabilities while reducing dependence on external chip manufacturers like Nvidia and AMD.

If Meta successfully scales up production of its custom AI chipsets, it could lead to more efficient AI training, improved model performance, and lower operational costs. The move aligns with a broader industry trend where major tech firms, including Google and Amazon, are investing in custom AI chips to stay competitive in the rapidly evolving AI landscape. As Meta continues its AI hardware push, further details about its chip performance and deployment strategy are expected to emerge in the coming months.

Former Intel CEO Pat Gelsinger Joins Playground Global as General Partner

Pat Gelsinger, the former CEO of Intel, has joined venture capital firm Playground Global as a general partner. Alongside that role, he has joined the board of xLight, a startup focused on developing advanced chip manufacturing technology.

Playground Global and Gelsinger’s Role

Founded in 2015, Playground Global is a Silicon Valley-based venture capital firm with $1.2 billion in assets under management. The firm specializes in deep technology investments, including semiconductors and AI. Playground’s notable investments include MosaicML, an AI firm sold to Databricks in a $1.3 billion stock deal, and PsiQuantum, a quantum computing firm raising funds to build quantum computers in the U.S. and Australia.

Gelsinger, who left Intel after disagreements with its board over his turnaround strategy, will focus on supporting 10 to 20 of Playground’s portfolio companies. His mission is to identify technologies that can deliver breakthroughs, specifically those that can perform at least 10 times better than current solutions.

Focus on Innovation in Semiconductor Technology

One of Gelsinger’s first moves is to join xLight, a Playground portfolio company, as executive chairman. xLight is developing a new type of laser technology to produce extreme ultraviolet (EUV) light for chip manufacturing. This technology aims to use significantly less electricity than current EUV lasers, which are produced by ASML Holding, the industry leader in lithography machines.

Gelsinger believes that this new laser technology could significantly enhance chip production capabilities, making chips smaller and faster—a continuation of the progress first outlined by Moore’s Law, which predicts the doubling of transistors on a chip approximately every two years. He emphasized the importance of advancing these technologies domestically, particularly in the U.S., to ensure continued innovation in the semiconductor industry.
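The doubling behavior Moore's Law describes compounds quickly. A toy projection, purely illustrative (the starting count and horizon below are arbitrary assumptions, not industry figures):

```python
def transistors(initial: int, years: float, doubling_period: float = 2.0) -> int:
    """Project a transistor count forward assuming periodic doubling,
    as Moore's Law predicts (roughly every two years)."""
    return round(initial * 2 ** (years / doubling_period))

# Starting from a hypothetical 10 billion transistors, a decade of
# doubling every two years compounds to a 32x increase (2 ** 5).
print(transistors(10_000_000_000, 10))  # 320000000000
```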

Looking Ahead

Gelsinger’s move to Playground Global signals his commitment to driving innovation in the semiconductor and tech industries. His extensive experience at Intel and deep understanding of chip manufacturing will bring valuable insights as he works to accelerate advancements in cutting-edge technologies that could shape the future of computing.