Posts

Google Assistant on Android Devices Set to Be Replaced by AI-Driven Gemini

Google is making a significant shift in its virtual assistant strategy by replacing Google Assistant with its AI-powered assistant, Gemini, for Android smartphone users. The company announced on Friday that the transition will take place over the next few months, with Gemini gradually becoming the default assistant across more devices. The change will not be limited to smartphones: Google also plans to roll Gemini out to tablets, Android Auto, and accessories such as headphones and earphones that connect to Android smartphones.

Gemini has been available to Android users for some time now, but it was initially offered as an optional feature. Users with compatible devices could choose to make Gemini their default virtual assistant, allowing them to take advantage of its advanced AI capabilities. At the same time, those who preferred the traditional Google Assistant experience had the option to continue using it. However, the upcoming changes will remove this choice, making Gemini the sole default assistant for all Android devices.

The shift to Gemini marks a notable departure from the legacy Google Assistant, which has been a cornerstone of Android’s virtual assistant ecosystem for years. The decision is part of Google’s broader strategy to integrate AI-driven technologies across its products, providing users with smarter and more responsive digital experiences. Gemini is expected to offer more context-aware responses and deeper integration with Google’s services, changing how users interact with their devices.

Although Google will make Gemini the default assistant, users will still have the option to install and use third-party virtual assistants if they prefer. This move signals Google’s confidence in Gemini’s ability to provide a more robust and dynamic assistant experience, while preserving flexibility for those who want to explore alternatives. As the transition unfolds, Android users can expect a more seamless, AI-enhanced virtual assistant experience across their devices.

Google Unveils Gemma 3 Open-Source AI Models, Optimized to Run on a Single GPU

Google has officially launched the Gemma 3 family of open-source artificial intelligence (AI) models, marking a significant advancement over the previous Gemma 2 series introduced in August 2024. The new models come with enhanced text and visual reasoning capabilities, offering the ability to process and analyze images, text, and short videos. One of the key selling points of the Gemma 3 series is its support for over 35 languages, with the ability to be fine-tuned to support up to 140 languages. This makes it an incredibly versatile tool for developers and organizations looking to integrate AI into multilingual applications. Additionally, these models are optimized to run on a single GPU or Google’s custom Tensor Processing Unit (TPU), making them more accessible and easier to deploy.
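To put the “runs on a single GPU” claim in perspective, here is a rough, back-of-envelope sketch of the memory footprint of model weights at different precisions. The bytes-per-parameter figures are standard for each precision, not Gemma-specific measurements, and the estimate ignores activations, KV cache, and runtime overhead, so real requirements will be somewhat higher:

```python
# Back-of-envelope VRAM estimate for model weights alone.
# Bytes-per-parameter values are generic precision sizes,
# not figures published by Google for Gemma 3.

GEMMA3_SIZES_B = {"1B": 1, "4B": 4, "12B": 12, "27B": 27}  # parameters, in billions

BYTES_PER_PARAM = {
    "fp16/bf16": 2.0,  # half precision
    "int8": 1.0,       # 8-bit quantization
    "int4": 0.5,       # 4-bit quantization
}

def weight_gib(params_billion: float, precision: str) -> float:
    """Approximate weight footprint in GiB for a given precision."""
    return params_billion * 1e9 * BYTES_PER_PARAM[precision] / 2**30

for name, size in GEMMA3_SIZES_B.items():
    estimates = ", ".join(
        f"{prec}: ~{weight_gib(size, prec):.1f} GiB" for prec in BYTES_PER_PARAM
    )
    print(f"Gemma 3 {name}: {estimates}")
```

By this estimate, even the 27B model quantized to 4 bits needs roughly 13 GiB for its weights, which is why single-GPU deployment on common 24 GB cards is plausible, while half-precision inference at that size would call for a larger accelerator.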

The Gemma 3 models are part of Google’s broader initiative to provide small language models (SLMs) that maintain high performance while being resource-efficient. Built using the same underlying technology as Google’s Gemini 2.0 models, Gemma models have already seen impressive uptake, with over 100 million downloads and more than 60,000 variants created by developers. By making these models open-source, Google continues its push to democratize AI, allowing a wide range of developers to leverage the power of advanced AI models without needing extensive computational resources.

In terms of performance, the Gemma 3 series is competitive with other industry-leading models. According to Google, it outperforms Meta’s Llama-405B, DeepSeek-V3, and OpenAI’s o3-mini models on the LMArena leaderboard. Available in four sizes — 1B, 4B, 12B, and 27B parameters — the models can be tailored to different use cases, whether for text processing or image and video analysis. Furthermore, the Gemma 3 models come with a context window of 128,000 tokens, enabling them to handle larger inputs efficiently. They also support function calling, allowing developers to build agentic capabilities into their applications and software.
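The function-calling pattern mentioned above generally works the same way across model families: the model emits a structured call naming a tool and its arguments, and the host application dispatches it to real code. A minimal, model-free sketch of that dispatch loop is below; the tool name, JSON shape, and `dispatch` helper are hypothetical illustrations, not Gemma’s actual API:

```python
import json

# Hypothetical tool registry illustrating the general function-calling
# pattern: the model emits a structured call, the host dispatches it.
TOOLS = {}

def tool(fn):
    """Register a Python function as a callable tool."""
    TOOLS[fn.__name__] = fn
    return fn

@tool
def get_weather(city: str) -> str:
    # Stub standing in for a real weather API call.
    return f"Sunny in {city}"

def dispatch(model_output: str) -> str:
    """Parse a model-emitted JSON call such as
    {"name": "get_weather", "arguments": {"city": "Ankara"}}
    and invoke the matching registered tool."""
    call = json.loads(model_output)
    fn = TOOLS[call["name"]]
    return fn(**call["arguments"])

# Simulate a structured call as a model supporting function calling might emit it.
print(dispatch('{"name": "get_weather", "arguments": {"city": "Ankara"}}'))
# → Sunny in Ankara
```

In a real integration, the model would be prompted with the available tool schemas and its structured output would be fed to a dispatcher like this, with the tool’s return value passed back into the conversation.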

Google has emphasized that these models were developed with careful attention to safety and risk management. The company has incorporated internal safety protocols through fine-tuning and benchmark evaluations to ensure that the models function responsibly. Additionally, the Gemma 3 models underwent testing with more capable AI models to ensure that they performed reliably while maintaining a low risk profile. By focusing on both performance and safety, Google aims to provide powerful AI tools that are not only effective but also secure and responsible in their deployment.

Google Rebrands Gemini Extensions as ‘Apps’, Enhanced with Gemini 2.0 Flash Thinking

On Friday, Google announced several updates to its Gemini platform aimed at improving user experience and enhancing its artificial intelligence (AI) capabilities. The updates focus on two main areas: new terminology for Gemini tools and an upgrade to how the AI chatbot integrates with other apps. The rollout is currently being extended to all Google Workspace accounts, while the terminology change applies to all Gemini users, bringing a more streamlined and unified experience.

One of the key changes involves renaming Gemini extensions to “apps.” While the functionality of these tools remains unchanged, Google has decided to remove the term “extensions” from the platform entirely. The Gemini interface will instead refer to these tools simply as apps, eliminating any mention of the previous term across both the Gemini app and the web client. The shift is designed to make the overall experience feel more cohesive and intuitive, aligning with a broader trend toward simplifying interface language.

As part of this update, the Gemini extensions menu is now labeled as the “Apps” menu. The description has also been updated to reflect the new terminology, now reading, “Bring it all together with Gemini and your favourite apps.” This replaces the earlier phrasing that mentioned extensions. Furthermore, the option to manage Gemini tools has been reworded from “Turn Gemini Extensions on or off anytime” to “Manage which apps Gemini connects to,” further emphasizing the move toward simplifying the platform’s language and user controls.

These changes signal Google’s ongoing efforts to enhance the integration between its AI services and other apps within the ecosystem. By adopting the term “apps,” the company aims to create a more seamless connection between Gemini and the wider array of tools available, improving the platform’s flexibility and user-friendliness. As Gemini continues to evolve, these updates are just a part of a broader push to make AI-driven interactions more accessible and easier to navigate for users across different platforms.