Articles

Google Begins Rolling Out Gemini Assistant for Android Auto

Google has reportedly begun rolling out its Gemini assistant to Android Auto, marking a significant step in integrating AI-driven functionality into the in-car experience. Over the past few days, several users have spotted Gemini appearing in their Android Auto interfaces, suggesting that the Mountain View-based company is gradually introducing the assistant. While it remains unclear whether this rollout is part of a beta program or intended for wider public access, the development follows Google’s initial announcement of the feature at Google I/O in May.

According to a 9to5Google report, Gemini has been observed on Android Auto 15.6 when connected to the Google Pixel 10 Pro XL, and on Android Auto 15.7 when paired with the Samsung Galaxy Z Fold 7. Both of these Android Auto versions are currently in beta, indicating that Google may be using the beta environment to test Gemini’s performance and compatibility before a full-scale launch.

At this stage, there is no official word from Google regarding whether the rollout is exclusively a beta test or the beginning of a broader deployment. Users encountering Gemini in Android Auto might simply be part of an initial controlled rollout, with more devices and regions expected to gain access gradually. This phased approach allows Google to monitor performance, gather user feedback, and make adjustments before releasing the feature globally.

Despite the uncertainty, the introduction of Gemini in Android Auto signals Google’s ongoing push to bring AI assistants deeper into everyday workflows, including in-car navigation and hands-free interaction. By leveraging Gemini’s capabilities, drivers could potentially access smarter route suggestions, contextual reminders, and natural language queries, enhancing both convenience and safety for Android Auto users in the near future.

Anthropic Allegedly Developing Voice Mode Feature for Claude AI

Anthropic is reportedly working on a highly anticipated voice mode feature for its AI chatbot, Claude. The company, based in San Francisco, is expected to launch the new feature as early as this month, marking a significant shift for the AI firm. While competitors like OpenAI and Google have already integrated voice capabilities into their chatbots—such as ChatGPT’s voice feature and Gemini’s similar tool—Claude has so far only offered text-based interactions. This move comes shortly after Anthropic introduced an educational subscription plan, mirroring OpenAI’s Edu offering, signaling the company’s broader push into more dynamic AI tools.

The new voice mode is expected to arrive gradually; a Bloomberg report suggests the rollout could begin in April. Initially, it will be available to a select group of users, though plans remain subject to change. The inclusion of voice capabilities would put Claude on a more competitive footing with its peers, allowing users to interact with the AI in a more natural, conversational manner. The voice mode is likely to make the experience more immersive, pairing voice recognition with Claude’s advanced text-based responses.

According to sources familiar with the development, the feature will include three distinct voices: Airy, Mellow, and Buttery. Notably, Buttery is expected to have a British accent, adding a unique element to the AI’s vocal range. The feature was first spotted by an app researcher known as “M1Astra,” who found references to the voices in the code of Claude’s iOS app. Details about the voice mode remain sparse, however, and it is unclear whether it will be a basic text-to-speech function or offer more advanced, human-like voice synthesis akin to ChatGPT’s more sophisticated voice interaction system.

Anthropic’s delayed entry into the voice chatbot arena comes as major players in the AI space, including OpenAI, Google, and Microsoft, have already rolled out voice-based features. Meta, too, is reportedly developing a two-way voice chat mode for its Meta AI, further intensifying the competition. As Anthropic adds this functionality to Claude, it will be interesting to see how it stacks up against the established voice capabilities of its rivals. It also remains to be seen whether the feature will be available to all users or restricted to premium subscribers, leaving room for speculation about the company’s future plans.

Microsoft Enhances Copilot AI With Memory, Podcast Creation, and Agent-Like Abilities

Microsoft has unveiled a major update to its Copilot AI, introducing a suite of new features designed to make interactions more personalized, intelligent, and functional. These enhancements aim to bring Copilot closer to being a truly versatile assistant by enabling it to remember user preferences, create podcasts, and perform more complex tasks online. Previously limited to the web version, many of these features are now being rolled out across mobile devices and Windows desktop apps, broadening their accessibility.

One of the most significant additions is Copilot’s new memory capability. This feature allows the AI to retain important user-specific details like favorite foods, birthdays of family members, and personal interests. By recalling this information, Copilot can offer more contextually relevant suggestions and proactive reminders tailored to each individual. Microsoft emphasizes that users retain full control over this memory function — they can view, modify, or completely disable it at any time, ensuring privacy and comfort remain a priority.

In addition to memory, Microsoft has introduced agentic capabilities to Copilot, giving it the power to independently complete certain web-based tasks on behalf of users. It can now perform multi-step actions such as booking appointments, conducting in-depth research, or completing shopping tasks with minimal user input. This is part of Microsoft’s broader effort to make AI more action-oriented and capable of handling real-world tasks with little supervision.

Other features being rolled out include the expansion of Copilot Vision, which enhances the AI’s ability to understand visual content, and the addition of new tools such as Podcasts, Shopping, and Deep Research. These allow users to create audio content, browse and compare products more intelligently, and dive deep into complex topics with structured assistance. With this comprehensive upgrade, Microsoft is positioning Copilot as a deeply integrated assistant that can evolve with the user’s needs — blurring the lines between a chatbot and a full-fledged digital agent.