Articles

Google Expands Deep Research AI Agent to Gemini App on Android

Google is bringing its Deep Research AI agent to the Gemini app for Android, expanding its capabilities beyond the web version. Initially launched in December 2024, this AI-powered research assistant was designed to create multi-step research plans, conduct web searches, and compile detailed reports on complex topics. Until now, this advanced tool was only accessible via the web, but with its integration into the mobile app, users will have greater flexibility in conducting in-depth research on the go. However, the feature remains exclusive to paid Gemini subscribers.

The official Gemini handle on X (formerly known as Twitter) confirmed the rollout of the Deep Research AI agent for Android users. According to the announcement, the feature is being gradually deployed and may take a few weeks to become available worldwide. Once integrated, users can access Deep Research through the Gemini Advanced drop-down menu within the app. This move is expected to enhance the app’s functionality, providing a more seamless and efficient research experience for mobile users.

One of the key highlights of the Deep Research AI agent is its multilingual support. Upon its initial launch, Google stated that the tool would be available in 45 languages, including Arabic, Bengali, English, French, Japanese, Russian, Tamil, and Vietnamese. This wide linguistic range makes the AI-powered research assistant more accessible to users across different regions, allowing them to conduct research in their preferred language with ease.

Deep Research is powered by Gemini 1.5 Pro, one of Google’s advanced AI models, which enables it to process and analyze complex queries efficiently. As AI continues to evolve, integrating research-focused tools like this into mobile applications signals Google’s commitment to making advanced AI-driven assistance more accessible. With the expansion of Deep Research into the Gemini Android app, users can expect a more comprehensive and intelligent research experience right at their fingertips.

Google Developing ‘Talk Live About Screen’ Shortcut for Gemini Live

Google is reportedly developing a new shortcut for its Gemini Live feature, making AI interactions even more seamless. First mentioned during the recent Galaxy Unpacked event, the “Talk Live About Screen” shortcut will allow users to have real-time, two-way voice conversations with Gemini AI about the content displayed on their screens. While initially showcased for the Samsung Galaxy S25 series, the feature is expected to roll out to other Android devices in the near future. A recent leak has provided further evidence that Google is actively working on integrating this shortcut into Gemini Live.

A well-known tipster, AssembleDebug, shared insights about the new shortcut on X (formerly Twitter). Although the method used to discover it remains unclear, it was likely found in the latest beta version of the Google or Gemini app. A screenshot shared in the post reveals the presence of a redesigned Gemini overlay, where a new “Ask About Screen” icon sits atop the user interface. This feature allows the AI assistant to capture a quick screenshot, enabling users to type their queries and receive AI-powered insights.

Currently, Gemini allows users to analyze on-screen content through text input, but voice-based interactions are not yet supported in this context. The new “Talk Live About Screen” shortcut aims to address this limitation by enabling spoken conversations about on-screen elements. The shortcut is positioned directly above the “Ask About Screen” button, offering users a more intuitive and efficient way to engage with Gemini AI.

At the Galaxy Unpacked event, a Google representative explained that tapping the shortcut would instantly take a screenshot and open the Gemini Live interface, allowing users to verbally interact with the AI. Although Google has not officially announced a release date, the presence of this feature in testing suggests that it could be rolled out soon, potentially transforming how users engage with AI for real-time screen analysis.

Google Introduces New Class of Cheap AI Models as Cost Concerns Intensify

Google has introduced new, cost-effective AI models under its Gemini family, responding to increasing competition and concerns over the escalating costs of artificial intelligence. The new offerings, including the “Flash-Lite” model, are designed to compete with cheaper AI models such as those from DeepSeek, a Chinese rival that has drawn attention for its low-cost AI training.

The company unveiled several versions of its Gemini 2.0 models, which offer varying levels of performance and pricing. Among these is “Gemini 2.0 Flash,” which was released to the general public after being previewed to developers in December. Flash-Lite, a more affordable variant, was developed in response to positive feedback on the earlier Gemini 1.5 Flash model. However, the cost of Gemini 2.0 Flash is higher than that of its predecessor.

Google’s new pricing strategy comes amid growing scrutiny from investors over the rising expenses of AI model development. Recently, DeepSeek revealed it spent just $6 million on the final training run of one of its models, prompting comparisons to the significantly higher costs cited by major U.S. AI firms, including Alphabet, Microsoft, and Meta. Despite this, DeepSeek’s low-cost model has spurred competitors to accelerate their AI spending, leading to concerns about the long-term profitability of such investments.

Pricing for Gemini Flash-Lite is competitive, with certain inputs costing as little as $0.019 per 1 million tokens. This is cheaper than OpenAI’s flagship model, which costs $0.075 per million tokens, and slightly higher than DeepSeek’s $0.014 model (though DeepSeek’s pricing will rise fivefold on February 8).
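The per-million-token figures above can be turned into concrete costs with a quick back-of-the-envelope calculation. The sketch below uses only the prices quoted in this article (not live pricing), and the model labels are shorthand, not official API identifiers:

```python
# Input prices in USD per 1 million tokens, as quoted in the article.
PRICES_PER_MILLION_TOKENS = {
    "Gemini Flash-Lite": 0.019,
    "OpenAI flagship": 0.075,
    "DeepSeek": 0.014,
}

def input_cost(model: str, tokens: int) -> float:
    """Cost in USD to process `tokens` input tokens with `model`."""
    return PRICES_PER_MILLION_TOKENS[model] / 1_000_000 * tokens

# Cost of processing a 10-million-token workload with each model:
for model, price in PRICES_PER_MILLION_TOKENS.items():
    print(f"{model}: ${input_cost(model, 10_000_000):.2f}")

# The article notes DeepSeek's price will rise fivefold on February 8,
# which would put it above Flash-Lite's rate:
deepseek_new = PRICES_PER_MILLION_TOKENS["DeepSeek"] * 5  # $0.07 per million
```

At these rates, even a 10-million-token workload costs well under a dollar on any of the three models, which illustrates why the pricing battle plays out in fractions of a cent per million tokens.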

These updates reflect Alphabet’s response to the growing pressure to provide affordable AI models while maintaining a competitive edge in the rapidly evolving AI space. However, despite these advancements, investor concerns remain about the sustainability of high capital expenditures in AI development.