Articles

Google Rebrands Gemini Extensions as ‘Apps’, Enhanced with Gemini 2.0 Flash Thinking

On Friday, Google announced several updates to its Gemini platform aimed at improving the user experience and enhancing its artificial intelligence (AI) capabilities. The updates focus on two main areas: a change in terminology for Gemini tools and an upgrade to how the AI chatbot integrates with other apps. The rollout is currently being extended to all Google Workspace accounts, while the terminology shift applies to all Gemini users, bringing a more streamlined and unified experience.

One of the key changes involves renaming Gemini extensions as “apps.” While the functionality of these extensions remains unchanged, Google has decided to remove the term “extensions” from the platform entirely. Instead, the Gemini interface will refer to these tools simply as apps, eliminating any mention of the previous term across both the Gemini app and the web client. This shift is designed to make the overall experience feel more cohesive and intuitive for users, in line with a broader trend toward simplifying interface language.

As part of this update, the Gemini extensions menu is now labeled as the “Apps” menu. The description has also been updated to reflect the new terminology, now reading, “Bring it all together with Gemini and your favourite apps.” This replaces the earlier phrasing that mentioned extensions. Furthermore, the option to manage Gemini tools has been reworded from “Turn Gemini Extensions on or off anytime” to “Manage which apps Gemini connects to,” further emphasizing the move toward simplifying the platform’s language and user controls.

These changes signal Google’s ongoing efforts to enhance the integration between its AI services and other apps within the ecosystem. By adopting the term “apps,” the company aims to create a more seamless connection between Gemini and the wider array of tools available, improving the platform’s flexibility and user-friendliness. As Gemini continues to evolve, these updates are just a part of a broader push to make AI-driven interactions more accessible and easier to navigate for users across different platforms.

Gemini for iOS Receives Update with Six New Lockscreen Widgets and Control Centre Integration

Gemini for iOS has received a major update that brings several new lockscreen widgets, enhancing user accessibility and convenience. The update, rolled out on Monday with Gemini for iOS version 1.2025.0762303, introduces six new widgets designed to provide quicker access to specific features within the app. With these additions, iPhone users can now interact with the Gemini app directly from their lockscreen without needing to unlock their devices. This update is part of Google’s ongoing effort to improve the functionality and usability of its AI-powered app, providing a more seamless experience for users.

The new lockscreen widgets are diverse, offering a range of features tailored to different needs. One of the most useful additions is the “Type Prompt” widget, which lets users type a query directly from the lockscreen and receive a response without unlocking the phone. Another is “Talk Live,” which opens Gemini Live for real-time, two-way conversations with the AI assistant; this is the quickest way to access Gemini Live, eliminating the need for a multi-step process. Additionally, the “Open Mic” widget opens the microphone for voice commands, making it easier to interact with the app hands-free.

The other three widgets, “Use Camera,” “Share Image,” and “Share File,” are particularly helpful for users looking to interact with visual content. The “Use Camera” widget opens the camera instantly, allowing users to capture an image and send it to Gemini for analysis, while the “Share Image” and “Share File” widgets let users quickly share images or files with Gemini and ask related queries, expanding the app’s utility in various contexts. Notably, all six widgets can also be set as corner buttons on the iPhone’s lockscreen, further enhancing their accessibility.

Alongside the lockscreen widget update, Google also introduced new Gemini Live features at the Mobile World Congress (MWC) 2025 in Barcelona. These new capabilities were showcased to demonstrate how the app is evolving, with real-time interaction and advanced functionality becoming integral parts of the Gemini experience. With these updates, Gemini continues to solidify its position as a powerful and convenient AI tool, now with even more ways for users to interact and access its features on the go.

Google Chrome for iOS to Introduce New ‘Search Screen with Google Lens’ Feature

Google has announced an exciting update for Chrome and the Google app on iOS, introducing a new visual lookup feature that integrates Google Lens. This enhancement, unveiled on Wednesday, allows users to perform visual searches directly from their devices, without needing to leave the browser or take screenshots. The feature uses artificial intelligence to identify objects on the screen, translate text, or even recognize music playing in the background. Google emphasized that this new tool will also offer AI-generated overviews, providing more in-depth results based on the visual search.

The new “Search Screen with Google Lens” feature will work seamlessly across all web pages in Google Chrome for iOS. By simply tapping, highlighting, or drawing around objects on a page, users can instantly activate the visual lookup. This eliminates the hassle of taking a screenshot and opening the Google Lens app separately. Instead, everything can now be done directly within the browser, making searches quicker and more efficient. Google believes that this integration will make browsing more interactive and intuitive, enhancing the user experience.

Google Lens has been a valuable tool for millions of users, with over 20 billion visual searches conducted each month. The company is expanding this functionality by integrating it into its iOS apps, starting with Chrome. With this update, users will be able to perform a wide range of tasks with just a tap. Whether identifying a landmark, translating foreign text, or recognising an object, the feature aims to simplify and enhance how users interact with the web.

Additionally, Google promises that the visual search tool will not only help identify items but will also offer more detailed AI-driven overviews for a deeper understanding of the objects in question. As this feature rolls out, it represents a significant step forward in merging AI technology with everyday tasks, providing users with more efficient and powerful search capabilities.