Articles

Wear OS Smartwatches May Soon Gain AI-Powered Query Responses and Enhanced Gemini Features

Google’s Gemini AI Set to Enhance Wear OS Smartwatches with Smarter Task Management

After confirming plans to extend its Gemini AI beyond smartphones, Google appears ready to bring these smart capabilities to Wear OS-powered smartwatches, including Samsung’s Galaxy Watch line. Recent leaks suggest that Samsung’s upcoming One UI 8 update will integrate Gemini Actions, enabling smartwatch users to get intelligent responses to queries, summarize emails, and perform various automated tasks right from their wrist. This expansion aims to boost the productivity and convenience of smartwatches by leveraging the same AI-driven automation features already found on smartphones.

The discovery comes from a detailed teardown of leaked One UI 8 APK files by Android Authority, in collaboration with AssembleDebug. Inside the Google Assistant app’s code, references to Gemini Actions hint at a range of new functionalities in development. These features include managing calendar events, summarizing emails, and answering general questions—functions that transform the smartwatch from a simple notification device into a more proactive assistant capable of handling everyday tasks on the go.

According to the code, users may also be able to interact with Gemini through customizable tiles, allowing quick access to specific AI-powered actions like rescheduling meetings or checking the weather. Additionally, there are indications that the smartwatch interface will provide easy controls to mute the AI’s voice feedback, giving users flexible options for interaction depending on their environment. This combination of voice and touch controls aims to make the AI assistant more intuitive and less intrusive during use.

While the leaked code strongly suggests these features will debut on Samsung Galaxy Watches with One UI 8, industry insiders speculate that the Gemini AI rollout could extend to other Wear OS smartwatches as well. Notably, Samsung might skip the intermediate One UI 7 update entirely and move directly to One UI 8 for its wearables, signaling a significant leap in software capabilities. This development reflects Google and Samsung’s commitment to making smartwatches smarter and more helpful through AI innovation.

Samsung Launches 2025 Smart TV Series in India Featuring Vision AI: Pricing and Availability Details

Samsung has officially launched its 2025 smart TV lineup in India, unveiling a diverse range of models including Neo QLED 8K, Neo QLED 4K, OLED, QLED, and The Frame series. A standout feature of this new collection is the introduction of Vision AI technology, marking the first time Samsung has integrated this artificial intelligence system into its smart TVs in the Indian market. First showcased at CES 2025 earlier this year, Vision AI is designed to enhance the viewing experience by making the TVs more interactive and responsive to user needs.

Pricing for Samsung’s 2025 smart TV lineup varies by model and features. The premium Neo QLED 8K series starts at Rs. 2,72,990, while the Neo QLED 4K models begin from Rs. 89,990. The OLED range is priced starting at Rs. 1,54,990, and the QLED smart TVs are available from Rs. 49,490. For those interested in Samsung’s artistic “The Frame” TVs, prices kick off at Rs. 63,990. Customers eager to own these TVs can place pre-orders starting May 7 via Samsung’s official website, popular e-commerce portals, and offline retail outlets.

To sweeten the deal, Samsung is offering attractive launch promotions including a free soundbar worth up to Rs. 90,990, cashback offers of up to 20 percent, and zero down payment options on EMI transactions. These offers are valid until May 28, making it an ideal time for consumers to upgrade their home entertainment setup. This launch is also notable for Samsung’s strategic push to position its smart TVs as intelligent home hubs, thanks to the Vision AI integration.

Vision AI brings several innovative features to the lineup. One such feature is Universal Gesture Control, which enables users to operate their TVs through simple hand gestures when paired with a compatible Galaxy Watch. Another exciting addition is Generative Wallpaper, allowing users to personalize their TV’s idle screen with custom 4K wallpapers created by AI. Furthermore, Samsung has embedded Vision AI within its SmartThings ecosystem, transforming the smart TV into a central control point for smart home devices — delivering real-time home status updates, safety alerts, and automation recommendations for a seamless connected living experience.

Hugging Face Unveils Free AI Agent Capable of Performing Digital Tasks Autonomously

Hugging Face has launched a new open-source AI tool called the Open Computer Agent, designed to autonomously perform various browser-based tasks. Released as a free demo, the tool is now publicly accessible through the Hugging Face website. The AI agent can navigate web platforms like Google Search, Google Maps, and even ticket booking sites to complete actions on behalf of the user — all without direct human input at each step. This development builds on Hugging Face’s smolagents framework, which was introduced earlier this year to facilitate lightweight autonomous agents.

Announced by Aymeric Roucher, Project Lead for Agents at Hugging Face, the Open Computer Agent runs in a virtualized Linux environment that includes applications like Mozilla Firefox. This setup allows the AI agent to interact with the web as a human would — clicking, typing, and navigating through browser interfaces in real time. With its open-source foundation, the project invites developers, researchers, and enthusiasts to explore and expand its capabilities.

The intelligence behind the agent comes from Qwen2-VL-72B, a powerful vision-language model capable of interpreting images and interfaces in terms of visual coordinates. This means the agent can "see" what's on screen, make decisions, and perform follow-up actions like clicking buttons or typing search queries. Hugging Face's smolagents library adds the logic layer that enables these autonomous interactions, forming the basis of the agentic workflow.
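Conceptually, the workflow described above is an observe–decide–act loop: the vision model reads a screenshot, proposes the next UI action as screen coordinates, and the logic layer executes it and repeats until the task is done. The Python sketch below illustrates only that loop shape — the names (`Action`, `mock_vision_model`, `run_agent`) are invented for this example, and a trivial stub stands in for Qwen2-VL-72B and the real smolagents machinery.

```python
from dataclasses import dataclass

@dataclass
class Action:
    kind: str                  # e.g. "click", "type", or "done"
    target: tuple[int, int]    # (x, y) screen coordinates from the vision model
    text: str = ""             # text to type, if any

def mock_vision_model(screenshot: str, goal: str) -> Action:
    """Stand-in for a vision-language model that maps the current screen
    plus the user's goal to the next UI action. A real agent would send
    an actual screenshot image to a model like Qwen2-VL-72B."""
    if "search box" in screenshot:
        return Action("type", (400, 120), goal)
    if "results" in screenshot:
        return Action("done", (0, 0))
    return Action("click", (640, 360))

def run_agent(goal: str, screens: list[str]) -> list[Action]:
    """Minimal agentic loop: observe the screen, ask the model to decide
    the next action, record ("act" on) it, and stop when the model
    signals completion."""
    trace: list[Action] = []
    for screenshot in screens:                        # observe
        action = mock_vision_model(screenshot, goal)  # decide
        trace.append(action)                          # act (stubbed)
        if action.kind == "done":
            break
    return trace

trace = run_agent("directions to the airport",
                  ["page with search box", "page with results"])
print([a.kind for a in trace])  # → ['type', 'done']
```

In the real system, the "act" step drives Firefox inside the virtualized Linux environment, and the loop continues until the browsing task is complete.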

Users trying out the demo can instruct the agent to carry out tasks like finding directions using Google Maps. Once prompted, the agent launches a browser, navigates to the correct site, inputs the required information, and completes the task — all without the user having to touch their keyboard or mouse. With the release of the Open Computer Agent, Hugging Face continues its push toward more accessible and transparent AI tools, empowering the public to experiment with emerging forms of digital automation.