Articles

Google Introduces Enhanced AI and Accessibility Tools for Android and Chrome Users

Google has unveiled a range of new artificial intelligence (AI) and accessibility enhancements for Android devices and the Chrome browser, timed to coincide with Global Accessibility Awareness Day, which falls on the third Thursday of May each year. These updates are designed to make digital experiences more inclusive, particularly for users with vision and hearing challenges. The tech giant has integrated advanced Gemini AI capabilities into existing features and expanded access to previously US-only tools, while also introducing new functionality in Chrome aimed at improving accessibility for users with low vision.

On the Android front, Google is enhancing its TalkBack screen reader by broadening the Gemini-powered image description feature. Previously, TalkBack could generate detailed descriptions of images lacking alt text; now users can go further, asking follow-up questions about an image or even about everything currently shown on the screen. This conversational ability brings a new level of interactivity and independence for users who rely on screen readers. Additionally, Google is expanding its Expressive Captions feature, an AI-powered enhancement that enriches live captions with emotional and contextual cues such as tone and volume, which was previously limited to the US.

Expressive Captions helps convey the mood and nuances behind speech in subtitles. For example, instead of a simple “no,” the captions might display “noooooo” to indicate emphasis or frustration, or show excitement with phrases like “amaaazing shot” during a sports broadcast. This feature is now rolling out in English to users in Australia, Canada, the UK, and the US on devices running Android 15 or later, aiming to make captions feel more natural and expressive.
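Google has not published how Expressive Captions decides when and how to stretch a word, but the general idea of turning an emphasis signal into stylized text can be illustrated with a toy sketch. Everything here is hypothetical: the `emphasis` score and the vowel-stretching rule are invented for illustration, not Google's implementation.

```python
def stylize_caption(word: str, emphasis: int) -> str:
    """Stretch the last vowel of a word to mimic vocal emphasis.

    `emphasis` is a hypothetical 0-5 intensity score: 0 leaves the
    word unchanged, and higher values repeat the stressed vowel more,
    e.g. turning "no" into "noooooo".
    """
    if emphasis <= 0:
        return word
    vowels = "aeiouAEIOU"
    # Walk backwards to find the last vowel and repeat it in place.
    for i in range(len(word) - 1, -1, -1):
        if word[i] in vowels:
            return word[:i] + word[i] * (emphasis + 1) + word[i + 1:]
    return word  # no vowel to stretch


print(stylize_caption("no", 5))   # "noooooo"
print(stylize_caption("no", 0))   # "no"
```

In the real feature this decision is driven by an AI model analyzing tone and volume in the audio; the sketch only shows the text-rendering half of the pipeline.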

The Chrome browser is also receiving significant accessibility upgrades. One notable addition is optical character recognition (OCR) support for scanned PDF documents. Until now, screen readers were unable to interpret text within scanned PDFs, limiting access for users with visual impairments. With the new OCR feature, Chrome can now recognize, highlight, copy, and search text in scanned PDFs, while enabling screen readers to vocalize the content. These improvements mark an important step toward making web content more accessible and usable for everyone.

Google Chrome Patches 23-Year-Old Bug That Exposed Users’ Browsing History

Google Chrome is finally addressing a longstanding privacy vulnerability that has existed for over two decades. This bug allowed malicious websites to detect whether users had previously visited certain links by exploiting how browsers visually indicate visited links. Although some browsers implemented workarounds over the years, Google’s upcoming update introduces a more comprehensive fix. The patch is set to arrive with Chrome version 136, which is expected to begin rolling out later this month.

The root of the issue lies in the CSS :visited selector, a styling rule that changes the appearance of hyperlinks a user has already clicked. By default, visited links are rendered in purple while unvisited ones are blue. Because the browser's visited-link history was shared globally across websites, this styling created an opening for abuse: a malicious page could embed the same link that appears on another site and then infer, through scripting tricks or rendering side channels, whether the browser styled it as visited, effectively exposing parts of the user's browsing history.

To address this, Google has implemented a technique known as :visited link partitioning. In a recent post on the Chrome Developers Blog, the company explained that the browser will now partition visited link history on a per-site basis. This means a link visited on one website will no longer be marked as visited on a different domain, preventing cross-site detection through CSS styling. According to Google, this change significantly improves user privacy and prevents sites from identifying previously visited URLs using old exploit techniques.
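The effect of partitioning can be sketched with a toy model. Nothing below is Chrome's actual implementation, and the real partition key is reportedly finer-grained than this (the Chrome Developers Blog post describes keying on the link URL, the top-level site, and the frame origin); the sketch uses just a (top-level site, link URL) pair to show why a per-site key blocks cross-site detection.

```python
class VisitedLinkStore:
    """Toy model contrasting global vs. per-site visited-link history."""

    def __init__(self, partitioned: bool):
        self.partitioned = partitioned
        self.visited: set = set()

    def _key(self, top_level_site: str, link_url: str):
        # Partitioned mode keys history on the site where the visit
        # happened; legacy mode uses the bare URL, shared everywhere.
        return (top_level_site, link_url) if self.partitioned else link_url

    def record_visit(self, top_level_site: str, link_url: str) -> None:
        self.visited.add(self._key(top_level_site, link_url))

    def is_styled_visited(self, top_level_site: str, link_url: str) -> bool:
        return self._key(top_level_site, link_url) in self.visited


# Old behavior: a visit made on site A is observable from a snooping site B.
legacy = VisitedLinkStore(partitioned=False)
legacy.record_visit("site-a.example", "https://news.example/page")
leaks = legacy.is_styled_visited("evil.example", "https://news.example/page")  # True

# New behavior: the same check from another site comes back unvisited,
# while the original site still sees its own link as visited.
part = VisitedLinkStore(partitioned=True)
part.record_visit("site-a.example", "https://news.example/page")
still_leaks = part.is_styled_visited("evil.example", "https://news.example/page")  # False
```

The design choice is the same one behind partitioned cookies and caches: state created in the context of one top-level site is simply invisible from any other, so there is nothing left for a cross-site probe to measure.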

Interestingly, although the bug was only officially acknowledged in 2022, the underlying issue dates back nearly 23 years, making it one of the oldest privacy flaws to persist in modern web browsers. By partitioning visited link data, Google Chrome is catching up with privacy measures that have become more common in other browsers. This update marks a crucial step forward in Chrome’s ongoing efforts to enhance user privacy and security, especially as users become increasingly aware of how their data is tracked online.

Google Chrome for iOS to Introduce New ‘Search Screen with Google Lens’ Feature

Google has announced an exciting update for Chrome and the Google app on iOS, introducing a new visual lookup feature that integrates Google Lens. This enhancement, unveiled on Wednesday, allows users to perform visual searches directly from their devices, without needing to leave the browser or take screenshots. The feature uses artificial intelligence to identify objects on the screen, translate text, or even recognize music playing in the background. Google emphasized that this new tool will also offer AI-generated overviews, providing more in-depth results based on the visual search.

The new “Search Screen with Google Lens” feature will work seamlessly across all web pages in Google Chrome for iOS. By simply tapping, highlighting, or drawing around objects on a page, users can instantly activate the visual lookup. This eliminates the hassle of taking a screenshot and opening the Google Lens app separately. Instead, everything can now be done directly within the browser, making searches quicker and more efficient. Google believes that this integration will make browsing more interactive and intuitive, enhancing the user experience.

Google Lens has been a valuable tool for millions of users, with over 20 billion visual searches conducted each month. The company is expanding this functionality by integrating it into its iOS apps, starting with Chrome. With this update, users will have the ability to perform a wide range of tasks with just a tap. Whether identifying a landmark, translating foreign text, or recognizing an object, this feature aims to simplify and enhance how users interact with the web.

Additionally, Google promises that the visual search tool will not only help identify items but will also offer more detailed AI-driven overviews for a deeper understanding of the objects in question. As this feature rolls out, it represents a significant step forward in merging AI technology with everyday tasks, providing users with more efficient and powerful search capabilities.