
Google I/O 2025 Kicks Off Today: Here’s How to Tune In to the Keynote Live Stream

Google I/O 2025 is just hours away, and excitement is building as the tech giant prepares to unveil a range of new software features and innovations across its ecosystem, including updates to Android 16 and Wear OS 6. Recent teasers point to a strong emphasis on artificial intelligence (AI), suggesting that AI-powered enhancements will take center stage at this year’s annual developer conference. Google is also expected to reveal more about Android XR, its upcoming operating system designed for extended reality (XR) devices, signaling a push into the immersive technology space.

The event’s schedule includes a highly anticipated keynote address by Google CEO Sundar Pichai, set to take place at 10 a.m. Pacific Time (10:30 p.m. IST) at the Shoreline Amphitheatre in Mountain View, California. This opening keynote will lay out Google’s vision and showcase the most important new features and products. For developers and tech enthusiasts who want to dive deeper, a more technical developer keynote will follow later, starting at 2 a.m. IST, providing detailed insights into Google’s software and platform updates.

Throughout the first day of Google I/O 2025, attendees and viewers can look forward to sessions focused on AI, Android, web technologies, and cloud computing. These sessions will begin streaming live at 3:30 p.m. Pacific Time (4 a.m. IST), offering a comprehensive look at the tools and innovations that Google plans to make available to developers and users alike. If you miss any of the live sessions, recorded replays will be available afterward, ensuring that no important announcements or technical details are missed.

For those eager to watch the event live, Google I/O 2025 will be streamed on the official Google for Developers YouTube channel, so you can tune in from a web browser or the YouTube app on any device, making it easy for a global audience to follow the announcements and explore the future direction of Google’s software and services. Day two of the conference on May 21 will continue with more in-depth sessions, streamed live and available for replay, allowing developers and fans to keep pace with all the latest updates.

Google Introduces Enhanced AI and Accessibility Tools for Android and Chrome Users

Google has unveiled a range of new artificial intelligence (AI) and accessibility enhancements for Android devices and the Chrome browser, timed to coincide with Global Accessibility Awareness Day, which falls on the third Thursday of May each year. These updates are designed to make digital experiences more inclusive, particularly for users with vision and hearing challenges. The tech giant has integrated advanced Gemini AI capabilities into existing features and expanded access to previously US-only tools, while also introducing new functionalities to Chrome aimed at improving accessibility for those with low vision.

On the Android front, Google is enhancing its TalkBack screen reader by broadening the Gemini-powered alt text description feature. Previously, this feature allowed TalkBack to generate detailed descriptions of images lacking alt text, but now users can interact more deeply by asking questions about the images or even their overall screen content. This conversational ability brings a new level of interactivity and independence for users relying on screen readers. Additionally, Google is expanding its Expressive Captions feature—an AI-powered enhancement that enriches live captions with emotional and contextual cues, such as tone and volume—which was previously limited to the US.

Expressive Captions helps convey the mood and nuances behind speech in subtitles. For example, instead of a simple “no,” the captions might display “noooooo” to indicate emphasis or frustration, or show excitement with phrases like “amaaazing shot” during a sports broadcast. This feature is now rolling out in English to users in Australia, Canada, the UK, and the US on devices running Android 15 or later, aiming to make captions feel more natural and expressive.
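The kind of stylization described above can be pictured as a small text transform driven by a vocal-emphasis signal. The sketch below is a toy illustration under assumed rules (an intensity score from 0 to 1 and a vowel-stretching heuristic); it is not Google’s actual implementation.

```python
# Toy sketch of an "expressive captions"-style renderer: stretch a word's
# last vowel in proportion to detected vocal emphasis. The intensity score
# and the stretching rule are illustrative assumptions only.

def stylize_word(word: str, intensity: float) -> str:
    """Elongate the last vowel of a word based on vocal intensity (0.0-1.0)."""
    if intensity < 0.5:
        return word  # neutral speech: leave the caption unchanged
    extra = int((intensity - 0.5) * 10)  # more emphasis -> more repeats
    # Scan backwards for the last vowel and repeat it.
    for i in range(len(word) - 1, -1, -1):
        if word[i].lower() in "aeiou":
            return word[:i] + word[i] * (1 + extra) + word[i + 1:]
    return word

print(stylize_word("no", 0.9))       # an emphatic, drawn-out "no"
print(stylize_word("amazing", 0.8))  # e.g. during a sports broadcast
print(stylize_word("ok", 0.2))       # neutral delivery stays plain
```

A real system would of course derive the emphasis signal from the audio itself (tone, volume, duration), but the caption-side transformation is conceptually this simple.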

The Chrome browser is also receiving significant accessibility upgrades. One notable addition is optical character recognition (OCR) support for scanned PDF documents. Until now, screen readers were unable to interpret text within scanned PDFs, limiting access for users with visual impairments. With the new OCR feature, Chrome can now recognize, highlight, copy, and search text in scanned PDFs, while enabling screen readers to vocalize the content. These improvements mark an important step toward making web content more accessible and usable for everyone.
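To see why OCR makes a scanned PDF searchable and readable aloud, it helps to picture the text layer OCR produces: recognized words with positions, grouped into reading-order lines. The sketch below is a simplified toy model under assumed data structures, not Chrome’s internals.

```python
# Toy model of an OCR text layer for a scanned page: each recognized word
# carries a bounding-box position, and grouping words by row yields lines
# that a screen reader can vocalize and a "find" feature can search.
# The OcrWord structure and row-grouping rule are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class OcrWord:
    text: str
    x: int  # left edge, in page pixels
    y: int  # top edge, in page pixels

def build_text_layer(words: list[OcrWord], row_tolerance: int = 5) -> list[str]:
    """Group OCR words into reading-order lines by vertical position."""
    rows: dict[int, list[OcrWord]] = {}
    for w in sorted(words, key=lambda w: (w.y, w.x)):
        # Snap each word to an existing row if it is vertically close enough.
        key = next((r for r in rows if abs(r - w.y) <= row_tolerance), w.y)
        rows.setdefault(key, []).append(w)
    return [" ".join(w.text for w in sorted(ws, key=lambda w: w.x))
            for _, ws in sorted(rows.items())]

# Simulated OCR output for a two-line scanned page.
scan = [OcrWord("Hello", 10, 100), OcrWord("world", 60, 102),
        OcrWord("Second", 10, 130), OcrWord("line", 80, 131)]
layer = build_text_layer(scan)
print(layer)                         # lines in reading order
print("world" in " ".join(layer))    # the scan's text is now searchable
```

Once such a layer exists, highlighting, copying, and search all become ordinary text operations, which is why the same OCR pass unlocks all of the capabilities listed above.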

Google Rolls Out Gemini Live with Camera and Screen Sharing to All Android Devices

Google has officially expanded the Gemini Live features, including Camera and Screen Share, to all compatible Android devices. Initially introduced last week for select models like the Google Pixel 9 and Samsung Galaxy S25 series, this new functionality is now available for any Android device that supports the Gemini app. However, it’s important to note that access to these features still requires a Gemini Advanced subscription, meaning they are not available for free to all users.

The expansion was announced via the official Google Gemini app account on X (formerly Twitter), where the company shared that the Gemini Live features had received positive feedback from users. Google emphasized that the rollout is happening gradually and will eventually reach all devices capable of running the Gemini app, giving more users access to the new tools.

The Gemini Live features, including real-time camera assistance and screen sharing, were first previewed at Google I/O last year. After nearly a year of development, the features were shown again at the 2025 Mobile World Congress (MWC), where they garnered attention for their advanced capabilities. Developed by Google DeepMind as part of Project Astra, these tools enable the Gemini AI chatbot to provide live, contextual support through a user’s device camera feed or screen capture, allowing for more dynamic and interactive assistance.

These upgrades mark a significant step in Google’s push to enhance its AI offerings. By integrating real-time visual and screen-based interactions, Gemini Live aims to revolutionize how users interact with AI, providing hands-on, personalized help directly on their mobile devices. As the rollout continues, more Android users will be able to explore how these cutting-edge features can improve their experience with the Gemini platform.