Articles

Google Rolls Out Gemini Live with Camera and Screen Sharing to All Android Devices

Google has officially expanded Gemini Live's camera and screen-sharing features to all compatible Android devices. Initially introduced last week on select models such as the Google Pixel 9 and Samsung Galaxy S25 series, the functionality is now available on any Android device that supports the Gemini app. Note, however, that access still requires a Gemini Advanced subscription; the features are not free for all users.

The expansion was announced via the official Google Gemini app account on X (formerly Twitter), where the company said the Gemini Live features had received positive feedback from users. Google emphasized that the rollout is happening gradually and will eventually reach every device capable of running the Gemini app.

The Gemini Live features, including real-time camera assistance and screen sharing, were first previewed at Google I/O last year. After nearly a year of development, the features were shown again at the 2025 Mobile World Congress (MWC), where they garnered attention for their advanced capabilities. Developed by Google DeepMind as part of Project Astra, these tools enable the Gemini AI chatbot to provide live, contextual support through a user’s device camera feed or screen capture, allowing for more dynamic and interactive assistance.

These upgrades mark a significant step in Google’s push to enhance its AI offerings. By integrating real-time visual and screen-based interactions, Gemini Live aims to revolutionize how users interact with AI, providing hands-on, personalized help directly on their mobile devices. As the rollout continues, more Android users will be able to explore how these cutting-edge features can improve their experience with the Gemini platform.

OpenAI Said to Be Developing an AI-Driven Social Media Network

OpenAI is reportedly preparing to launch its own social media platform. The San Francisco-based artificial intelligence company is said to be building AI capabilities into the new app, though specifics of how those features will work remain unclear. The platform is rumored to be positioned as a competitor to Elon Musk's X (formerly Twitter) and the suite of social apps owned by Mark Zuckerberg's Meta. Notably, both X and Meta have recently introduced AI features into their ecosystems, highlighting a growing trend of blending AI with social experiences. The news surfaced just days after OpenAI announced its latest advancements with the GPT-4.1 family of models.

According to a report from The Verge, OpenAI’s social platform could be based heavily on ChatGPT. Sources close to the project suggest that an internal prototype already exists, reportedly emphasizing GPT-4o’s image-generation capabilities. The platform’s design includes a public feed where AI-created images may be displayed, hinting at a highly visual, content-driven experience. While it has been described as similar to X, the integration of generative AI at the core could set OpenAI’s project apart from more traditional social networks.

CEO Sam Altman has reportedly sought external feedback on the early prototype, though major questions remain. It is still unclear whether OpenAI intends to launch a standalone social app or incorporate these features directly into the existing ChatGPT interface. Observers have pointed out similarities to OpenAI’s video generation platform, Sora, which also features a content feed—though Sora lacks a true social element, as creators are not identified. Early indications suggest that OpenAI’s approach might prioritize showcasing AI capabilities in a social context, rather than building a purely human-driven network supplemented by AI, like X or Instagram.

The move into social media would also intensify OpenAI’s ongoing rivalry with X and Meta. Elon Musk, owner of X, has been openly critical of Sam Altman and OpenAI’s shift toward a for-profit structure. Musk previously filed a lawsuit against the company and even made a bid to acquire it, to which Altman responded sharply, joking that OpenAI would instead offer to buy Twitter for $9.74 billion. With tensions already high, OpenAI’s entry into the social networking space could further escalate competition among tech giants racing to dominate the future of AI-powered digital experiences.

Gemini to Gain New Audio Overview and Canvas Features

Google has announced the rollout of two new artificial intelligence (AI) features for Gemini, enhancing the platform for both free users and Gemini Advanced subscribers. The first, called Canvas, offers an interactive space where users can collaborate directly with the AI on tasks such as document creation and coding, generating drafts, making edits, and refining their work with AI assistance. The second, Audio Overview, was previously exclusive to Google's NotebookLM and is now making its way to Gemini. It lets users transform documents, slides, and Deep Research reports into an engaging, podcast-style audio discussion, making complex content easier to digest.

Both features are being introduced as part of Gemini’s ongoing evolution, following the introduction of Deep Research—a tool designed to generate detailed reports on complex topics—and exclusive lockscreen widgets for iOS users. The addition of Canvas and Audio Overview comes as part of a broader strategy to enrich user experience by offering new, intuitive ways to interact with AI. These new functionalities will be available across both the web and mobile versions of Gemini, allowing users to access them seamlessly across devices.

Canvas allows users to add documents or lines of code into a dedicated workspace within the Gemini interface. By clicking on the newly introduced Canvas button next to the Deep Research option, users can start working on a project where the AI generates a first draft based on the user’s prompt. From there, users can collaborate with the AI, editing the draft and refining the output to their liking. This feature is designed to facilitate a more hands-on, creative process where human expertise and AI capabilities complement each other, making it ideal for projects that require a mix of creativity and technical input.

Audio Overview, meanwhile, offers a new way to engage with written content. The feature takes documents, presentations, and reports and transforms them into a podcast-like audio experience: users input a document or presentation, and Gemini generates an engaging, narrated summary that can be absorbed in an auditory format. This is especially useful for users on the go who prefer listening over reading, offering a more flexible way to consume information. With these additions, Gemini is further positioning itself as a capable AI tool for both personal and professional use.