Articles

Gemini Advanced and NotebookLM Plus Now Free for U.S. College Students Through Spring 2026, with 2TB Cloud Storage Included

Gemini Advanced, the AI service bundled with the Google One AI Premium plan, is now available for free to U.S. college students. Under a special promotion, students can claim up to 15 months of free access to Gemini Advanced, running through spring 2026. The move comes as Google looks to compete with OpenAI’s recent initiative offering U.S. students two months of free ChatGPT Plus. Along with Gemini Advanced, students also gain 2TB of cloud storage for Google products on their personal accounts, alongside other Gemini AI features.

The offer is currently limited to students enrolled in U.S. colleges. According to details shared on the Google Gemini website, students can claim the promotion until June 30, letting them experience a wide array of AI tools without the usual cost. Students who have already subscribed to the Google One AI Premium plan will need to cancel their subscription and wait until the next billing cycle before claiming the offer. To verify eligibility, students must sign up with an email address ending in “.edu,” the domain U.S. colleges issue to their students.

Once students claim the offer, they will receive access to Gemini Advanced (featuring Gemini 2.5 Pro), NotebookLM Plus (for research), Whisk (for image and animation generation), and Veo 2 (for video generation). In addition to these AI tools, students will also be able to use Gemini’s features across Google Workspace apps, enhancing productivity and creativity in their academic work.

The promotion also includes a generous 2TB of cloud storage, activated on a student’s personal Google account once they sign up and verify eligibility. These benefits remain available until spring 2026, giving students roughly a year of access to cutting-edge AI tools and ample storage for their academic needs.

Google Expands Gemini Live with Camera and Screen Sharing to All Android Devices

Google has officially expanded Gemini Live’s camera and screen-sharing features to all compatible Android devices. Initially introduced last week for select models such as the Google Pixel 9 and Samsung Galaxy S25 series, the functionality is now available on any Android device that supports the Gemini app. Access still requires a Gemini Advanced subscription, however, so the features are not free for all users.

The expansion was announced via the official Google Gemini app account on X (formerly Twitter), where the company noted that the Gemini Live features had received positive feedback from users. Google emphasized that the rollout is gradual and will eventually reach every device capable of running the Gemini app.

The Gemini Live features, including real-time camera assistance and screen sharing, were first previewed at Google I/O last year. After nearly a year of development, the features were shown again at the 2025 Mobile World Congress (MWC), where they garnered attention for their advanced capabilities. Developed by Google DeepMind as part of Project Astra, these tools enable the Gemini AI chatbot to provide live, contextual support through a user’s device camera feed or screen capture, allowing for more dynamic and interactive assistance.

These upgrades mark a significant step in Google’s push to enhance its AI offerings. By integrating real-time visual and screen-based interactions, Gemini Live aims to revolutionize how users interact with AI, providing hands-on, personalized help directly on their mobile devices. As the rollout continues, more Android users will be able to explore how these cutting-edge features can improve their experience with the Gemini platform.

OpenAI Launches o3 and o4-mini AI Models With Enhanced Visual Reasoning

OpenAI has unveiled two new AI models, o3 and o4-mini, designed to push the boundaries of machine reasoning and visual understanding. The models succeed the earlier o1 and o3-mini versions and are available to paid ChatGPT users. Highlighted for their visible chain-of-thought (CoT) capabilities, the new models are built to process complex queries involving both text and visual inputs. Their release follows closely on the heels of the GPT-4.1 model series, marking a busy week for the San Francisco-based AI research company.

Announced via a post on X (formerly Twitter), OpenAI described o3 and o4-mini as its “smartest and most capable” models to date. One standout feature is enhanced visual reasoning: the ability to interpret and draw inferences from images. This advancement allows the models to extract detailed context, understand spatial relationships, and interpret ambiguous visual data more effectively than their predecessors.
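For readers who want to try the visual-reasoning capability themselves, here is a minimal sketch of sending an image alongside a text question through the OpenAI Python SDK’s Chat Completions API. The model identifier “o3” and the example image URL are assumptions; availability depends on your account tier.

```python
# A minimal sketch of querying a reasoning model with an image via the
# OpenAI Python SDK (Chat Completions). The model name "o3" and the
# image URL are assumptions; adjust to what your account exposes.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="o3",  # assumed identifier for the new reasoning model
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "What does the handwritten note in this photo say?"},
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/note.jpg"}},
            ],
        }
    ],
)

print(response.choices[0].message.content)
```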

OpenAI also revealed that these are the first models capable of autonomously using all the tools integrated into ChatGPT, such as Python coding, web browsing, file analysis, and image generation. This multi-tool synergy enables the models to handle more dynamic tasks, such as manipulating images (cropping, zooming, flipping), running analytical scripts, or retrieving information even from flawed or low-quality visuals. The potential applications range from reading difficult handwriting to identifying obscure details in images.
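To make the image-manipulation step concrete, the sketch below shows the kinds of transformations the article mentions (cropping, zooming in on a region, flipping), implemented here with Pillow. This is purely illustrative and not OpenAI’s internal tooling; the file names are hypothetical.

```python
# Illustrative only: the crop/zoom/flip operations described above,
# implemented with Pillow. File names are hypothetical.
from PIL import Image

img = Image.open("scan.jpg")  # hypothetical input image

# Crop to a region of interest: (left, upper, right, lower) in pixels.
region = img.crop((100, 100, 400, 300))

# "Zoom" by upscaling the cropped region 2x with a high-quality filter.
zoomed = region.resize(
    (region.width * 2, region.height * 2),
    Image.Resampling.LANCZOS,
)

# Flip horizontally, e.g. to recover mirrored text.
flipped = zoomed.transpose(Image.Transpose.FLIP_LEFT_RIGHT)

flipped.save("enhanced.jpg")
```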

In terms of performance, OpenAI claims that both o3 and o4-mini outperform previous versions, including GPT-4o and o1, on benchmarks like MMMU, MathVista, “VLMs are blind,” and CharXiv. While no comparisons were made with third-party models, these internal benchmarks suggest a notable leap in reasoning and image-based comprehension. As OpenAI continues to iterate, these releases underscore its ongoing focus on building increasingly versatile and intelligent AI systems.