Articles

Gemini 2.5 Pro Enters Public Preview as Google Boosts AI Studio Rate Limits

Google Expands Access to Gemini 2.5 Pro with Public Preview and New Pricing

Google has officially transitioned its Gemini 2.5 Pro AI model from experimental preview to public preview, allowing broader access for developers. Initially launched last month with limited rate caps, the advanced language model is now available with increased usage limits via the Gemini API and Google AI Studio. This shift opens the door for more robust experimentation and development, especially for those looking to integrate high-performance AI into their workflows.

According to Google, early interest in Gemini 2.5 Pro exceeded expectations, prompting the company to expand availability. While the model is now accessible through the Gemini API in AI Studio, it is still pending rollout on Vertex AI. Developers can take advantage of the new access tier immediately, giving them greater flexibility and speed in deploying AI-driven applications.

With expanded access comes clarified pricing. Google has introduced a two-tier pricing structure for Gemini 2.5 Pro. Under the standard tier, which covers prompts of up to 200,000 tokens, the model is priced at $1.25 per million input tokens and $10 per million output tokens. Input tokens cover all forms of content, including text, images, and audio, while output tokens cover both the model’s reasoning and its generated response.

For prompts that exceed the 200,000-token threshold, higher-tier pricing kicks in at $2.50 per million input tokens and $15 per million output tokens. Meanwhile, Google continues to offer the experimental version of Gemini 2.5 Pro with limited access at no cost. Emphasizing affordability, Google claims its rates are highly competitive — especially when compared to rivals like Anthropic’s Claude 3.7 Sonnet, which charges $3 per million input tokens and $15 per million output tokens.
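To make the tier arithmetic concrete, here is a minimal sketch of a per-request cost estimate. It assumes, as the article implies, that the tier is determined by whether the prompt itself exceeds 200,000 tokens; the function name and example token counts are illustrative, not part of any official SDK.

```python
# Hypothetical cost estimator for the two-tier Gemini 2.5 Pro pricing
# described above. Rates are in USD per one million tokens.

TIER_THRESHOLD = 200_000  # prompt-size cutoff between the two tiers

def gemini_25_pro_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the USD cost of a single request under the published rates."""
    if input_tokens <= TIER_THRESHOLD:
        input_rate, output_rate = 1.25, 10.00   # standard tier
    else:
        input_rate, output_rate = 2.50, 15.00   # higher tier
    return (input_tokens * input_rate + output_tokens * output_rate) / 1_000_000

# A 10,000-token prompt with a 2,000-token response stays in the standard tier:
# 10,000 x $1.25/M + 2,000 x $10/M = $0.0125 + $0.02 = $0.0325
print(f"${gemini_25_pro_cost(10_000, 2_000):.4f}")  # → $0.0325
```

Note how output tokens dominate the bill at realistic ratios: at the standard tier, each output token costs eight times as much as an input token.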

Microsoft Enhances Copilot AI With Memory, Podcast Creation, and Agent-Like Abilities

Microsoft has unveiled a major update to its Copilot AI, introducing a suite of new features designed to make interactions more personalized, intelligent, and functional. These enhancements aim to bring Copilot closer to being a truly versatile assistant by enabling it to remember user preferences, create podcasts, and perform more complex tasks online. Previously limited to the web version, many of these features are now being rolled out across mobile devices and Windows desktop apps, broadening their accessibility.

One of the most significant additions is Copilot’s new memory capability. This feature allows the AI to retain important user-specific details like favorite foods, birthdays of family members, and personal interests. By recalling this information, Copilot can offer more contextually relevant suggestions and proactive reminders tailored to each individual. Microsoft emphasizes that users retain full control over this memory function — they can view, modify, or completely disable it at any time, ensuring privacy and comfort remain a priority.

In addition to memory, Microsoft has also introduced agentic capabilities to Copilot, giving it the power to independently complete certain web-based tasks on behalf of users. This means it can now perform multi-step actions like booking appointments, conducting in-depth research, or even completing shopping tasks — all with minimal user input. This is part of Microsoft’s broader effort to make AI more action-oriented and capable of handling real-world tasks with efficiency and minimal supervision.

Other features being rolled out include the expansion of Copilot Vision, which enhances the AI’s ability to understand visual content, and the addition of new tools such as Podcasts, Shopping, and Deep Research. These allow users to create audio content, browse and compare products more intelligently, and dive deep into complex topics with structured assistance. With this comprehensive upgrade, Microsoft is positioning Copilot as a deeply integrated assistant that can evolve with the user’s needs — blurring the lines between a chatbot and a full-fledged digital agent.

Microsoft Unveils AI-Powered Playable Quake II Demo for Gamers

Microsoft has introduced an innovative AI-generated playable demo of Quake II through its Copilot Labs platform. This interactive real-time gameplay experience showcases the potential of artificial intelligence in video game development. The tech giant used its recently released Muse AI models in combination with a new approach called the World and Human Action MaskGIT Model (WHAMM) to create the demo. This method allows for dynamic world generation within the game, producing an experience that adapts in real time to player actions. The demo is available to the public as a research preview, and Microsoft has outlined several limitations of the AI-generated gameplay so that users understand its current boundaries.

In a detailed blog post, Microsoft’s researchers elaborated on how they harnessed the power of AI to build this playable demo. The integration of AI into 2D and 3D game generation has become an exciting frontier for game developers and researchers alike. The challenge lies in training AI models to generate real-time, interactive environments that can also adapt to the mechanics of a human player. This experiment is more than just a game demo—it’s part of a larger effort to test AI’s capabilities in simulating real-world tasks, such as controlling robots and other physical systems, by leveraging its ability to respond to user inputs in a digital environment.

Quake II, the iconic 1997 first-person shooter developed by id Software and published by Microsoft-owned Activision, serves as the perfect testing ground for this AI-driven experiment. The game, known for its fast-paced action and intricate level design, incorporates a variety of mechanics including shooting, jumping, crouching, and environmental destruction, which all needed to be accurately replicated by the AI. The demo available through Copilot Labs allows users to experience one level of Quake II for about two minutes, offering a glimpse into how AI can mimic complex gameplay mechanics.

For players, this demo provides an exciting opportunity to experience Quake II in a way never seen before, using either a controller or keyboard to navigate through the AI-generated world. While the demo is still in its early stages, the potential applications for AI in game development are vast. By demonstrating its ability to create interactive, responsive game environments, Microsoft is pushing the boundaries of both gaming and artificial intelligence, offering a sneak peek into the future of gaming technology.