Articles

Gemini 2.5 Pro Enters Public Preview as Google Boosts AI Studio Rate Limits

Google has officially transitioned its Gemini 2.5 Pro AI model from experimental preview to public preview, broadening access for developers. Initially launched last month with tight rate limits, the advanced language model is now available with increased usage limits via the Gemini API and Google AI Studio. This shift opens the door to more robust experimentation and development, especially for those looking to integrate high-performance AI into their workflows.

According to Google, early interest in Gemini 2.5 Pro exceeded expectations, prompting the company to expand availability. While the model is now accessible through the Gemini API in AI Studio, it is still pending rollout on Vertex AI. Developers can take advantage of the new access tier immediately, giving them greater flexibility and speed in deploying AI-driven applications.

With expanded access comes clarified pricing. Google has introduced a two-tier pricing structure for Gemini 2.5 Pro. Under the standard tier, which applies to prompts of up to 200,000 tokens, the model is priced at $1.25 per million input tokens and $10 per million output tokens. Input tokens cover all forms of content, including text, images, and audio, while output tokens are calculated based on the model's reasoning and response generation.

For prompts that exceed the 200,000-token threshold, higher-tier pricing kicks in at $2.50 per million input tokens and $15 per million output tokens. Meanwhile, Google continues to offer the experimental version of Gemini with limited access at no cost. Emphasizing affordability, Google claims its rates are highly competitive, especially compared to rivals like Anthropic's Claude 3.7 Sonnet, which charges $3 and $15 per million input and output tokens respectively.
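The tiered pricing above is easy to estimate programmatically. The following is a minimal sketch based only on the rates quoted in this article; the function name and structure are illustrative, not part of any Google SDK:

```python
def gemini_25_pro_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the USD cost of a Gemini 2.5 Pro request.

    Per the published rates: prompts of up to 200,000 tokens are billed
    at $1.25 per million input tokens and $10 per million output tokens;
    larger prompts are billed at $2.50 and $15 respectively.
    (Illustrative helper only; check Google's pricing page for current rates.)
    """
    if input_tokens <= 200_000:
        input_rate, output_rate = 1.25, 10.0
    else:
        input_rate, output_rate = 2.50, 15.0
    return (input_tokens * input_rate + output_tokens * output_rate) / 1_000_000


# Example: a 100,000-token prompt with a 5,000-token response
# costs 100,000 * 1.25/1e6 + 5,000 * 10/1e6 = $0.175.
print(f"${gemini_25_pro_cost(100_000, 5_000):.3f}")
```

Note that once a prompt crosses the 200,000-token boundary, the higher rate applies to the whole request, so a 300,000-token prompt with no output costs $0.75 rather than $0.50.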

OpenAI Set to Launch Open-Source AI Model Focused on Reasoning Capabilities

OpenAI is preparing to launch its first open-source artificial intelligence (AI) model with a focus on reasoning. This marks a significant shift for the San Francisco-based AI firm, which has not released an open-source model since GPT-2 back in November 2019. The new model is expected to be unveiled in the coming months, with OpenAI specifically seeking feedback from the developer community to refine the model based on their needs and insights. One of the primary concerns during development is ensuring the model's safety, with OpenAI emphasizing responsible deployment.

The open-source AI space has seen significant growth in recent years, with a variety of players — including Meta, Mistral, Alibaba, and major tech companies like Google and Microsoft — releasing multiple models for public use. OpenAI, however, has largely stayed away from open-source initiatives since the launch of GPT-2, focusing instead on closed, proprietary solutions. Those proprietary models have not been available for download or modification, limiting research and commercial applications.

Earlier this year, OpenAI’s CEO, Sam Altman, addressed the company’s position on open-source AI during an AMA session on Reddit. Altman acknowledged that OpenAI had been “on the wrong side of history” in its approach to open-source releases. He expressed the need to adopt a more open strategy but noted that it wasn’t the company’s top priority at the time. His comments highlighted OpenAI’s awareness of the evolving landscape and its desire to adjust its approach.

With this upcoming open-source release, OpenAI aims to re-enter the competitive landscape of open AI models, focusing on addressing key issues like reasoning capabilities and safety. This move is expected to enhance collaboration within the AI research community and contribute to more transparent and accessible AI development.

Google Unveils Gemma 3 Open-Source AI Models, Optimized to Run on a Single GPU

Google has officially launched the Gemma 3 family of open-source artificial intelligence (AI) models, marking a significant advancement over the previous Gemma 2 series introduced in August 2024. The new models come with enhanced text and visual reasoning capabilities, offering the ability to process and analyze images, text, and short videos. One of the key selling points of the Gemma 3 series is its support for over 35 languages, with the ability to be fine-tuned to support up to 140 languages. This makes it an incredibly versatile tool for developers and organizations looking to integrate AI into multilingual applications. Additionally, these models are optimized to run on a single GPU or Google’s custom Tensor Processing Unit (TPU), making them more accessible and easier to deploy.

The Gemma 3 models are part of Google’s broader initiative to provide small language models (SLMs) that maintain high performance while being resource-efficient. Built using the same underlying technology as Google’s Gemini 2.0 models, Gemma models have already seen impressive uptake, with over 100 million downloads and more than 60,000 variants created by developers. By making these models open-source, Google continues its push to democratize AI, allowing a wide range of developers to leverage the power of advanced AI models without needing extensive computational resources.

In terms of performance, the Gemma 3 series has proven competitive with other industry-leading models. According to Google, it outperforms Meta's Llama-405B, DeepSeek-V3, and OpenAI's o3-mini on the LMArena leaderboard. Available in four sizes — 1B, 4B, 12B, and 27B parameters — the models can be tailored to different use cases, whether text processing or image and video analysis. The Gemma 3 models also come with a context window of 128,000 tokens, enabling them to handle larger inputs efficiently, and they support function calling, allowing developers to integrate agentic capabilities into their applications and software.

Google has emphasized that these models were developed with careful attention to safety and risk management. The company has incorporated internal safety protocols through fine-tuning and benchmark evaluations to ensure that the models function responsibly. Additionally, the Gemma 3 models underwent testing with more capable AI models to ensure that they performed reliably while maintaining a low risk profile. By focusing on both performance and safety, Google aims to provide powerful AI tools that are not only effective but also secure and responsible in their deployment.