Oppo Unveils Agentic AI Initiative, Introduces New System-Wide AI Search Feature

Oppo has revealed its ambitious plans for the future of artificial intelligence (AI) with the launch of its Agentic AI initiative at the Google Cloud Next 2025 event. The company is focused on advancing AI capabilities to create deeply personalized and intelligent experiences for its users. This new initiative aims to combine in-house AI development with a strategic collaboration with Google to introduce next-generation AI features that will enhance both hardware and software integration. Oppo is setting the stage for a future where AI agents, or agentic AI, take a central role in how users interact with their devices.

The core idea behind agentic AI is to create a system where a centralized AI model autonomously manages hardware and software components to perform tasks based on user commands. This innovative approach promises to make interactions with Oppo devices smarter, more intuitive, and highly personalized. Oppo’s goal is to ensure that its AI experiences are continuously refined, and the company is leveraging partnerships with industry leaders like Google Cloud to achieve this. Jason Liao, President of Oppo Research Institute, highlighted the company’s commitment to enhancing AI capabilities, signaling a new era for Oppo users in terms of seamless and intelligent device usage.

At the Google Cloud Next event, Oppo also unveiled a new feature called AI Search. This system-wide AI tool will allow users to conduct multimodal searches across documents stored on their devices using natural language queries. With AI Search, users can quickly find specific information within their files directly from the home screen, streamlining how they interact with their content. This feature represents a step toward making AI a practical, everyday tool, seamlessly integrated into Oppo’s ecosystem. Additionally, Oppo highlighted its existing AI-driven features in areas such as productivity, imaging, and creativity, showcasing the breadth of its AI applications.

As part of this ambitious initiative, Oppo is developing a user knowledge system, which will serve as a central hub for storing and managing user data. This system is designed to tackle the issue of information fragmentation, a common problem with mobile devices, by creating a unified data repository. By leveraging this system, Oppo aims to further enhance the personalization of its AI features, ensuring that users’ experiences with their devices are not only smarter but also more tailored to their individual needs and preferences. With Agentic AI at its core, Oppo is positioning itself at the forefront of AI innovation, offering users more powerful and intuitive tech experiences.

Adobe Unveils Next-Gen Agentic AI Features for Acrobat, Photoshop, and Premiere Pro

Adobe Explores the Future of Creativity with AI-Powered Agents

Adobe is stepping into the next frontier of creative technology by previewing a new generation of AI agents designed to simplify manual tasks and enhance creative workflows. In a recent announcement, the company offered a glimpse into AI-driven features currently in development across key platforms such as Photoshop, Premiere Pro, Acrobat, Adobe Express, and Creative Cloud. These tools are being developed with a clear objective: to free up users from repetitive tasks and enable them to focus on high-value, creative thinking. While these capabilities remain under development and are not yet available to the public, the preview highlights Adobe’s broader vision for integrating “agentic” AI into its software ecosystem.

The concept of AI agents, as Adobe describes it, refers to intelligent systems that can independently carry out tasks by analyzing problems, generating solutions, and interacting with external tools. This goes beyond traditional automation. Adobe’s implementation of these agents focuses on adaptability and specialization. Each AI agent can be given a particular role — whether it’s acting as a research assistant in Acrobat or a creative collaborator in Express. For instance, in Acrobat, users will soon be able to upload multiple documents, prompt the AI with questions, and receive context-aware insights, summaries, and suggestions for further exploration.

One of the most compelling applications is coming to Adobe Express, where the AI agent is being positioned as a “creative partner.” Instead of merely executing commands, the AI will assist users throughout the entire design process. Whether it’s generating a layout, tweaking visuals based on feedback, or handing control back to the user, the agent is intended to collaborate with creators in a fluid, natural way. For businesses, this opens up scalable opportunities: enterprises can feed brand guidelines into the system to generate consistent on-brand materials, while smaller teams or startups can rely on the AI to accelerate design workflows without needing large in-house creative teams.

Photoshop will be among the first Adobe products to get a more tangible AI upgrade. Later this month, the company plans to introduce its first “creative agent” for the platform, paired with a redesigned Actions panel. This AI assistant will be able to suggest edits tailored to the context of an image, offer real-time recommendations, and allow users to apply or reject them instantly. With support for over 1,000 natural language commands, the tool promises to drastically streamline photo editing. Through these initiatives, Adobe is signaling a shift from passive AI features to more proactive, autonomous agents that work alongside users as intelligent co-creators.

Gemini 2.5 Pro Enters Public Preview as Google Boosts AI Studio Rate Limits

Google Expands Access to Gemini 2.5 Pro with Public Preview and New Pricing

Google has officially transitioned its Gemini 2.5 Pro AI model from experimental preview to public preview, allowing broader access for developers. Initially launched last month with strict rate limits, the advanced language model is now available with increased usage limits via the Gemini API and Google AI Studio. This shift opens the door for more robust experimentation and development, especially for those looking to integrate high-performance AI into their workflows.

According to Google, early interest in Gemini 2.5 Pro exceeded expectations, prompting the company to expand availability. While the model is now accessible through the Gemini API in AI Studio, it is still pending rollout on Vertex AI. Developers can take advantage of the new access tier immediately, giving them greater flexibility and speed in deploying AI-driven applications.

With expanded access comes clarified pricing. Google has introduced a two-tier pricing structure for Gemini 2.5 Pro. Under the standard tier, which applies to prompts of up to 200,000 tokens, the model is priced at $1.25 per million input tokens and $10 per million output tokens. Input tokens cover all forms of content, including text, images, and audio, while output tokens are calculated based on the model’s reasoning and response generation.

For developers who exceed the 200,000-token threshold, the higher tier pricing kicks in at $2.50 per million input tokens and $15 per million output tokens. Meanwhile, Google is continuing to offer the experimental version of Gemini with limited access at no cost. Emphasizing affordability, Google claims its rates are highly competitive — especially when compared to rivals like Anthropic’s Claude 3.7 Sonnet, which charges $3 and $15 for input and output tokens respectively.
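To make the two-tier arithmetic concrete, here is a minimal cost-estimation sketch based on the rates quoted above. It assumes the prompt (input) size alone determines which tier a request falls into; how Google bills requests that straddle the threshold is not specified in the announcement, so treat the function as illustrative rather than an official billing formula.

```python
def gemini_25_pro_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the USD cost of one Gemini 2.5 Pro request under the
    two-tier public-preview pricing described above.

    Assumption: the 200,000-token threshold is judged on the prompt
    (input) size, and one tier's rates apply to the whole request.
    """
    if input_tokens <= 200_000:
        in_rate, out_rate = 1.25, 10.00   # standard tier, $ per million tokens
    else:
        in_rate, out_rate = 2.50, 15.00   # higher tier, $ per million tokens
    return (input_tokens * in_rate + output_tokens * out_rate) / 1_000_000


# A 100k-token prompt with a 10k-token response stays in the standard tier:
# 100,000 * $1.25/M + 10,000 * $10/M = $0.125 + $0.10 = $0.225
print(f"${gemini_25_pro_cost(100_000, 10_000):.3f}")

# A 300k-token prompt with the same response crosses into the higher tier:
# 300,000 * $2.50/M + 10,000 * $15/M = $0.75 + $0.15 = $0.90
print(f"${gemini_25_pro_cost(300_000, 10_000):.3f}")
```

By the same arithmetic, the quoted Claude 3.7 Sonnet rates ($3/$15 per million) would put the standard-tier example at $0.45, roughly double the Gemini figure for that workload.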