Articles

Google Gemini Set to Introduce Easy Reply Selection and Sharing Capabilities

Google is reportedly working on a new feature to improve the user experience with its AI chatbot, Gemini, by making it easier to select and share parts of the generated responses. Currently, users face a multi-step process to copy or share specific portions of text, which can be cumbersome, especially when dealing with longer replies. The upcoming update aims to simplify this by enabling users to directly long-press and drag to select any part of the text within the chat interface, allowing for quicker sharing across other apps.

According to a report from Android Authority, this quality-of-life improvement has been spotted in the latest beta version of the Google app (version 16.20.48.sa.arm64). While the feature is not yet active or available for public testing, its presence in the beta code indicates that Google is actively developing it. Once implemented, it will allow users to bypass the current tedious workflow that involves navigating through multiple menus just to select text within Gemini’s responses.

At present, copying text from Gemini’s mobile app can be frustrating if you want to share only a part of the response. Although there is a “copy” button for the entire answer, selecting a specific segment requires a more complicated method: users must long-press or tap a three-dot menu, then choose a separate “Select text” option, which opens a new screen where the text can be highlighted and copied. This indirect approach interrupts the flow and can be particularly inconvenient when you need to extract several pieces of information.

The upcoming feature is expected to let users highlight text directly on the chat screen, cutting down on unnecessary taps and streamlining the process. However, some limitations may remain: in bulleted lists, only one bullet point may be selectable at a time, which could complicate sharing multi-point responses. Even so, the update would represent a significant step toward making Gemini’s interface more intuitive and user-friendly.

Sergey Brin Breaks Silence at Google I/O 2025, Shares Why He Came Back

At the Google I/O 2025 developer conference, attendees were treated to a major surprise on day one: the unexpected appearance of Google co-founder Sergey Brin. Scheduled as a fireside chat between DeepMind CEO Demis Hassabis and moderator Alex Kantrowitz, the session quickly turned into something far more notable when Brin joined them on stage. The conversation centered on artificial intelligence, highlighting Google’s latest Gemini tools, the capabilities of its newest AI models, and a bold look toward the future of artificial general intelligence (AGI). Brin also used the opportunity to share why he returned to Google after years of stepping away from day-to-day operations.

Brin’s reentry into the spotlight appeared to be driven by a renewed sense of purpose. He expressed his excitement about the progress in AI and the potential for meaningful breakthroughs that could reshape technology—and even society. Speaking candidly, Brin acknowledged that developments like Gemini represent a pivotal shift in computing, and he believes his presence can help steer Google toward achieving AGI responsibly and effectively. “This is the most interesting and important challenge I’ve seen in decades,” he remarked.

Throughout the discussion, Demis Hassabis emphasized the distinction between current AI models and true AGI. According to Hassabis, AGI is not just about performing tasks—it’s about replicating the broad cognitive flexibility of the human brain. He explained that while today’s models are capable of impressive feats, they still fall short of the consistency, reasoning, and creativity that define general intelligence. Hassabis pointed to the need for breakthroughs in world modeling and logical reasoning before AGI becomes a reality, though he remains optimistic that those breakthroughs are within reach.

When pressed on a timeline for AGI, the panelists offered slightly different forecasts. Brin confidently predicted that AGI would arrive before 2030, aligning with Google’s ambitions for its Gemini platform. Hassabis, slightly more cautious, estimated it might emerge just after that milestone. Regardless of the exact date, both leaders agreed that AGI is no longer a distant dream but a near-future goal—one that Brin is now personally invested in helping realize.

OpenAI’s o3 Model Aids Discovery of Critical Zero-Day Flaw in Linux Kernel SMB Stack

A cybersecurity researcher recently leveraged OpenAI’s o3 artificial intelligence (AI) model to uncover a critical zero-day vulnerability in the Linux kernel’s Server Message Block (SMB) implementation, known as ksmbd. This previously unknown security flaw, now tracked as CVE-2025-37899, involved complex interactions between multiple users or connections, making it particularly difficult to detect through traditional methods. Fortunately, a patch addressing the vulnerability has already been released to protect affected systems.

The discovery marks a significant milestone in the use of AI for cybersecurity, as such models are seldom used to find zero-day bugs: flaws unknown to the software’s maintainers, and therefore unpatched, at the time of discovery. While manual code audits remain the predominant approach for finding vulnerabilities, they can be painstaking and time-consuming when dealing with massive codebases. Researcher Sean Heelan explained in a detailed blog post how the o3 model accelerated the identification process, demonstrating AI’s emerging role as a powerful aid in vulnerability research.

Interestingly, Heelan initially employed the AI to examine a different security issue, CVE-2025-37778, a Kerberos authentication vulnerability categorized as a “use-after-free” bug. This type of flaw occurs when a system frees a block of memory but code elsewhere continues to reference it, potentially causing crashes or exploitable conditions. While Heelan was testing the model on this bug, it unexpectedly flagged the SMB flaw in about eight out of 100 runs, underscoring the AI’s potential to uncover hidden vulnerabilities beyond its primary task.

This breakthrough with OpenAI’s o3 model highlights the growing synergy between artificial intelligence and cybersecurity research. As AI tools become more sophisticated, they offer promising avenues for automating complex code analysis and enhancing the detection of elusive security threats. The Linux SMB vulnerability case exemplifies how AI can augment human expertise, making systems safer in an era of increasingly sophisticated cyberattacks.