
Google Gemini Set to Introduce Easy Reply Selection and Sharing Capabilities

Google is reportedly working on a new feature to improve the user experience with its AI chatbot, Gemini, by making it easier to select and share parts of the generated responses. Currently, users face a multi-step process to copy or share specific portions of text, which can be cumbersome, especially when dealing with longer replies. The upcoming update aims to simplify this by enabling users to directly long-press and drag to select any part of the text within the chat interface, allowing for quicker sharing across other apps.

According to a report from Android Authority, this quality-of-life improvement has been spotted in the latest beta version of the Google app (version 16.20.48.sa.arm64). While the feature is not yet active or available for public testing, its presence in the beta code indicates that Google is actively developing it. Once implemented, it will allow users to bypass the current tedious workflow that involves navigating through multiple menus just to select text within Gemini’s responses.

At present, copying text from Gemini’s mobile app can be frustrating if you want to share only part of a response. Although there is a “copy” button for the entire answer, selecting a specific segment requires a more roundabout method: users must long-press the reply or tap its three-dot menu, choose a separate “Select text” option, and then highlight and copy the text on the new screen that opens. This indirect approach interrupts the flow and is particularly inconvenient when you need to extract several pieces of information from one reply.
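For context, direct in-place selection is a stock Android capability rather than anything Gemini-specific: marking a TextView as selectable lets users long-press and drag over its text without jumping to a separate screen. The fragment below is purely illustrative (the view id is hypothetical and this is not Google’s actual layout); it shows the standard `android:textIsSelectable` attribute that enables this behavior:

```xml
<!-- Illustrative sketch only, not Gemini's real layout.
     textIsSelectable="true" enables long-press-and-drag selection
     directly on the text, with no separate "Select text" screen. -->
<TextView
    android:id="@+id/chat_reply"
    android:layout_width="match_parent"
    android:layout_height="wrap_content"
    android:textIsSelectable="true" />
```

The same effect can be toggled at runtime with `TextView.setTextIsSelectable(true)`; the reported Gemini change would presumably wire something along these lines into the chat list.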

The upcoming feature is expected to let users highlight text directly on the chat screen, cutting down on unnecessary taps and streamlining the process. Some limitations may remain, however, such as being able to select only one bullet point at a time in bulleted lists, which could complicate sharing multi-point responses. Even so, the update would represent a significant step toward making Gemini’s interface more intuitive and user-friendly.

Sergey Brin Breaks Silence at Google I/O 2025, Shares Why He Came Back

At the Google I/O 2025 developer conference, attendees were treated to a major surprise on day one: the unexpected appearance of Google Co-Founder Sergey Brin. Scheduled as a fireside chat between DeepMind CEO Demis Hassabis and moderator Alex Kantrowitz, the session quickly turned into something far more notable when Brin joined the stage. The conversation centered around artificial intelligence, highlighting Google’s latest Gemini tools, the capabilities of its newest AI models, and a bold look toward the future of artificial general intelligence (AGI). Brin also used the opportunity to share why he returned to Google after years of stepping away from day-to-day operations.

Brin’s reentry into the spotlight appeared to be driven by a renewed sense of purpose. He expressed his excitement about the progress in AI and the potential for meaningful breakthroughs that could reshape technology—and even society. Speaking candidly, Brin acknowledged that developments like Gemini represent a pivotal shift in computing, and he believes his presence can help steer Google toward achieving AGI responsibly and effectively. “This is the most interesting and important challenge I’ve seen in decades,” he remarked.

Throughout the discussion, Demis Hassabis emphasized the distinction between current AI models and true AGI. According to Hassabis, AGI is not just about performing tasks—it’s about replicating the broad cognitive flexibility of the human brain. He explained that while today’s models are capable of impressive feats, they still fall short of the consistency, reasoning, and creativity that define general intelligence. Hassabis pointed to the need for breakthroughs in world modeling and logical reasoning before AGI becomes a reality, though he remains optimistic that those breakthroughs are within reach.

When pressed on a timeline for AGI, the panelists offered slightly different forecasts. Brin confidently predicted that AGI would arrive before 2030, aligning with Google’s ambitions for its Gemini platform. Hassabis, slightly more cautious, estimated it might emerge just after that milestone. Regardless of the exact date, both leaders agreed that AGI is no longer a distant dream but a near-future goal—one that Brin is now personally invested in helping realize.

Google and TSMC Ink Multi-Year Agreement to Produce Tensor Processors for Upcoming Pixel Devices: Report

Google is gearing up to launch its Pixel 10 series later this year, featuring the next-generation Tensor G5 chipset. This new SoC is reportedly being developed in close collaboration with Taiwan Semiconductor Manufacturing Company (TSMC), continuing the strong partnership between the two firms. According to recent reports from China, Google intends to maintain this collaboration for several years, potentially through the Pixel 14 series, which could arrive as late as 2029.

The long-term partnership between Google and TSMC reflects a deepening relationship with Taiwan’s semiconductor industry. Sources indicate that Google executives recently visited TSMC’s headquarters in Taiwan to discuss expanding their cooperation. The discussions reportedly confirmed a multi-year agreement, ensuring that TSMC will remain the primary manufacturer of Google’s custom Tensor chips for at least the next three to five years.

Beyond smartphones, Google’s collaboration with Taiwanese technology firms is expected to grow into other areas, including cloud-based TPU chips, integrated circuit (IC) design, server technology, and advanced liquid cooling solutions. The Pixel 10 series, expected to debut in late 2025, will showcase the first Tensor G5 chip made on TSMC’s advanced 3nm manufacturing process. The lineup is rumored to consist of four models: the Pixel 10, Pixel 10 Pro, Pixel 10 Pro XL, and Pixel 10 Pro Fold.

Leaks suggest that the Tensor G5 will bring significant improvements over its predecessor, the Tensor G4. Enhancements may include an always-on compute (AoC) audio processor, Google’s Emerald Hill memory co-processor, a custom DSP called Google GXP, and an EdgeTPU for AI acceleration. The chip is expected to use Arm Cortex CPU cores and feature an Imagination Technologies DXT GPU, promising better performance and efficiency for Google’s upcoming flagship devices.