Google Pixel 10 Series to Feature New ‘Pixel Sense’ Contextual Assistant

The upcoming Google Pixel 10 series is set to introduce Pixel Sense, a contextual AI assistant designed to provide a more personalized and intuitive user experience. Unlike its predecessors, Pixel Sense will lean on on-device processing, relying less on cloud-based data and more on data already stored on the phone. This shift is expected to allow for faster, more secure responses while keeping personal data on the device. With the ability to work seamlessly with various Google apps, Pixel Sense aims to offer users more relevant and timely information.

Pixel Sense is expected to integrate deeply with multiple Google applications, including Google Calendar, Gmail, Chrome, Google Maps, YouTube, and many others. This integration will allow the assistant to offer context-aware suggestions, reminders, and alerts based on the user’s activity across these apps. For example, if a user has an upcoming meeting in Google Calendar, Pixel Sense might prompt them with travel times via Google Maps, or suggest relevant documents from Google Drive. The assistant’s ability to connect with a wide range of apps makes it stand out as a more robust, all-encompassing tool than previous AI assistants.

The new virtual assistant is expected to run on Google’s new Tensor G5 chip, which is set to debut with the Pixel 10 series. The chip will reportedly be manufactured by TSMC and promises improved performance and efficiency. With the Tensor G5 powering Pixel Sense, users can expect faster processing, better battery efficiency, and improved on-device AI capabilities. The combination of new hardware and the Pixel Sense assistant could make the Pixel 10 series a standout in the smartphone market.

At this point, it’s unclear whether Pixel Sense will be exclusive to the Pixel 10 series or will eventually come to older Pixel models. However, its deep integration with Google’s vast ecosystem of apps suggests the assistant could evolve into a major feature across devices, making it a cornerstone of Google’s approach to AI and user experience in the coming years. With the Pixel 10 series set to debut later this year, many are eager to see how Pixel Sense will enhance the overall smartphone experience.

NASA Announces New Missions to Explore and Map the Sun and the Universe

NASA is set to launch two groundbreaking missions aimed at expanding our understanding of space and the universe. Scheduled for March 2, 2025, the PUNCH and SPHEREx spacecraft will be launched aboard a SpaceX Falcon 9 rocket from Vandenberg Space Force Base in California. These missions, designed with separate but complementary scientific goals, will provide valuable insights into solar dynamics and the broader universe. The dual launch, part of NASA’s Launch Services Program, is expected to significantly enhance our knowledge of both solar activity and cosmic phenomena.

The PUNCH mission, short for Polarimeter to Unify the Corona and Heliosphere, will focus on the Sun’s corona and solar wind. This mission is designed to provide a detailed look at the Sun’s outer atmosphere by using four small satellites equipped to capture three-dimensional images. By employing polarized light, PUNCH will track solar events like coronal mass ejections (CMEs), which can affect space weather on Earth. These observations will help scientists understand solar wind dynamics and improve space weather predictions, which are crucial for protecting communication satellites and power grids on Earth.

In contrast, the SPHEREx mission will survey the universe in infrared, aiming to map the entire sky every six months. Unlike observatories such as the James Webb Space Telescope, which capture detailed images of specific regions, SPHEREx is designed to create broad cosmic maps in 102 different wavelengths. This approach will help scientists investigate the history of the universe, the formation of galaxies, and the role of water in planetary systems. Phil Korngut, an instrument scientist on the SPHEREx mission, highlighted that the data gathered will contribute to a deeper understanding of cosmic inflation and the origins of life-sustaining elements in the universe.

Together, these missions will provide valuable complementary data, with PUNCH offering a closer look at our Sun and SPHEREx expanding our understanding of the cosmos. Both missions promise to contribute significantly to the fields of heliophysics and cosmology, offering new insights that could shape future space exploration and deepen our understanding of the universe.

Gemini for iOS Receives Update with Six New Lockscreen Widgets and Control Centre Integration

Gemini for iOS has received a major update that brings several new lockscreen widgets, enhancing user accessibility and convenience. The update, rolled out on Monday with Gemini for iOS version 1.2025.0762303, introduces six new widgets designed to provide quicker access to specific features within the app. With these additions, iPhone users can now interact with the Gemini app directly from their lockscreen without needing to unlock their devices. This update is part of Google’s ongoing effort to improve the functionality and usability of its AI-powered app, providing a more seamless experience for users.

The new lockscreen widgets are diverse, offering a range of features tailored to different needs. One of the most useful additions is the “Type Prompt” widget, which lets users type a query directly from the lockscreen and receive a response without unlocking the phone. Another notable widget is “Talk Live,” which opens Gemini Live for real-time, two-way conversations with the AI assistant; this is the quickest way to reach Gemini Live, replacing what was previously a multi-step process. Additionally, the “Open Mic” widget opens the microphone directly for voice commands, making hands-free interaction with the app easier.

The other three widgets, “Use Camera,” “Share Image,” and “Share File,” are particularly helpful for users who want to work with visual content. The “Use Camera” widget opens the camera instantly, allowing users to capture an image and send it to Gemini for analysis, while the “Share Image” and “Share File” widgets let users quickly share images or files with Gemini and ask related queries, expanding the app’s utility in various contexts. Notably, all six widgets can also be set as corner buttons on the iPhone’s lockscreen, further enhancing their accessibility.

Alongside the lockscreen widget update, Google also introduced new Gemini Live features at the Mobile World Congress (MWC) 2025 in Barcelona. These new capabilities were showcased to demonstrate how the app is evolving, with real-time interaction and advanced functionality becoming integral parts of the Gemini experience. With these updates, Gemini continues to solidify its position as a powerful and convenient AI tool, now with even more ways for users to interact and access its features on the go.