Articles

Google Unveils AI Glasses in Live Demo, Teases Upcoming Gemini Features

Google has recently unveiled several updates to its Gemini platform, including the introduction of the Gemini 2.5 models, but these updates are only a glimpse of what the company has in store. At a recent TED Talk, Google offered an exclusive live demonstration of its new AI Glasses, providing a sneak peek at their potential capabilities. The demonstration not only highlighted the technology behind the glasses but also hinted at upcoming Gemini features that could significantly enhance the user experience in the near future.

During the live demo, Shahram Izadi, Vice President and General Manager of Android XR at Google, introduced the AI Glasses in a captivating presentation. The wearable device, which appears to draw on the 2013 Google Glass design that never achieved mainstream adoption, is now infused with Gemini’s advanced features. This fusion aims to enhance the glasses’ functionality, making them not just another wearable but a smart device deeply integrated with Google’s AI advancements.

One of the standout features teased in the presentation was Gemini Live, a two-way real-time voice conversation feature designed to transform how users interact with AI. This feature is expected to expand the utility of the glasses, allowing users to engage in seamless, interactive conversations with the device. Whether for professional use, personal assistance, or entertainment, the real-time voice capabilities could redefine the way users experience AI-powered wearables.

Looking ahead, Google hinted at further enhancements to the Gemini platform, focusing on improving both its performance and its integration with wearable technology. With more Gemini features on the horizon, the company aims to create a richer, more interactive user experience, positioning its AI Glasses as an essential tool for the future of wearable technology. As Google continues to innovate, it seems clear that the company is setting the stage for a future where AI seamlessly blends into everyday life through advanced devices like these glasses.

Huawei Preparing to Ship New AI Chip as China Seeks Alternatives to Nvidia Solutions

Huawei Technologies is set to begin mass shipments of its new 910C artificial intelligence chip to Chinese customers as early as next month, according to sources familiar with the matter. These shipments come at a crucial time, as China faces increasing challenges in securing domestic alternatives to Nvidia’s AI chips, which have been restricted due to escalating tensions between the U.S. and China. Some shipments of the Huawei 910C have already been made, with many Chinese AI companies eagerly awaiting a local solution to meet their growing demand for high-performance AI hardware.

The timing of the release is significant, as Chinese AI firms have been scrambling to find alternatives to Nvidia’s H20 chip, which had been widely used in AI development. Recently, the U.S. government announced that sales of the H20 to China would now require an export license, placing additional strain on Chinese tech companies that rely heavily on Nvidia’s advanced GPUs for AI research and deployment. With the Huawei 910C, China is looking to reduce its dependency on foreign technology, particularly in the critical area of AI chip development.

The Huawei 910C, an AI accelerator, represents an evolution of the company’s previous offerings rather than a revolutionary breakthrough. The 910C combines two 910B processors into a single package using advanced integration techniques, delivering performance comparable to Nvidia’s H100 chip. This architectural design allows Huawei to offer a competitive product without a ground-up redesign, making it an appealing alternative for AI applications in China. While the company has yet to publicly confirm the details of the chip’s capabilities or its shipment schedule, the timing aligns with the urgent need for domestic alternatives to Nvidia’s technology.
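The appeal of the dual-die approach can be illustrated with rough back-of-the-envelope arithmetic. The figures below are illustrative assumptions only, not confirmed specifications for either chip, and the packaging-efficiency discount is a hypothetical factor standing in for die-to-die interconnect overhead:

```python
# Illustrative sketch: estimated throughput of a dual-die package versus a
# single competing accelerator. All numbers are assumptions for illustration,
# not confirmed specifications of the 910C or H100.

ASSUMED_910B_FP16_TFLOPS = 380.0   # assumed per-die FP16 throughput
ASSUMED_H100_FP16_TFLOPS = 990.0   # assumed H100 dense FP16 throughput
PACKAGING_EFFICIENCY = 0.9         # assumed loss from die-to-die interconnect


def dual_die_tflops(per_die: float, efficiency: float = PACKAGING_EFFICIENCY) -> float:
    """Estimate throughput of two dies in one package, discounted for
    interconnect and packaging overhead."""
    return 2 * per_die * efficiency


estimate = dual_die_tflops(ASSUMED_910B_FP16_TFLOPS)
ratio = estimate / ASSUMED_H100_FP16_TFLOPS
print(f"Estimated dual-die throughput: {estimate:.0f} TFLOPS "
      f"({ratio:.0%} of the assumed H100 figure)")
```

Under these assumed numbers, two older dies in one package land in the same rough performance band as a newer monolithic design, which is the essence of why the strategy works without a new chip architecture.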

The geopolitical context behind the 910C’s development is important, as the U.S. has been restricting the sale of its most advanced AI products to China, citing national security concerns. In addition to the H20 chip, China has also been cut off from Nvidia’s flagship B200 chip, further intensifying the need for local solutions. As Huawei ramps up its efforts to ship the 910C, it is positioning itself as a key player in China’s push to maintain technological independence in the face of foreign restrictions.

Microsoft Rolls Out Copilot Vision to All Users on Edge Browser

Microsoft has officially rolled out Copilot Vision to all users of its Edge browser, marking a significant expansion of its AI-powered capabilities. Initially introduced in December 2024, Copilot Vision was limited to Copilot Pro subscribers. However, as of last week, the feature is now freely available to every Edge user. Designed to work as a real-time assistant, Copilot Vision enables the AI chatbot to interpret and interact with the contents of any webpage, assisting users with tasks such as summarizing content, identifying visual elements, and even guiding them through online research or shopping.

The announcement was made by Mustafa Suleyman, CEO of Microsoft AI, in a post on X (formerly Twitter). He highlighted the feature’s usability and simplicity, saying it will “think out loud with you when you’re browsing online.” Suleyman emphasized that Copilot Vision is meant to reduce the friction of traditional browsing—eliminating the need to constantly copy-paste text or formulate specific search queries. This announcement signals Microsoft’s commitment to making its AI tools more accessible and integrated directly into everyday digital workflows.

Copilot Vision works by using computer vision to “see” the content of a webpage in real time. It then uses that visual context, combined with user prompts, to generate helpful responses. The tool includes a voice mode, allowing users to speak their requests instead of typing them. Microsoft has opted to make this a user-controlled, opt-in feature to address potential privacy concerns. To enable it, users need to open a specific link within Edge and follow the setup instructions. Once activated, a floating bar with a microphone and text field appears, allowing seamless interaction through voice or text.
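The general pattern described above, capturing the visible page and pairing it with the user’s prompt before querying a model, can be sketched in miniature. Every name here (`PageSnapshot`, `MultimodalModel`, `assist`) is invented for illustration and does not reflect Microsoft’s actual implementation or any real API:

```python
# Hypothetical sketch of a page-aware assistant: capture what the user sees,
# combine it with their prompt, and pass both to a vision-language model.
# All class and function names are illustrative, not Microsoft's APIs.

from dataclasses import dataclass


@dataclass
class PageSnapshot:
    url: str
    screenshot_png: bytes  # the pixels the assistant "sees"
    visible_text: str      # extracted text providing additional context


class MultimodalModel:
    """Stand-in for a vision-language model; a real system would call a
    hosted model rather than this placeholder."""

    def answer(self, snapshot: PageSnapshot, prompt: str) -> str:
        # Trivial placeholder logic: show what a real model would combine.
        return (f"Looking at {snapshot.url}: responding to '{prompt}' "
                f"using {len(snapshot.visible_text)} chars of page text.")


def assist(snapshot: PageSnapshot, prompt: str, model: MultimodalModel) -> str:
    # The reply is grounded in the current page plus the user's request.
    return model.answer(snapshot, prompt)


snap = PageSnapshot("https://example.com/product", b"",
                    "Great chair. Sturdy. 4.5 stars.")
print(assist(snap, "Summarize the reviews", MultimodalModel()))
```

The key design point mirrored here is that the visual context and the prompt travel together, which is what lets the assistant answer questions about whatever page is currently on screen.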

In terms of practical uses, Copilot Vision is designed to enhance the browsing experience in meaningful ways. For instance, it can quickly summarize multiple product reviews, helping users make informed decisions. It can also identify and describe specific design elements in product photos—such as determining the style of a piece of furniture—and assist users in locating similar items using conversational prompts. By combining visual context with natural language understanding, Copilot Vision turns the Edge browser into a more intelligent and interactive space for users navigating the web.