Introducing Microsoft’s Enhanced Copilot Experience: Unveiling the Power of AI with OpenAI’s GPT-4 Turbo Model

Unlocking Advanced Capabilities: Microsoft Harnesses GPT-4 Turbo to Elevate Copilot’s Task Complexity and Duration

In the evolving landscape of artificial intelligence, Microsoft has steadily extended its AI-driven services across Bing Search, Windows, and the Office 365 suite, which includes Microsoft Word, Excel, PowerPoint, and Teams. Microsoft recently consolidated several of these services, including the Bing Chat AI chatbot, under a single banner: Microsoft Copilot. As the company marks a year of Copilot’s evolution, it plans to introduce a range of new features in 2024.

In a recent blog post, Microsoft outlined its roadmap for Copilot, announcing that the AI companion will gain advanced capabilities in the coming weeks. The most notable change is the integration of OpenAI’s latest model, GPT-4 Turbo, which will allow Copilot to handle more complex and longer-running tasks. GPT-4 Turbo is currently being tested with a select group of users and will roll out across Copilot services in the near future. In practice, this means the Copilot AI companion, formerly known as Bing Chat, will soon run on GPT-4 Turbo on the web, in Windows, and across other Microsoft services.

Beyond text, Copilot is also gaining visual upgrades with the incorporation of the new DALL-E 3 model for image generation, which lets Copilot produce richer, more contextually relevant images from user prompts. The upgraded image generation has already been deployed and can be tried by visiting bing.com/create or by asking Copilot to generate an image. Pairing stronger language and image models underscores Microsoft’s commitment to pushing the boundaries of AI integration, promising users a more intuitive and dynamic Copilot experience.
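For developers, the same DALL-E 3 model is available through OpenAI’s public image API, separately from Copilot’s integration. The sketch below only assembles the parameter set such a call takes; the helper name `build_image_request` and its validation logic are illustrative assumptions, and actually sending the request would require the `openai` SDK and an API key, which are out of scope here.

```python
# Sketch: assembling a DALL-E 3 image-generation request.
# The model name and size options follow OpenAI's public image API;
# Copilot's own integration is not exposed this way. The helper
# below is a hypothetical illustration, not a Microsoft API.

VALID_SIZES = {"1024x1024", "1792x1024", "1024x1792"}  # sizes DALL-E 3 accepts

def build_image_request(prompt: str, size: str = "1024x1024") -> dict:
    """Return the parameter dict an OpenAI-style image call would take."""
    if not prompt.strip():
        raise ValueError("prompt must be non-empty")
    if size not in VALID_SIZES:
        raise ValueError(f"unsupported size for dall-e-3: {size}")
    return {"model": "dall-e-3", "prompt": prompt, "n": 1, "size": size}

if __name__ == "__main__":
    req = build_image_request("a watercolor lighthouse at dawn")
    print(req["model"])
```

With the `openai` SDK, the resulting dict would be passed to the images endpoint; building and validating it separately keeps the request logic testable without a network call.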

Microsoft also announced that Edge users will soon be able to have Copilot write text based on web pages. With Inline Compose with rewrite menu, users can select a block of text on any website and ask Copilot to rewrite it. The company is also expanding visual search by combining the GPT-4 with Vision model with Bing image search and web search data. The new capability, dubbed Multi-Modal with Search Grounding, promises better image understanding for user queries and will be available soon.

Finally, the company is developing a Code Interpreter feature and a Deep Search tool for Bing. The former, Microsoft claims, will perform more accurate calculations, coding, data analysis, visualization, math, and more. The company is currently gathering feedback on Code Interpreter and plans to roll it out widely soon.
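The general idea behind a code-interpreter-style feature is to answer numeric questions by actually executing a computation rather than predicting the answer as text. A minimal sketch of that idea, assuming nothing about Microsoft’s actual implementation, is a calculator that parses an expression into an AST and evaluates only whitelisted arithmetic nodes, so arbitrary code can never run:

```python
# Minimal sketch of tool-assisted calculation: parse an arithmetic
# expression into an AST and evaluate only whitelisted node types.
# Illustrative only -- not Microsoft's Code Interpreter.
import ast
import operator

_OPS = {
    ast.Add: operator.add, ast.Sub: operator.sub,
    ast.Mult: operator.mul, ast.Div: operator.truediv,
    ast.Pow: operator.pow, ast.USub: operator.neg,
}

def safe_eval(expr: str) -> float:
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.operand))
        raise ValueError("disallowed expression")  # reject calls, names, etc.
    return walk(ast.parse(expr, mode="eval"))

print(safe_eval("2 * (3 + 4) ** 2"))  # 98
```

The whitelist approach matters because a naive `eval()` of model-generated text would execute anything; restricting evaluation to known-safe node types is the standard way to keep such a tool contained.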

Deep Search, on the other hand, will deliver optimized search results in Bing for complex topics. Available as a button next to the Bing search bar, the tool surfaces more relevant results by expanding queries into more comprehensive descriptions.
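The query-expansion idea Deep Search describes can be sketched in a few lines. In this illustration the expansion uses fixed templates purely for demonstration; Bing’s actual Deep Search performs the expansion with a language model, and the function name `expand_query` is a hypothetical stand-in:

```python
# Sketch of query expansion for a "Deep Search"-style feature:
# rewrite a terse query into several fuller descriptions that a
# search backend can match against. Fixed templates are used here
# only for illustration; the real feature uses an LLM.

def expand_query(query: str) -> list[str]:
    q = query.strip()
    templates = [
        "{q}",
        "detailed explanation of {q}",
        "how does {q} work and why does it matter",
        "comparison of different approaches to {q}",
    ]
    return [t.format(q=q) for t in templates]

for variant in expand_query("loyalty points systems"):
    print(variant)
```

Each variant would then be issued as its own search, and the merged results re-ranked, which is why expanded queries can surface pages the original terse query would miss.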

Last month, Microsoft announced that its AI assistant Copilot, already available on Windows 11, would be coming to Windows 10. The tech giant also launched its own custom-designed AI computing chips, Maia and Cobalt, at its Ignite developer conference in November.

Microsoft Copilot was first integrated with Windows 11 as part of an update in September. The update also brought AI features to Paint and Snipping Tool.