France’s Mistral Launches Europe’s First AI Reasoning Model to Challenge US and China

French AI startup Mistral has unveiled Europe’s first AI reasoning model, aiming to rival leading American and Chinese competitors by leveraging logical thinking for complex problem-solving. The launch marks a significant step in Europe’s bid to carve out a homegrown presence in the competitive AI landscape.

Mistral’s reasoning models use “chain-of-thought” techniques, in which the AI generates intermediate reasoning steps before answering difficult questions. This approach could help overcome the limits of the industry’s traditional strategy of simply scaling up model size with more data and computing power.
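At the prompting level, the chain-of-thought idea can be sketched very simply: rather than asking a model for an answer outright, the prompt instructs it to show intermediate deductions first. The helper below is an illustrative sketch only; the function name and prompt wording are assumptions, not Mistral’s actual API or prompt format.

```python
def build_prompt(question: str, chain_of_thought: bool) -> str:
    """Wrap a question in either a direct or a step-by-step prompt.

    Illustrative sketch of chain-of-thought prompting; the exact
    wording a production system uses is an assumption here.
    """
    if chain_of_thought:
        return (
            f"Question: {question}\n"
            "Think through the problem step by step, showing each "
            "intermediate deduction, then state the final answer."
        )
    return f"Question: {question}\nAnswer directly."


# Compare the two prompt styles for the same question.
direct = build_prompt("Is 1,001 divisible by 7?", chain_of_thought=False)
stepwise = build_prompt("Is 1,001 divisible by 7?", chain_of_thought=True)
print(stepwise)
```

Reasoning models internalize this behavior through training, so the intermediate steps are produced without explicit prompting, but the prompt-level version above conveys the core idea.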

Backed by venture capital at a $6.2 billion valuation, Mistral differentiates itself by emphasizing its European roots and commitment to open source, contrasting with proprietary models from companies like OpenAI and Google. French President Emmanuel Macron has publicly supported the startup, highlighting its strategic importance.

Mistral’s product lineup includes an open-source Magistral Small model available for free download on Hugging Face, and a more advanced Magistral Medium tailored for business clients. The models support reasoning in multiple languages, including English, French, Spanish, Arabic, and Simplified Chinese.

While American AI giants have largely kept their most advanced reasoning models proprietary, Chinese firms like DeepSeek and Alibaba have adopted open-source approaches to showcase their technology. Meta has integrated reasoning capabilities into its latest models but has yet to release a standalone reasoning model.

Industry observers see Mistral’s launch as Europe’s best chance to catch up in the AI arms race, particularly as the field shifts focus from brute-force scaling to more sophisticated reasoning abilities.

OpenAI Partners with Google Cloud in Surprising AI Rivalry Deal

OpenAI has struck a significant cloud computing deal with Alphabet’s Google Cloud to support its growing AI infrastructure needs, sources told Reuters. This collaboration, finalized in May, marks an unprecedented partnership between two major competitors in artificial intelligence.

The move signals OpenAI’s efforts to diversify beyond its longtime partner Microsoft, which had exclusively provided data center services until January. Google Cloud will now supply additional computing power to OpenAI for training and running its large language models, including ChatGPT.

The deal highlights the immense compute demands required for AI development and how competitive dynamics are evolving. Despite the fierce rivalry—OpenAI’s ChatGPT poses a strong challenge to Google’s dominant search business—both companies have chosen to cooperate in meeting infrastructure needs.

Alphabet’s stock rose 2.1% following the news, while Microsoft shares slipped 0.6%. Analysts at Scotiabank called the partnership “somewhat surprising” but a strategic win for Google Cloud, which has been aggressively expanding its AI hardware offerings, including tensor processing units (TPUs) used internally and for other customers like Apple.

OpenAI’s recent moves to reduce dependency on Microsoft include partnerships with SoftBank, Oracle, and CoreWeave, as well as plans to develop its own AI chips to cut reliance on external hardware providers. Meanwhile, Microsoft and OpenAI continue to renegotiate their multibillion-dollar investment terms.

Google’s Cloud business, generating $43 billion in sales in 2024, aims to capture market share against rivals Amazon and Microsoft by positioning itself as a neutral cloud provider favored by AI startups with costly infrastructure needs.

This deal presents a complex balancing act for Alphabet CEO Sundar Pichai, who must allocate limited chip capacity between competing demands from Google’s own AI projects and cloud customers. Despite ChatGPT’s threat to Google’s search dominance, Pichai remains confident in the company’s position.

Meta’s Threads to Test Direct Messaging Feature in Select Markets

Meta Platforms announced it will begin testing a direct messaging feature on its Threads app in select markets, including Hong Kong and Thailand. This new feature will introduce a dedicated inbox within Threads, removing the need for users to switch to Instagram’s messaging platform, CEO Mark Zuckerberg said on Tuesday.

Messages on Threads will not be encrypted at launch. The addition of direct messaging aims to make Threads more competitive with rivals such as X (formerly Twitter), TikTok, and Reddit, enhancing user engagement by offering a more complete social experience.

Threads, launched in 2023 as a direct competitor to X after Elon Musk’s takeover, has grown rapidly and now has over 350 million monthly active users. In April, Meta expanded advertising on Threads to all eligible advertisers worldwide, although the company does not expect Threads to be a major revenue driver in 2025.

Research firm Emarketer projects that Threads’ monthly active users in the U.S. will rise 17.5% to 60.5 million by next year, overtaking X, whose U.S. user base is expected to decline to 50 million.

Amid U.S. trade restrictions and the rise of AI-powered ad targeting, social media platforms are enhancing their features and user experience to maintain competitiveness in a crowded market.