Mistral Unveils Open Source 8X22B Mixture of Experts AI Model, Benchmarks Show Significant Improvements

Mistral Releases 8X22B AI Model Open Source via Torrent Magnet Link

Mistral, a prominent player in the artificial intelligence (AI) space, unveiled its latest AI model, dubbed 8X22B, on Wednesday. The new model follows the 8X7B released in December 2023 and boasts a larger parameter count. Known for its commitment to fully open-source AI models, Mistral took an unconventional approach with the launch of the 8X22B, eschewing a traditional announcement post or blog update.

While Mistral itself did not provide benchmark data for the 8X22B model, members of the Hugging Face community took it upon themselves to test the model and share benchmark scores. Impressively, these results suggest that the 8X22B model narrows the performance gap with closed models from industry giants like OpenAI and Google.

In a departure from standard release practices, Mistral chose to distribute the 8X22B AI model through a torrent magnet link, shared via its official X (formerly Twitter) account. This direct distribution method aligns with Mistral’s ethos of making AI models readily accessible for download, bypassing the need for formal announcements or intermediaries.
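
For readers who want to follow that route themselves, the sketch below shows one way a magnet-link release like this can be retrieved programmatically, using the python-libtorrent bindings. The magnet URI is a placeholder, the save path and polling interval are arbitrary choices, and none of this is tooling endorsed by Mistral; it is a minimal sketch of fetching a torrent from a magnet link.

```python
import time
import libtorrent as lt

# Placeholder URI -- the real magnet link was posted on Mistral's X account.
MAGNET_URI = "magnet:?xt=urn:btih:<hash-from-mistral-post>"

params = lt.parse_magnet_uri(MAGNET_URI)
params.save_path = "./mixtral-8x22b"  # arbitrary download directory

session = lt.session()
handle = session.add_torrent(params)

# Poll until the ~262GB of files have finished downloading.
while not handle.status().is_seeding:
    status = handle.status()
    print(f"{status.progress * 100:6.2f}% at "
          f"{status.download_rate / 1_000_000:6.1f} MB/s", end="\r")
    time.sleep(5)
print("\nDownload complete.")
```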

Mistral stands out in the AI landscape as one of the few platforms offering truly open-source models, providing not only the model weights but also the entire architecture. However, it's worth noting that most conventional devices lack the computational resources to run a model as powerful as the 8X22B, and attempting to do so could cause them to malfunction. The substantial file size of 262GB further underscores the model's scale and resource requirements.
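
To give a sense of what running the model actually involves, here is a minimal, hedged sketch of loading a Mixtral-style checkpoint with the Hugging Face transformers library. The repository ID is an assumed community-converted mirror (the official release was a raw weight dump over BitTorrent), and the hardware comments are rough guidance rather than official requirements.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed community-converted checkpoint; Mistral's own release was raw
# weights distributed via torrent, not an official Hugging Face upload.
MODEL_ID = "mistral-community/Mixtral-8x22B-v0.1"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)

# device_map="auto" shards the weights across every available GPU and spills
# the remainder to CPU RAM; even in bfloat16 this needs server-class hardware,
# far beyond what a typical laptop or phone can provide.
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

prompt = "Mixture-of-experts models scale efficiently because"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```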

As an autocomplete AI model, Mistral's 8X22B serves a distinct purpose compared to chat or instruct variants. Chat models such as OpenAI's ChatGPT and Google's Gemini AI excel at understanding natural language and contextual queries, while instruct models like Meta's Code Llama 7B and 13B are used mainly by developers who ask the model to perform a specific task. In contrast, an autocomplete model like the 8X22B simply completes the text provided in the prompt.
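
The difference is easiest to see in how the two kinds of model are prompted. The sketch below contrasts a plain completion prompt with a chat-template prompt using the transformers library; Mistral's earlier instruct-tuned 8x7B model is referenced purely as an illustration of a chat-style variant, and the exact template text is whatever that tokenizer defines.

```python
from transformers import AutoTokenizer

# An autocomplete (base) model is given plain text and simply continues it.
completion_prompt = "The Eiffel Tower is located in"

# A chat/instruct model expects a structured conversation, which the
# tokenizer's chat template converts into the model's special-token format.
# The instruct-tuned 8x7B model is used here only as an illustration.
tok = AutoTokenizer.from_pretrained("mistralai/Mixtral-8x7B-Instruct-v0.1")
messages = [{"role": "user", "content": "Where is the Eiffel Tower located?"}]
chat_prompt = tok.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

print(repr(completion_prompt))  # plain text for the base model to extend
print(repr(chat_prompt))        # wrapped in the model's instruction tokens
```

In practice, a base model given the chat-formatted question would simply keep writing text that follows it rather than answer it, which is why chat and instruct variants are typically fine-tuned on top of autocomplete models like this one.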
