Qualcomm Revolutionizes Android Smartphones with On-Device Generative AI: Highlights from MWC 2024
Unlocking the Power of AI: Qualcomm’s AI Hub with Over 75 Optimized Models
At the highly anticipated Mobile World Congress (MWC) 2024, Qualcomm emerged as a trailblazer in the realm of artificial intelligence (AI) for Android smartphones. Stepping into the spotlight, Qualcomm unveiled a spectrum of cutting-edge generative AI features set to transform user experiences. Powered by Snapdragon and Qualcomm platforms, these innovations are poised to redefine the boundaries of on-device AI capabilities. Among the unveiled features are a dedicated large language model (LLM) tailored for multimodal responses and an ingenious image generation tool, promising users a seamless blend of creativity and functionality.
One of the most compelling aspects of Qualcomm’s AI showcase lies in its commitment to device-centric AI processing. In a departure from conventional practices that rely heavily on remote servers, Qualcomm’s AI models are entirely localized within the device itself. This paradigm shift not only enhances user privacy and data security but also unlocks new avenues for personalized experiences. By harnessing the power of on-device AI, Qualcomm empowers developers to craft applications that seamlessly adapt to individual preferences, ushering in a new era of user-centric design and functionality.
Central to Qualcomm’s vision is the democratization of AI development. Recognizing the pivotal role of developers in driving innovation, the chipmaker has made significant strides in expanding accessibility to AI tools and resources. At the heart of this initiative lies the Qualcomm AI Hub, a comprehensive repository boasting over 75 meticulously optimized AI models. From Whisper to ControlNet, Stable Diffusion, and Baichuan 7B, developers are equipped with a rich tapestry of AI frameworks to fuel their creative endeavors. Through partnerships with GitHub and Hugging Face, Qualcomm ensures that these resources are readily available to aspiring developers worldwide, fostering a vibrant ecosystem of AI-driven innovation and collaboration.
The company says these AI models will also require less computational power and will cost less to build apps on, since they are optimized for its platforms. The fact that all 75-plus models are small and built for particular tasks also contributes to this efficiency. So, while users will not get a one-stop-shop chatbot, the models offer ample use cases for niche tasks such as image editing or transcription.
To make the process of developing apps using the models faster, Qualcomm has added multiple automation processes to its AI library. "The AI model library automatically handles model translation from source framework to popular runtimes and works directly with the Qualcomm AI Engine Direct SDK, then applies hardware-aware optimizations," the company stated.
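Conceptually, that automation is a compile pipeline: translate a model from its source framework into a runtime-specific form, then run hardware-aware optimization passes. The sketch below illustrates this flow with placeholder stages; every function and field name is a hypothetical assumption for illustration, not Qualcomm's actual API.

```python
# Hypothetical sketch of a compile pipeline like the one described:
# chained passes that first translate the model to a target runtime,
# then apply hardware-aware optimizations. Names are illustrative only.

def translate(model):
    # e.g. a PyTorch/ONNX source -> a runtime-specific representation
    return {**model, "runtime": "target-runtime"}

def optimize_for_hardware(model):
    # e.g. quantize weights and fuse operations for the on-device NPU
    return {**model, "quantized": True}

def compile_model(model, passes=(translate, optimize_for_hardware)):
    # Apply each pass in order, producing a deployable artifact
    for p in passes:
        model = p(model)
    return model

artifact = compile_model({"name": "whisper", "framework": "pytorch"})
print(artifact)
```

The point of chaining passes this way is that the developer supplies only the source model; framework translation and device-specific tuning happen automatically.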
Apart from the small AI models, the American semiconductor company also unveiled LLM tools. These are currently in the research phase and were only demonstrated at the MWC event. The first is Large Language and Vision Assistant (LLaVA), a multimodal LLM with more than seven billion parameters. Qualcomm said it can accept multiple types of data inputs, including text and images, and generate multi-turn conversations with an AI assistant about an image.
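A multimodal, multi-turn exchange like the one LLaVA demonstrated is commonly represented as a list of turns whose content mixes text and image parts. The structure below is a generic illustration of that pattern, not LLaVA's actual input format.

```python
# Generic sketch of a multimodal, multi-turn conversation: each turn has a
# role and a list of content parts that can be text or image references.
# The schema is an illustrative assumption, not LLaVA's real wire format.

conversation = [
    {"role": "user", "content": [
        {"type": "image", "path": "photo.jpg"},
        {"type": "text", "text": "What breed is this dog?"},
    ]},
    {"role": "assistant", "content": [
        {"type": "text", "text": "It looks like a border collie."},
    ]},
    {"role": "user", "content": [
        {"type": "text", "text": "Is it suited to apartment living?"},
    ]},
]

# Collect the distinct input modalities present across the conversation
modalities = {part["type"] for turn in conversation for part in turn["content"]}
print(sorted(modalities))  # ['image', 'text']
```

Because each turn keeps its role and content parts together, the assistant can refer back to the image in later turns, which is what makes the conversation "multi-turn" rather than a single image caption.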
Another tool demonstrated is Low-Rank Adaptation (LoRA). It was demoed on an Android smartphone generating AI-powered images with Stable Diffusion. LoRA is not an LLM itself; rather, it is a technique that reduces the number of trainable parameters in AI models, making them more efficient and easier to scale. Beyond image generation, Qualcomm claimed it can also be used to customize AI models, enabling tailored personal assistants, improved language translation, and more.
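The parameter reduction LoRA achieves can be made concrete with simple arithmetic: instead of fine-tuning a full weight matrix W of shape (d_out, d_in), LoRA freezes W and trains only a low-rank update B @ A, where B is (d_out, r) and A is (r, d_in) with rank r much smaller than the matrix dimensions. The dimensions below are illustrative choices, not taken from any Qualcomm or Stable Diffusion model.

```python
# Minimal, framework-free sketch of why LoRA shrinks the trainable
# parameter count. Dimensions are hypothetical, for illustration only.

d_out, d_in, rank = 4096, 4096, 8

full_params = d_out * d_in                 # weights a full fine-tune updates
lora_params = d_out * rank + rank * d_in   # weights LoRA actually trains

reduction = lora_params / full_params
print(f"full fine-tune: {full_params:,} trainable parameters")
print(f"LoRA (r={rank}): {lora_params:,} trainable parameters")
print(f"LoRA trains only {reduction:.2%} of the full parameter count")
```

With these example numbers, LoRA trains roughly 0.4% of the parameters a full fine-tune would, which is what makes on-device customization of large models plausible.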