Apple Unveils Open Source MLX Framework to Optimize Machine Learning Performance on Apple Silicon
Apple has introduced MLX, also known as ML Explore, a machine learning (ML) framework built specifically for Apple Silicon computers. Designed to streamline the training and execution of ML models on devices powered by Apple’s M1, M2, and M3 series chips, MLX stands out for its unified memory model. The framework is open source, making machine learning capabilities more accessible to researchers and enthusiasts, who can run MLX directly on their own laptops and desktops.
On the technical side, Apple’s documentation on GitHub explains that MLX offers both a C++ API and a Python API that closely follows NumPy, the widely used Python library for scientific computing. MLX also ships higher-level packages that let users build and run more complex models on their Apple Silicon-powered devices.
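To give a sense of what that NumPy alignment means in practice, the sketch below uses NumPy itself; in MLX the same operations are written almost identically against the `mlx.core` namespace (conventionally imported as `mx`), with calls like `mx.array` and `mx.ones` in place of their NumPy counterparts.

```python
import numpy as np

# MLX's Python API mirrors NumPy closely: in MLX this would read
# `import mlx.core as mx`, with `mx.array`, `mx.ones`, etc.
a = np.array([1.0, 2.0, 3.0])
b = np.ones(3)

c = a + b           # element-wise addition with broadcasting
d = (c ** 2).sum()  # reductions work the same way

print(c)  # [2. 3. 4.]
print(d)  # 29.0
```

One notable difference: unlike NumPy, MLX evaluates arrays lazily, so computations are only materialized when results are needed (or forced explicitly with `mx.eval`).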
A key feature of MLX is how it simplifies training and running ML models on a Mac. Previously, developers had to convert and optimize their models with Core ML before they could run efficiently on Apple hardware. MLX removes that conversion step, letting users with Apple Silicon computers train and run models directly on their devices. This improves efficiency and makes ML workflows considerably more accessible within the Apple ecosystem.
Apple says that MLX’s design follows other popular frameworks in use today, including ArrayFire, JAX, NumPy, and PyTorch. The company has highlighted its framework’s unified memory model: MLX arrays live in shared memory, and operations on them can run on any supported device type (currently, Apple supports the CPU and GPU) without creating copies of the data.
The company has also shared examples of MLX in action, such as image generation with Stable Diffusion on Apple Silicon hardware. When generating a batch of images, Apple says MLX is faster than PyTorch for batch sizes of 6, 8, 12, and 16, with up to 40 percent higher throughput than the latter.
The tests were conducted on a Mac powered by an M2 Ultra chip, the company’s fastest processor to date: MLX generates 16 images in 90 seconds, while PyTorch takes around 120 seconds to perform the same task, according to the company.
Other examples of MLX in action include text generation with Meta’s open source LLaMA language model, as well as the Mistral large language model. AI and ML researchers can also use OpenAI’s open source Whisper tool to run speech recognition models on their computers using MLX.
The release of Apple’s MLX framework could make ML research and development easier on the company’s hardware, eventually helping developers build better tools for apps and services that run on-device ML features efficiently on a user’s computer.