While mostly staying out of the generative AI competition, Apple has released an open source array framework for machine learning on GitHub, which can be used to build transformer language models and text generation AI on the company’s own silicon.

What is Apple’s MLX framework?

MLX is a set of tools for developers building AI models on Apple silicon, supporting transformer language model training, large-scale text generation, text fine-tuning, image generation and speech recognition. Apple machine learning research scientist Awni Hannun announced the MLX machine learning framework on X (formerly Twitter) on Dec. 5.

SEE: Apple recommends users update to iOS 17.1.2, iPadOS 17.1.2 and macOS 14.1.2 due to zero-day vulnerabilities. (TechRepublic)

MLX’s examples use Meta’s Llama for text generation and low-rank adaptation (LoRA) for fine-tuning. MLX’s image generation is based on Stability AI’s Stable Diffusion, while MLX’s speech recognition hooks up to OpenAI’s Whisper.

MLX is intended to be familiar to deep learning researchers

MLX was inspired by NumPy, PyTorch, Jax and ArrayFire, but unlike its inspirations, it keeps arrays in shared memory, according to the MLX page on GitHub. Operations can run on any currently supported device, which for now means the CPU and GPU, without creating copies of the data.
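To illustrate, here is a minimal sketch of how that unified memory model looks in MLX’s Python API, modeled on the unified memory example in the MLX documentation; the stream argument and the mx.cpu/mx.gpu device names reflect the documentation at the time of writing and may change between versions.

```python
# Minimal sketch of MLX's unified memory model, based on the example in
# the MLX documentation. Device names and signatures may vary by version.
import mlx.core as mx

a = mx.random.normal((4096, 4096))
b = mx.random.normal((4096, 4096))

# Both arrays live in shared memory, so the same data can be used by
# operations scheduled on either device without an explicit copy.
c_cpu = mx.add(a, b, stream=mx.cpu)
c_gpu = mx.add(a, b, stream=mx.gpu)
```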

MLX’s Python API should be familiar to developers who already know how to use NumPy, the Apple team said on GitHub; developers can also use MLX through a C++ API that mirrors the Python API. Higher-level packages similar to those in PyTorch aim to simplify building complex machine learning models. Composable function transformations are built in, Apple said, meaning differentiation, vectorization and computation graph optimization can be done automatically. Computations in MLX are lazy as opposed to eager, meaning arrays only materialize when needed. Apple claims computation graphing and debugging are “simple and intuitive.”
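As a rough illustration of the NumPy-style API, the composable grad transformation and lazy evaluation, the sketch below uses mlx.core functions (mx.grad, mx.eval) as described in the MLX documentation; the tiny loss function and array shapes are invented for the example.

```python
# A small sketch of MLX's Python API: NumPy-like array math, a composable
# grad transformation, and lazy evaluation. The loss function and shapes
# here are illustrative only.
import mlx.core as mx

def loss(w, x, y):
    # Ordinary array expressions, written much as they would be in NumPy.
    return mx.mean((x @ w - y) ** 2)

x = mx.random.normal((8, 3))
y = mx.random.normal((8,))
w = mx.zeros((3,))

# mx.grad transforms `loss` into a new function that returns the gradient
# with respect to the first argument (w).
grad_fn = mx.grad(loss)
g = grad_fn(w, x, y)

# Computation is lazy: `g` is not materialized until it is evaluated
# with mx.eval or otherwise used, such as by printing it.
mx.eval(g)
print(g)
```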

“The framework is intended to be user-friendly, but still efficient to train and deploy models,” the Apple developers wrote on GitHub. “The design of the framework itself is also conceptually simple. We intend to make it easy for researchers to extend and improve MLX with the goal of quickly exploring new ideas.”

NVIDIA AI research scientist Jim Fan wrote on LinkedIn on Dec. 6: “The release did an excellent job on designing an API familiar to the deep learning audience, and showing minimalistic examples on OSS models that most people care about: Llama, LoRA, Stable Diffusion, and Whisper.”

Apple’s place in the competitive AI landscape

Apple, which has had its artificial intelligence assistant Siri since well before the generative AI craze, seems to be focused on tools for making large language models rather than on producing the models themselves or the chatbots that can be built with them. However, Bloomberg’s Mark Gurman reported on Oct. 22, 2023, that “…Apple executives were caught off guard by the industry’s sudden AI fever and have been scrambling since late last year to make up for lost time,” and that Apple is working on upcoming generative AI features for iOS and Siri. Compare Apple to Google, which recently released its powerful Gemini large language model on the Pixel 8 Pro and in the Bard conversational AI; Google, in turn, is still lagging behind its rival OpenAI in terms of widespread generative AI functionality.

Note: TechRepublic has reached out to Apple for more information about MLX. This article will be updated with more information based on Apple’s response.
