The Metis AIPU is a high-performance accelerator for Edge AI, delivering up to 5x faster inference for Computer Vision tasks. Beyond raw speed, the Voyager™ SDK optimizes the entire AI pipeline to ensure real-world application efficiency.
When compared to other AI accelerators, Metis consistently outperforms in key benchmarks. The chart and table below show the frames per second (FPS) processed by Metis, compared to the throughput of other AI accelerators.
We’ve run numerous benchmarks and offer over 50 models in our Model Zoo for immediate use. At Axelera AI, software is a top priority, and we continuously enhance our models and capabilities to simplify AI development and integration. Performance matters only when users can trust inference accuracy. Thanks to Metis’ mixed-precision architecture and our SDK’s quantization, we achieve state-of-the-art accuracy.
The table below compares accuracy for various models running at full numerical precision (FP32) versus on Metis after quantization with the Voyager SDK. As shown, the accuracy reduction is minimal in many cases, and our software team remains committed to ongoing optimizations in future updates.
AI hardware is only as good as its software. That’s why we built Voyager™ SDK, enabling developers to maximize our high-performance hardware. With a simple, high-level YAML-based language, developers can build computer vision pipelines that integrate multiple neural networks and complex image processing tasks.
The SDK automatically compiles, optimizes, and deploys pipelines, running neural networks on the Metis AIPU while offloading preprocessing and post-processing to the host CPU, GPU, or media accelerator. Thanks to our flexible architecture, developers can allocate D-IMC cores as needed—whether running multiple models in parallel or dedicating cores to a single, compute-heavy model.
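To make the two deployment patterns above concrete, here is a toy allocator in Python. It is purely illustrative: the four-core count, the model names, and the `allocate_cores` function are assumptions for the sketch, not Voyager SDK APIs.

```python
def allocate_cores(models, total_cores=4):
    """Toy D-IMC core allocation: spread cores evenly across models,
    giving any leftover cores to the first (presumably heaviest) models.
    Illustrative only -- not a Voyager SDK API."""
    base, extra = divmod(total_cores, len(models))
    return {model: base + (1 if i < extra else 0)
            for i, model in enumerate(models)}

# Pattern 1: several models running in parallel, one core each.
print(allocate_cores(["detect", "classify", "segment", "track"]))
# → {'detect': 1, 'classify': 1, 'segment': 1, 'track': 1}

# Pattern 2: every core dedicated to a single compute-heavy model.
print(allocate_cores(["heavy_model"]))
# → {'heavy_model': 4}
```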
Running a Computer Vision application is much more than just running inference. At Axelera AI we believe it’s important to understand realized performance: how long it takes to get the answer a user is looking for, measured end to end. The Axelera AI Voyager SDK helps optimize the entire data pipeline, including the parts that run on the host CPU or embedded GPU. Why does this matter? The developer has a better experience because the SDK handles this work for them, and the user gets faster results.
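The gap between raw inference throughput and realized performance can be made concrete with a simple timing harness. The sketch below is illustrative only: the decode, preprocess, infer, and postprocess callables are hypothetical stand-ins for pipeline stages, not Voyager SDK functions.

```python
import time

def measure_end_to_end(frames, decode, preprocess, infer, postprocess):
    """Time the full pipeline and the inference stage separately, so the
    difference between raw and realized FPS is visible."""
    infer_time = 0.0
    start = time.perf_counter()
    for frame in frames:
        tensor = preprocess(decode(frame))
        t0 = time.perf_counter()
        result = infer(tensor)           # accelerator work
        infer_time += time.perf_counter() - t0
        postprocess(result)              # host-side work
    total = time.perf_counter() - start
    n = len(frames)
    return {
        "inference_fps": n / infer_time,   # raw accelerator throughput
        "end_to_end_fps": n / total,       # what the user actually sees
    }
```

If the non-neural stages are poorly scheduled, `end_to_end_fps` falls well below `inference_fps`; a well-optimized pipeline keeps the two numbers close.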
As the table shows, the Voyager SDK delivers the raw inference performance to the end-to-end application: by optimizing the execution of non-neural operations in the computer vision pipeline, it ensures that the application can take full advantage of the unmatched capabilities of Metis.
The Voyager SDK is compatible with a variety of host architectures and platforms to accommodate different application environments. Additionally, the SDK allows embedding a pipeline into an inference service, providing various preconfigured solutions for use cases ranging from fully embedded applications to distributed processing of multiple 4K streams.
Why is Metis so powerful? One of the key innovations that sets Metis apart from its competition is its use of Digital In-Memory Computing (D-IMC) technology. D-IMC processes data directly within the memory cells that store it, enabling extremely high-throughput, power-efficient matrix-vector multiplication. This approach is particularly beneficial for AI workloads, which require both high-speed data access and intensive computation, all at an average power consumption below 10 watts!
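The operation D-IMC accelerates is matrix-vector multiplication, which dominates the inner loops of neural-network layers. The NumPy sketch below shows the operation itself and why it is memory-bound on conventional hardware: every weight must be read for each pass, so computing inside the memory array removes most of that data movement. The layer sizes here are illustrative assumptions, not Metis specifics.

```python
import numpy as np

def matvec(weights: np.ndarray, x: np.ndarray) -> np.ndarray:
    """y[i] = sum_j W[i, j] * x[j] -- the core op of a neural-network layer.
    On a conventional architecture every element of W travels from memory
    to the compute units; D-IMC instead computes where W is stored."""
    return weights @ x

# Illustrative sizes (not Metis-specific): a single layer with 1024 outputs
# and 4096 inputs touches ~4M weights on every inference pass.
W = np.random.randn(1024, 4096).astype(np.float32)
x = np.random.randn(4096).astype(np.float32)
y = matvec(W, x)
```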