AI SOFTWARE

Metis AIPU Benchmarks

The Metis AIPU is a high-performance accelerator for Edge AI, delivering up to 5x faster inference for Computer Vision tasks. Beyond raw speed, the Voyager™ SDK optimizes the entire AI pipeline to ensure real-world application efficiency.

Voyager SDK is now live on GitHub.

Performance Results: Metis vs. Competition

Compared to other AI accelerators, Metis consistently comes out ahead in key benchmarks. The chart and table below show the frames per second (FPS) processed by Metis alongside the throughput of other AI accelerators.

[Chart: FPS measured across various networks, Metis vs. other AI accelerators]

[Table: FPS measured across various networks, Metis vs. other AI accelerators]

We’ve tested numerous benchmarks and offer over 50 models in our Model Zoo for immediate use. At Axelera AI, software is a top priority, and we continuously enhance our models and capabilities to simplify AI development and integration. Performance matters only when users can trust inference accuracy. Thanks to Metis’ mixed precision architecture and our SDK’s quantization, we achieve state-of-the-art accuracy.

The table below compares accuracy for various models running at full numerical precision (FP32) versus the same models running on Metis after quantization with the Voyager SDK. As shown, the accuracy reduction is minimal in many cases. Our software team remains committed to ongoing optimizations in future updates.

[Table: Accuracy at full precision (FP32) vs. Metis after Voyager SDK quantization, across various models]
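
As a rough illustration of what quantization does, and why a small accuracy drop can occur, the sketch below quantizes an FP32 weight matrix to INT8 and measures the error this introduces on a matrix-vector product. It is a conceptual NumPy example only and does not reflect the actual quantization scheme used by the Voyager SDK.

```python
import numpy as np

def quantize_int8(w: np.ndarray):
    """Symmetric per-tensor INT8 quantization: w is approximated by scale * q."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

# FP32 "layer" weights and a random input activation vector
rng = np.random.default_rng(0)
w_fp32 = rng.normal(size=(256, 512)).astype(np.float32)
x = rng.normal(size=512).astype(np.float32)

# Reference result at full precision vs. the dequantized INT8 result
y_fp32 = w_fp32 @ x
q, scale = quantize_int8(w_fp32)
y_int8 = (q.astype(np.float32) * scale) @ x

rel_err = np.linalg.norm(y_fp32 - y_int8) / np.linalg.norm(y_fp32)
print(f"relative error introduced by INT8 quantization: {rel_err:.4%}")
```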


Voyager™ SDK

AI hardware is only as good as its software. That's why we built the Voyager™ SDK, which lets developers get the most out of our high-performance hardware. Using a simple, high-level YAML-based language, developers can build computer vision pipelines that integrate multiple neural networks and complex image processing tasks.

The SDK automatically compiles, optimizes, and deploys pipelines, running neural networks on the Metis AIPU while offloading preprocessing and post-processing to the host CPU, GPU, or media accelerator. Thanks to our flexible architecture, developers can allocate D-IMC cores as needed—whether running multiple models in parallel or dedicating cores to a single, compute-heavy model.
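
To make the idea of a declaratively described pipeline concrete, here is a minimal sketch that parses a small YAML-style pipeline description and dispatches each stage to a stand-in execution target. The schema, stage names, and helper functions are purely illustrative assumptions, not the Voyager SDK's actual YAML format or API.

```python
import yaml  # PyYAML

# A hypothetical, simplified pipeline description in the spirit of a
# YAML-defined computer vision pipeline (not the actual Voyager schema).
PIPELINE_YAML = """
name: people-counter
stages:
  - {name: decode,      op: video-decode, target: host}
  - {name: preprocess,  op: resize-norm,  target: host}
  - {name: detect,      op: yolov5s,      target: aipu}
  - {name: postprocess, op: nms-tracking, target: host}
"""

def run_on_host(stage, frame):
    # Stand-in for work offloaded to the host CPU/GPU/media accelerator
    print(f"[host] {stage['name']}: {stage['op']}")
    return frame

def run_on_aipu(stage, frame):
    # Stand-in for neural-network inference running on the accelerator
    print(f"[aipu] {stage['name']}: {stage['op']}")
    return frame

DISPATCH = {"host": run_on_host, "aipu": run_on_aipu}

pipeline = yaml.safe_load(PIPELINE_YAML)
frame = object()  # placeholder for a decoded video frame
for stage in pipeline["stages"]:
    frame = DISPATCH[stage["target"]](stage, frame)
```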


Application-Level Performance

Running a Computer Vision application involves much more than running inference. At Axelera AI we believe it is important to understand the realized performance: how long it takes to get the answer the user is looking for, measured end to end. The Axelera AI Voyager SDK helps optimize the entire data pipeline, including the parts that run on the host CPU or embedded GPU. Why does this matter? Because the SDK handles that work for the developer, developers get a better experience and users get faster results.

[Table: End-to-end application performance (FPS) across various networks]

As the table shows, the Voyager SDK delivers the raw inference performance all the way through to the end-to-end application: by optimizing the execution of the non-neural operations in the computer vision pipeline, we ensure that the application can take full advantage of the unmatched capabilities of Metis.
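
As a minimal sketch of the difference between inference-only throughput and end-to-end throughput, the toy pipeline below is timed both ways. The stage durations are invented placeholders; the point is simply that the frame rate a user sees is set by the whole pipeline, not by inference alone.

```python
import time

def preprocess(frame):   time.sleep(0.004); return frame   # e.g. decode + resize (host)
def infer(frame):        time.sleep(0.002); return frame   # stand-in for accelerator inference
def postprocess(frame):  time.sleep(0.003); return frame   # e.g. NMS + drawing (host)

N = 200

# Inference-only throughput
t0 = time.perf_counter()
for _ in range(N):
    infer(None)
inference_only_fps = N / (time.perf_counter() - t0)

# End-to-end throughput: what the user actually experiences
t0 = time.perf_counter()
for _ in range(N):
    postprocess(infer(preprocess(None)))
end_to_end_fps = N / (time.perf_counter() - t0)

print(f"inference-only: {inference_only_fps:.0f} FPS")
print(f"end-to-end:     {end_to_end_fps:.0f} FPS")
```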

The Voyager SDK is compatible with a variety of host architectures and platforms to accommodate different application environments. Additionally, the SDK allows embedding a pipeline into an inference service, providing various preconfigured solutions for use cases ranging from fully embedded applications to distributed processing of multiple 4K streams.


State-of-the-Art Digital In-Memory Computing

Why is Metis so powerful? One of the key innovations that sets Metis apart from its competition is its use of Digital In-Memory Computing (D-IMC) technology. D-IMC performs computation directly within the memory cells that store the data, enabling extremely high-throughput, power-efficient matrix-vector multiplication. This approach is particularly beneficial for AI workloads, which require high-speed data access and intensive computation, all with an average power consumption below 10 watts.
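
For a concrete picture, the sketch below models a D-IMC tile at a purely conceptual level: INT8 weights stay "in place" and each output is accumulated as a dot product next to where the weights are stored, instead of shuttling data between separate memory and compute units. It illustrates the matrix-vector operation that D-IMC accelerates; it is not a description of the Metis hardware.

```python
import numpy as np

# Conceptual model of a digital in-memory-computing tile: weights are
# resident in the array, activations are streamed in, and each output
# row is accumulated where the weights live.
rng = np.random.default_rng(1)
weights = rng.integers(-127, 128, size=(64, 128), dtype=np.int8)   # stored in-memory
activations = rng.integers(-127, 128, size=128, dtype=np.int8)     # streamed in

# Per-row in-place accumulation, as a tile-style array would do it
acc = np.zeros(weights.shape[0], dtype=np.int32)
for row in range(weights.shape[0]):
    acc[row] = np.dot(weights[row].astype(np.int32), activations.astype(np.int32))

# Identical result to a plain matrix-vector multiply
assert np.array_equal(acc, weights.astype(np.int32) @ activations.astype(np.int32))
print(acc[:8])
```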
