Ground-breaking technology
At Axelera AI we are developing a game-changing AI hardware and software platform to accelerate computer vision on edge devices. Thanks to our proprietary in-memory computing and RISC-V controlled dataflow technology, our platform delivers high performance and usability at a fraction of the cost and power consumption of solutions available today.
Read on to learn more about our ground-breaking technology, or watch the video!
Digital in-memory computing
In-memory computing is a radically different approach to data processing: crossbar arrays of memory devices store a matrix and perform matrix-vector multiplications “in place”, without moving data back and forth between memory and compute units. Our proprietary Digital In-Memory Computing (D-IMC) technology is key to delivering high energy efficiency and outstanding performance. Based on SRAM (Static Random-Access Memory) combined with digital computation, each memory cell effectively becomes a compute element. This radically increases the number of operations per compute cycle (one multiplication and one accumulation per cycle, per memory cell) without suffering from issues such as noise or reduced accuracy.
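To make the “compute where the data lives” idea concrete, here is a purely illustrative Python sketch (not Axelera AI production code) that models a crossbar of SRAM-based compute cells: each cell multiplies its stored weight by the incoming activation and adds the product to its row accumulator in place, one multiplication and one accumulation per cell per cycle. The array sizes and values are invented for the example.

```python
import numpy as np

def crossbar_matvec(weights: np.ndarray, activations: np.ndarray) -> np.ndarray:
    """Toy model of an in-memory matrix-vector multiply.

    Each element of `weights` stands in for one SRAM-based compute cell:
    in a single "cycle" every cell multiplies its stored weight by the
    activation on its column and accumulates the product on its row,
    so the weights never leave the array.
    """
    rows, cols = weights.shape
    accumulators = np.zeros(rows, dtype=np.int64)
    for r in range(rows):
        for c in range(cols):
            # one multiplication and one accumulation per cell per cycle
            accumulators[r] += int(weights[r, c]) * int(activations[c])
    return accumulators

# Toy example: a 4x8 INT8 weight matrix and an 8-element activation vector.
rng = np.random.default_rng(0)
W = rng.integers(-128, 127, size=(4, 8))
x = rng.integers(-128, 127, size=8)
assert np.array_equal(crossbar_matvec(W, x), W @ x)
```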
RISC-V
The rise of collaborative open-source technology, such as RISC-V, has been one of the biggest technology changes in recent years. By using an industry-standard Instruction Set Architecture (ISA) based on established RISC principles, Axelera AI both contributes to and benefits from the significant investments in the growing RISC-V ecosystem. This has already seen TensorFlow Lite ported onto a RISC-V processor core for Edge AI applications, including sensor data evaluation, gesture control and vibration analysis.
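As a hypothetical example of the kind of workload mentioned above, the Python sketch below runs a TensorFlow Lite model through the standard tflite_runtime interpreter. The model file name, input data and shapes are placeholders, and this is a generic illustration rather than Axelera AI's own software stack.

```python
import numpy as np
from tflite_runtime.interpreter import Interpreter  # or tf.lite.Interpreter

# Hypothetical model file; the path and task are assumptions for the example.
interpreter = Interpreter(model_path="vibration_classifier.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed one window of sensor samples shaped to match the model's input tensor.
sensor_window = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], sensor_window)
interpreter.invoke()

scores = interpreter.get_tensor(output_details[0]["index"])
print("class scores:", scores)
```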
Standard CMOS processing
A major advantage of Axelera AI's systems is that their accelerator technology is implemented in standard CMOS technology. Our SRAM-based D-IMC design uses proven, cost-effective and easily accessible materials and manufacturing processes that are readily available to foundries. Memory technologies are also a key driver for smaller lithography nodes, so Axelera AI will be able to scale performance easily as the semiconductor industry brings advanced lithography nodes into volume production.
Neural Network optimization
One of the biggest challenges for Edge AI is optimizing neural networks to run efficiently when ported onto a mixed-precision accelerator. Our technology includes proven quantization techniques and mapping tools that significantly reduce the AI computational load and increase energy efficiency. Employing a generic quantization flow ensures our systems can be applied to a wide range of networks while minimizing accuracy loss. In fact, compared to a ResNet-50 neural network model running in 32-bit floating point (FP32), Axelera AI's post-training quantization technique achieves a relative accuracy of 99.9%.
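For readers unfamiliar with quantization, the short Python sketch below shows the basic idea behind post-training quantization: mapping FP32 weights to INT8 with a single scale factor and checking how small the resulting error is. It is a simplified, generic illustration, not Axelera AI's actual quantization flow, and all tensor sizes and values are invented.

```python
import numpy as np

def quantize_int8(tensor: np.ndarray) -> tuple[np.ndarray, float]:
    """Symmetric post-training quantization of an FP32 tensor to INT8."""
    scale = np.max(np.abs(tensor)) / 127.0
    q = np.clip(np.round(tensor / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Map the INT8 values back to FP32 to compare against the original."""
    return q.astype(np.float32) * scale

# Toy FP32 "layer weights" standing in for one layer of a network such as ResNet-50.
weights_fp32 = np.random.default_rng(1).normal(0.0, 0.05, size=(64, 128)).astype(np.float32)
q, scale = quantize_int8(weights_fp32)
weights_roundtrip = dequantize(q, scale)

# The quantization error is small relative to the weights themselves.
rel_error = np.linalg.norm(weights_fp32 - weights_roundtrip) / np.linalg.norm(weights_fp32)
print(f"relative quantization error: {rel_error:.4%}")
```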