Mythic has announced the M1108 AMP AI accelerator, a compute-in-memory chip built on a 40 nm process and billed as the industry’s first Analog Matrix Processor (AMP). Describing the architecture, Mythic says: “Mythic AMPs are designed as an array of compute tiles. At the heart of each AMP tile is the Mythic Analog Compute Engine (Mythic ACE™), which integrates a flash memory array and ADCs that combine to store the model parameters and perform low-power, high-performance matrix multiplication. Each Mythic ACE is complemented by a digital subsystem that includes a 32-bit RISC-V nano processor, SIMD vector engine, 64KB of SRAM, and a high-throughput network-on-chip (NoC) router. The result is an Analog Matrix Processor that delivers power-efficient AI inference at up to 35 TOPS. Edge devices can now deploy powerful AI models without the challenges of high power consumption, thermal management, and form-factor constraints.”
The M1108 integrates an array of flash cells, ADCs, 32-bit RISC-V nano-processors, SIMD vector engines, SRAM, and a high-throughput network-on-chip (NoC) router. Its 108 AMP tiles deliver up to 35 trillion operations per second (TOPS), enough to run ResNet-50 at up to 870 fps. This lets the M1108 carry out power-efficient execution of complex AI models such as ResNet-50, YOLOv3, and OpenPose Body25. The M1108 can pair with host processors including Intel x86, NXP iMX8, NVIDIA Jetson, and Qualcomm RB5, and consumes only about 4 W when running AI models at peak. The chip also ships with powerful pre-loaded models for common AI use cases, including an object detector and classifier, a human pose estimator, and an image segmenter, to name a few.
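A quick back-of-the-envelope calculation puts these figures in perspective. The per-watt numbers below are derived from the quoted specs (35 TOPS and ResNet-50 at 870 fps, at roughly 4 W peak), not published by Mythic:

```python
# Efficiency figures derived from the quoted M1108 specs:
# 35 TOPS peak, ~4 W at peak, ResNet-50 at up to 870 fps.
peak_tops = 35       # trillion operations per second
peak_watts = 4       # approximate power draw at peak
resnet50_fps = 870   # quoted ResNet-50 throughput

tops_per_watt = peak_tops / peak_watts
fps_per_watt = resnet50_fps / peak_watts

print(f"{tops_per_watt:.2f} TOPS/W")  # 8.75 TOPS/W
print(f"{fps_per_watt:.1f} fps/W")    # 217.5 fps/W
```

Roughly 8.75 TOPS/W is the kind of efficiency that makes thermally constrained edge enclosures practical without active cooling.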
The M1108 M.2 card (22 mm x 80 mm) has a tiny footprint, which makes it easy to integrate into many different systems. The M.2 card is well suited to deep neural network (DNN) workloads and can execute multiple DNNs simultaneously. It also features 4-lane PCIe 2.1 for up to 2 GB/s of bandwidth, with no external DRAM required. There is also a PCIe evaluation card (156 mm x 121 mm) that lets you evaluate Mythic’s high-performance, power-efficient AI inference solution for edge devices and servers. The AI workflow supports PyTorch, TensorFlow 2.0, and Caffe.
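The 2 GB/s figure follows directly from the PCIe 2.x link parameters (5 GT/s per lane with 8b/10b line encoding), as this short sanity check shows:

```python
# Sanity check of the quoted 2 GB/s bandwidth for 4-lane PCIe 2.1.
# PCIe 2.x signals at 5 GT/s per lane; 8b/10b encoding means only
# 8 of every 10 transferred bits are payload.
lanes = 4
raw_rate_per_lane = 5e9       # 5 GT/s raw signaling rate (PCIe 2.x)
encoding_efficiency = 8 / 10  # 8b/10b line encoding overhead

payload_bytes_per_sec = lanes * raw_rate_per_lane * encoding_efficiency / 8
print(f"{payload_bytes_per_sec / 1e9:.1f} GB/s")  # 2.0 GB/s
```

Each lane therefore carries 500 MB/s of payload, and four lanes add up to the quoted 2 GB/s.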