Answer
The i.MX 95 integrates the eIQ Neutron NPU, delivering up to 2.0 TOPS of neural-network inference throughput (nominal 800 MHz, overdrive 1.0 GHz). The NPU includes 1 MB of embedded SRAM that can be repurposed by the SoC when ML acceleration is not active. For embedded engineers evaluating an i.MX 95 SOM for edge AI, this means object detection, image classification, or anomaly-detection models can run locally without a discrete accelerator chip, reducing BOM cost, board area, and thermal load. NXP's eIQ toolkit supports TensorFlow Lite, ONNX, and PyTorch workflows.
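As a concrete illustration of the TensorFlow Lite path, the minimal sketch below loads a quantized model through an external delegate so that supported operators are offloaded to the NPU. The delegate library path (`libneutron_delegate.so`) and the model filename are assumptions for illustration, not names confirmed by this page; check the eIQ/BSP documentation for the delegate actually shipped with your i.MX 95 image.

```python
# Minimal sketch: TFLite inference with an external NPU delegate on i.MX 95.
# Paths below are assumptions — substitute the delegate and model from your BSP.
import numpy as np
from tflite_runtime.interpreter import Interpreter, load_delegate

MODEL_PATH = "mobilenet_v2_quant.tflite"           # hypothetical quantized model
DELEGATE_PATH = "/usr/lib/libneutron_delegate.so"  # assumed delegate library name

# Load the external delegate so supported ops run on the NPU;
# unsupported ops fall back to the CPU automatically.
delegate = load_delegate(DELEGATE_PATH)
interpreter = Interpreter(model_path=MODEL_PATH,
                          experimental_delegates=[delegate])
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()[0]
output_details = interpreter.get_output_details()[0]

# Feed a dummy frame matching the model's expected shape and dtype.
dummy = np.zeros(input_details["shape"], dtype=input_details["dtype"])
interpreter.set_tensor(input_details["index"], dummy)
interpreter.invoke()

scores = interpreter.get_tensor(output_details["index"])
print("Top class index:", int(np.argmax(scores)))
```

The same model can be benchmarked with and without the `experimental_delegates` argument to compare NPU-accelerated and CPU-only latency on the module.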