SiFive launches XM Series for accelerating AI workloads

By ElectronicsWeekly.com

SiFive has announced the SiFive Intelligence XM Series, designed to accelerate high-performance AI workloads.

This is the first IP from SiFive to include a scalable AI matrix engine, which accelerates time to market for semiconductor companies building system-on-chip solutions for edge IoT, consumer devices, next-generation electric and/or autonomous vehicles, data centers, and beyond.

SiFive also announced its intention to open source a reference implementation of its SiFive Kernel Library (SKL).

By integrating scalar, vector, and matrix engines, the XM Series lets customers make very efficient use of memory bandwidth, SiFive claims. The XM Series also continues SiFive's legacy of offering extremely high performance per watt for compute-intensive applications.

“RISC-V was originally developed to efficiently support specialized computing engines including mixed-precision operations,” said Krste Asanovic, SiFive Founder and Chief Architect. “This, coupled with the inclusion of efficient vector instructions and the support of specialized AI extensions, is why many of the largest datacenter companies have already adopted RISC-V AI accelerators.”

Each XM Series cluster features four X-Cores and can deliver 16 TOPS (INT8) or 8 TFLOPS (BF16) per GHz, with 1TB/s of sustained memory bandwidth per cluster. Clusters can access memory via a high-bandwidth port or via a CHI port for coherent memory access. SiFive envisions systems built with no host CPU at all, or around a RISC-V, x86 or Arm host.
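For context, a minimal back-of-the-envelope sketch of how those per-GHz cluster ratings scale; the 2 GHz clock and four-cluster configuration below are illustrative assumptions, not figures quoted by SiFive:

```python
# Scale SiFive's published per-GHz XM Series cluster ratings to a given
# clock frequency and cluster count. Clock and cluster count are assumptions.

INT8_TOPS_PER_GHZ = 16   # per XM Series cluster (from the announcement)
BF16_TFLOPS_PER_GHZ = 8  # per XM Series cluster (from the announcement)

def cluster_throughput(clock_ghz: float, clusters: int = 1) -> dict:
    """Return peak INT8/BF16 throughput for a given clock and cluster count."""
    return {
        "INT8 TOPS": INT8_TOPS_PER_GHZ * clock_ghz * clusters,
        "BF16 TFLOPS": BF16_TFLOPS_PER_GHZ * clock_ghz * clusters,
    }

# Example: four clusters at an assumed 2 GHz clock.
print(cluster_throughput(clock_ghz=2.0, clusters=4))
# {'INT8 TOPS': 128.0, 'BF16 TFLOPS': 64.0}
```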

More at: AIsolutions@sifive.com.