GAMA: High-Performance GEMM Acceleration on AMD Versal ML-Optimized AI Engines
General matrix-matrix multiplication (GEMM) is a fundamental operation in machine learning (ML) applications. We present the first comprehensive performance acceleration of GEMM workloads on AMD's second-generation AIE-ML (AIE2) architecture, which is specifically optimized for ML applications. Compared to the first-generation AI Engine (AIE1), AIE2 offers increased compute throughput and larger on-chip memory capacity. We propose a novel design that maximizes AIE2 memory utilization and incorporates custom buffer placement within each AIE2 tile as well as staggered kernel placement across the AIE2 array. This significantly reduces performance bottlenecks such as memory stalls and routing congestion, yielding improved performance and efficiency over the default AMD compiler. We evaluate the performance benefits of our design at three levels: a single AIE, a pack of AIEs, and the complete AIE array. GAMA achieves state-of-the-art performance, delivering up to 165 TOPS (85% of peak) for int8 precision and 83 TBFLOPS (86% of peak) for bfloat16 precision GEMM workloads. Our solution achieves 8.7%, 9%, 39%, and 53.6% higher peak throughput efficiency than the state-of-the-art AIE1 frameworks AMA, MAXEVA, ARIES, and CHARM, respectively.
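To make the central operation concrete, the following is a minimal illustrative sketch (not from the paper) of blocked GEMM, the decomposition that tile-array accelerators such as the AIE2 rely on: the matrix product C = A × B is split into sub-blocks so each kernel operates on data resident in local tile memory. The tile size and int8-to-int32 accumulation here are hypothetical choices for illustration, not GAMA's actual tiling parameters.

```python
import numpy as np

def blocked_gemm(A, B, tile=4):
    """Blocked GEMM: C = A @ B computed tile by tile.

    Each tile-level multiply-accumulate is the unit of work that
    would map to one kernel invocation on an AIE-style array
    (tile size here is an arbitrary illustrative choice).
    """
    M, K = A.shape
    K2, N = B.shape
    assert K == K2, "inner dimensions must match"
    # int8 inputs accumulate into a wider int32 result, as is
    # typical for integer GEMM on ML accelerators.
    C = np.zeros((M, N), dtype=np.int32)
    for i in range(0, M, tile):
        for j in range(0, N, tile):
            for k in range(0, K, tile):
                C[i:i+tile, j:j+tile] += (
                    A[i:i+tile, k:k+tile].astype(np.int32)
                    @ B[k:k+tile, j:j+tile].astype(np.int32)
                )
    return C

# Check the blocked result against a direct full-precision product.
rng = np.random.default_rng(0)
A = rng.integers(-128, 128, (8, 8), dtype=np.int8)
B = rng.integers(-128, 128, (8, 8), dtype=np.int8)
assert np.array_equal(blocked_gemm(A, B),
                      A.astype(np.int32) @ B.astype(np.int32))
```

The paper's contribution lies not in this decomposition itself but in how the buffers for each tile are placed in on-chip memory and how kernels are staggered across the array to avoid stalls and routing congestion.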
Kaustubh Mhatre, Endri Taka, Aman Arora
Subject: Computing Technology, Computer Technology
Kaustubh Mhatre, Endri Taka, Aman Arora. GAMA: High-Performance GEMM Acceleration on AMD Versal ML-Optimized AI Engines [EB/OL]. (2025-04-13) [2025-05-19]. https://arxiv.org/abs/2504.09688