A Few Fit Most: Improving Performance Portability of SGEMM on GPUs using Multi-Versioning
Hand-optimizing linear algebra kernels for different GPU devices and applications is complex and labor-intensive. Instead, many developers use automatic performance tuning (autotuning) to achieve high performance on a variety of devices. However, autotuning "overfits", and must be redone if any part of the environment changes, such as if the device or input characteristics change. In most non-trivial cases, a single compute kernel cannot maintain near-optimal performance across all environments. Changing the kernel to specialize it to the current execution environment is possible, but on GPUs, runtime tuning and compilation can be expensive. In this work, we use multi-versioning -- producing several variants of the same code -- as a way to generate performance portable code. We describe a framework called portability tuning that can automatically generate multi-versioned code whose performance is portable, requiring no retuning. We evaluate our framework on a dataset of execution times for GEMM kernels from the CLBlast linear algebra library. We find our portability tuning techniques outperform CLBlast's default kernels -- often approaching within 10% of the theoretical maximum performance -- despite CLBlast using autotuning techniques. Further, we find that our generated programs generalize well to new and unseen devices, matching the performance of autotuning without ever portability tuning for those devices.
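The core idea of multi-versioning — keeping a small set of pre-tuned kernel variants and dispatching to one per input, rather than retuning for every new environment — can be illustrated with a toy sketch. Everything here is hypothetical (the tile sizes, the dispatch rule, and the naive tiled GEMM) and is not taken from the paper's framework or from CLBlast:

```python
# Minimal sketch of multi-versioning: keep a few pre-tuned kernel
# variants and select one per input, instead of retuning each time.
# Tile sizes and the selection rule are illustrative assumptions only.

def make_gemm_variant(tile):
    """Return a naive tiled SGEMM (C = A @ B) specialized for one tile size."""
    def gemm(A, B):
        n, k = len(A), len(B)
        m = len(B[0])
        C = [[0.0] * m for _ in range(n)]
        for ii in range(0, n, tile):          # tile over rows of A
            for jj in range(0, m, tile):      # tile over columns of B
                for kk in range(0, k, tile):  # tile over shared dimension
                    for i in range(ii, min(ii + tile, n)):
                        for j in range(jj, min(jj + tile, m)):
                            acc = C[i][j]
                            for p in range(kk, min(kk + tile, k)):
                                acc += A[i][p] * B[p][j]
                            C[i][j] = acc
        return C
    return gemm

# "A few fit most": a small variant set covers the input space.
VARIANTS = {8: make_gemm_variant(8), 32: make_gemm_variant(32)}

def select_variant(n):
    """Toy dispatch: small problems get the small tile, large get the big one."""
    return VARIANTS[8] if n < 64 else VARIANTS[32]
```

In a real GPU setting the variants would be separately compiled kernels with different tiling, vectorization, and work-group parameters, and the dispatch decision would be made once per device/input class rather than recomputed per call.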
Robert Hochgraf, Sreepathi Pai
Computing Technology, Computer Technology
Robert Hochgraf, Sreepathi Pai. A Few Fit Most: Improving Performance Portability of SGEMM on GPUs using Multi-Versioning [EB/OL]. (2025-07-21) [2025-08-18]. https://arxiv.org/abs/2507.15277.