MultiKernelBench: A Multi-Platform Benchmark for Kernel Generation
The automatic generation of deep learning (DL) kernels using large language models (LLMs) has emerged as a promising approach to reduce the manual effort and hardware-specific expertise required for writing high-performance operator implementations. However, existing benchmarks for evaluating LLMs in this domain suffer from limited hardware support, coarse-grained kernel categorization, and imbalanced task coverage. To address these limitations, we introduce MultiKernelBench, the first comprehensive, multi-platform benchmark for LLM-based DL kernel generation. MultiKernelBench spans 285 tasks across 14 well-defined kernel categories and supports three major hardware platforms: Nvidia GPUs, Huawei NPUs, and Google TPUs. To enable future extensibility, we design a modular backend abstraction layer that decouples platform-specific logic from the core benchmarking infrastructure, allowing easy integration of new hardware platforms. We further propose a simple yet effective category-aware one-shot prompting method that improves generation quality by providing in-category exemplars. Through systematic evaluations of seven state-of-the-art LLMs, we reveal significant variation in task difficulty, poor generalization to platforms with less training exposure, and the effectiveness of targeted prompting strategies. MultiKernelBench is publicly available at https://github.com/wzzll123/MultiKernelBench.
Zhongzhen Wen, Yinghui Zhang, Zhong Li, Zhongxin Liu, Linna Xie, Tian Zhang
Computing Technology, Computer Technology
Zhongzhen Wen, Yinghui Zhang, Zhong Li, Zhongxin Liu, Linna Xie, Tian Zhang. MultiKernelBench: A Multi-Platform Benchmark for Kernel Generation [EB/OL]. (2025-07-26) [2025-08-10]. https://arxiv.org/abs/2507.17773.