National Preprint Platform (国家预印本平台)

Morello: Compiling Fast Neural Networks with Dynamic Programming and Spatial Compression


Source: arXiv

Abstract

High-throughput neural network inference requires coordinating many optimization decisions, including parallel tiling, microkernel selection, and data layout. The product of these decisions forms a search space of programs which is typically intractably large. Existing approaches (e.g., auto-schedulers) often address this problem by sampling this space heuristically. In contrast, we introduce a dynamic-programming-based approach to explore more of the search space by iteratively decomposing large program specifications into smaller specifications reachable from a set of rewrites, then composing a final program from each rewrite that minimizes an affine cost model. To reduce memory requirements, we employ a novel memoization table representation, which indexes specifications by coordinates in $Z_{\geq 0}$ and compresses identical, adjacent solutions. This approach can visit a much larger set of programs than prior work. To evaluate the approach, we developed Morello, a compiler which lowers specifications roughly equivalent to a few-node XLA computation graph to x86. Notably, we found that an affine cost model is sufficient to surface high-throughput programs. For example, Morello synthesized a collection of matrix multiplication benchmarks targeting a Zen 1 CPU, including a 1x2048x16384, bfloat16-to-float32 vector-matrix multiply, which was integrated into Google's gemma.cpp.
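The abstract's "spatial compression" idea, a memoization table indexed by coordinates in $Z_{\geq 0}$ that merges identical, adjacent solutions, can be illustrated with a run-length-style table. The following is a minimal, hypothetical sketch (not Morello's actual implementation) for one dimension, assuming the dynamic-programming sweep fills coordinates in increasing order; the class and method names are invented for illustration.

```python
import bisect

class CompressedMemoTable:
    """Maps 1-D coordinates in Z>=0 to solutions, merging equal neighbors.

    Instead of one entry per coordinate, store one entry per run of
    adjacent coordinates that share the same optimal solution.
    """

    def __init__(self):
        self.starts = []     # run start coordinates, strictly increasing
        self.solutions = []  # the solution shared by each run

    def put(self, coord, solution):
        # Assumes coords arrive in increasing order, as in a DP sweep.
        if self.solutions and self.solutions[-1] == solution:
            return           # identical to the adjacent run: compressed away
        self.starts.append(coord)
        self.solutions.append(solution)

    def get(self, coord):
        # Find the run whose start is the largest one <= coord.
        i = bisect.bisect_right(self.starts, coord) - 1
        if i < 0:
            raise KeyError(coord)
        return self.solutions[i]

# Example: eight specifications, but only two distinct solutions, so the
# table stores two runs rather than eight entries.
table = CompressedMemoTable()
for c in range(5):
    table.put(c, "microkernel-A")
for c in range(5, 8):
    table.put(c, "microkernel-B")
```

In Morello's setting the coordinates are multi-dimensional (one axis per specification parameter), but the space saving comes from the same observation: neighboring problem sizes frequently share an optimal sub-program, so adjacent table cells collapse into one.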

Samuel J. Kaufman, René Just, Rastislav Bodik

Subject areas: Computing Technology, Computer Technology

Samuel J. Kaufman, René Just, Rastislav Bodik. Morello: Compiling Fast Neural Networks with Dynamic Programming and Spatial Compression [EB/OL]. (2025-05-02) [2025-06-14]. https://arxiv.org/abs/2505.01637.
