
Multi-objective Optimization in CPU Design Space Exploration: Attention is All You Need


Source: arXiv
English Abstract

Design Space Exploration (DSE) is essential to modern CPU design, yet current frameworks struggle to scale and generalize in high-dimensional architectural spaces. As the dimensionality of design spaces continues to grow, existing DSE frameworks face three fundamental challenges: (1) reduced accuracy and poor scalability of surrogate models in large design spaces; (2) inefficient acquisition guided by hand-crafted heuristics or exhaustive search; (3) limited interpretability, making it hard to pinpoint architectural bottlenecks. In this work, we present \textbf{AttentionDSE}, the first end-to-end DSE framework that \emph{natively integrates} performance prediction and design guidance through an attention-based neural architecture. Unlike traditional DSE workflows that separate surrogate modeling from acquisition and rely heavily on hand-crafted heuristics, AttentionDSE establishes a unified, learning-driven optimization loop, in which attention weights serve a dual role: enabling accurate performance estimation and simultaneously exposing performance bottlenecks. This paradigm shift elevates attention from a passive representation mechanism to an active, interpretable driver of design decision-making. Key innovations include: (1) a \textbf{Perception-Driven Attention} mechanism that exploits architectural hierarchy and locality, scaling attention complexity from $\mathcal{O}(n^2)$ to $\mathcal{O}(n)$ via sliding windows; (2) an \textbf{Attention-aware Bottleneck Analysis} that automatically surfaces critical parameters for targeted optimization, eliminating the need for domain-specific heuristics. Evaluated on a high-dimensional CPU design space using the SPEC CPU2017 benchmark suite, AttentionDSE achieves up to \textbf{3.9\% higher Pareto Hypervolume} and over \textbf{80\% reduction in exploration time} compared to state-of-the-art baselines.
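The two mechanisms named in the abstract lend themselves to a compact illustration. The sketch below is hypothetical and not code from the paper: it applies sliding-window self-attention over per-parameter embeddings, so the cost scales with the window size $w$ as $\mathcal{O}(n \cdot w)$ rather than $\mathcal{O}(n^2)$, and then ranks parameters by the attention mass they receive as a stand-in for attention-aware bottleneck analysis. All function names, shapes, parameter labels, and the window size are illustrative assumptions.

```python
# Hypothetical sketch (not from the paper): sliding-window self-attention over
# CPU design-parameter embeddings, plus a simple attention-weight ranking that
# illustrates the "attention as bottleneck indicator" idea.
import numpy as np

def sliding_window_attention(x, w=4):
    """Self-attention restricted to a local window of +/- w neighbours.

    x: (n, d) array of per-parameter embeddings, ordered so that related
       micro-architectural parameters (e.g. same pipeline stage) are adjacent.
    Returns the attended embeddings and the (n, n) sparse attention matrix.
    Cost is O(n * w) instead of O(n^2) for full attention.
    """
    n, d = x.shape
    out = np.zeros_like(x)
    attn = np.zeros((n, n))
    for i in range(n):
        lo, hi = max(0, i - w), min(n, i + w + 1)
        scores = x[lo:hi] @ x[i] / np.sqrt(d)   # local dot-product scores
        weights = np.exp(scores - scores.max())
        weights /= weights.sum()                 # softmax over the window only
        attn[i, lo:hi] = weights
        out[i] = weights @ x[lo:hi]
    return out, attn

def rank_bottlenecks(attn, param_names, k=3):
    """Rank parameters by total attention mass received (a proxy for impact)."""
    importance = attn.sum(axis=0)
    order = np.argsort(importance)[::-1][:k]
    return [(param_names[i], float(importance[i])) for i in order]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    params = ["ROB size", "L1D size", "L2 size", "issue width",
              "LSQ size", "branch predictor", "fetch width", "ALU count"]
    embeddings = rng.normal(size=(len(params), 16))  # stand-in learned embeddings
    _, attn = sliding_window_attention(embeddings, w=2)
    print(rank_bottlenecks(attn, params))
```

In an actual DSE loop, the ranking step would point the acquisition policy at the few parameters most responsible for the predicted bottleneck, rather than perturbing the full configuration; the paper's framework integrates both roles into a single trained model.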

Hao Wu, Mingyu Yan, Ziheng Xiao, Guangyu Sun, Xiaochun Ye, Dongrui Fan, Runzhen Xue

Subject: Computing Technology; Computer Technology

Hao Wu, Mingyu Yan, Ziheng Xiao, Guangyu Sun, Xiaochun Ye, Dongrui Fan, Runzhen Xue. Multi-objective Optimization in CPU Design Space Exploration: Attention is All You Need [EB/OL]. (2025-08-14) [2025-08-24]. https://arxiv.org/abs/2410.18368
