LLM-DSE: Searching Accelerator Parameters with LLM Agents
Even though high-level synthesis (HLS) tools mitigate the challenges of programming domain-specific accelerators (DSAs) by raising the abstraction level, optimizing hardware directive parameters remains a significant hurdle. Existing heuristic and learning-based methods struggle with adaptability and sample efficiency. We present LLM-DSE, a multi-agent framework designed specifically for optimizing HLS directives. Combining LLMs with design space exploration (DSE), our explorer coordinates four agents: Router, Specialists, Arbitrator, and Critic. These multi-agent components interact with various tools to accelerate the optimization process. LLM-DSE leverages essential domain knowledge to identify efficient parameter combinations while maintaining adaptability through verbal learning from online interactions. Evaluations on the HLSyn dataset demonstrate that LLM-DSE achieves substantial $2.55\times$ performance gains over state-of-the-art methods, uncovering novel designs while reducing runtime. Ablation studies validate the effectiveness and necessity of the proposed agent interactions. Our code is open-sourced here: https://github.com/Nozidoali/LLM-DSE.
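The abstract names the four agents (Router, Specialists, Arbitrator, Critic) but does not specify their interfaces; the sketch below is a hypothetical Python illustration of how such an agent loop around an HLS evaluation tool could be organized. All class, function, and parameter names here are assumptions for illustration, not the authors' API.

```python
# Hypothetical sketch of a Router/Specialists/Arbitrator/Critic loop for
# HLS directive DSE. Names and interfaces are illustrative assumptions,
# not the LLM-DSE implementation.
from dataclasses import dataclass
from typing import Callable, Dict, List

Design = Dict[str, int]  # directive name -> parameter value (e.g., unroll factor)


@dataclass
class Proposal:
    design: Design
    rationale: str  # verbal justification an LLM agent would return


def router(design: Design) -> List[str]:
    """Pick which directive families to focus on this round (stand-in for an LLM call)."""
    return ["unroll", "pipeline", "tile"]


def specialist(family: str, design: Design) -> Proposal:
    """Propose a tweak within one directive family (stand-in for an LLM specialist)."""
    new_design = dict(design)
    new_design[family] = new_design.get(family, 1) * 2  # toy move: double the factor
    return Proposal(new_design, f"increase {family} to exploit parallelism")


def arbitrator(proposals: List[Proposal], evaluate: Callable[[Design], float]) -> Proposal:
    """Select the proposal to commit, e.g., by estimated latency from an HLS tool."""
    return min(proposals, key=lambda p: evaluate(p.design))


def critic(lessons: List[str], proposal: Proposal, score: float) -> None:
    """Record verbal feedback so later rounds can learn from online interactions."""
    lessons.append(f"{proposal.rationale} -> latency {score:.1f}")


def explore(evaluate: Callable[[Design], float], rounds: int = 5) -> Design:
    """Iterate the agent loop: route, propose, arbitrate, and critique each round."""
    design: Design = {}
    lessons: List[str] = []
    for _ in range(rounds):
        proposals = [specialist(f, design) for f in router(design)]
        best = arbitrator(proposals, evaluate)
        critic(lessons, best, evaluate(best.design))
        design = best.design
    return design


if __name__ == "__main__":
    # Toy cost model standing in for an HLS run: smaller latency is better.
    toy_latency = lambda d: 1000.0 / (1 + sum(d.values()))
    print(explore(toy_latency))
```

In the paper's setting, the stand-in functions would be LLM calls and the toy cost model would be replaced by actual HLS tool feedback; the Critic's recorded lessons correspond to the verbal learning from online interactions described above.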
Hanyu Wang, Xinrui Wu, Zijian Ding, Su Zheng, Chengyue Wang, Tony Nowatzki, Yizhou Sun, Jason Cong
Computing technology; computer technology
Hanyu Wang, Xinrui Wu, Zijian Ding, Su Zheng, Chengyue Wang, Tony Nowatzki, Yizhou Sun, Jason Cong. LLM-DSE: Searching Accelerator Parameters with LLM Agents [EB/OL]. (2025-05-17) [2025-06-23]. https://arxiv.org/abs/2505.12188.