
DiffAxE: Diffusion-driven Hardware Accelerator Generation and Design Space Exploration


Source: arXiv

Abstract

Design space exploration (DSE) is critical for developing optimized hardware architectures, especially for AI workloads such as deep neural networks (DNNs) and large language models (LLMs), which require specialized acceleration. As model complexity grows, accelerator design spaces have expanded to O(10^17), becoming highly irregular, non-convex, and exhibiting many-to-one mappings from design configurations to performance metrics. This complexity renders direct inverse derivation infeasible and necessitates heuristic or sampling-based optimization. Conventional methods - including Bayesian optimization, gradient descent, reinforcement learning, and genetic algorithms - depend on iterative sampling, resulting in long runtimes and sensitivity to initialization. Deep learning-based approaches have reframed DSE as classification using recommendation models, but remain limited to small-scale (O(10^3)), less complex design spaces. To overcome these constraints, we propose a generative approach that models hardware design as 1-D image synthesis conditioned on target performance, enabling efficient learning of non-differentiable, non-bijective hardware-performance mappings. Our framework achieves 0.86% lower generation error than Bayesian optimization with a 17000x speedup, and outperforms GANDSE with 30% lower error at only 1.83x slower search. We further extend the method to a structured DSE setting, attaining 9.8% lower energy-delay product (EDP) and 6% higher performance, with up to 145.6x and 1312x faster search compared to existing optimization methods on O(10^17) design spaces. For LLM inference, our method achieves 3.37x and 7.75x lower EDP on a 32nm ASIC and Xilinx Ultrascale+ VPU13 FPGA, respectively, compared to the state-of-the-art DOSA framework.
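The abstract frames accelerator generation as conditional 1-D synthesis: a diffusion model learns to produce a vector of hardware design parameters given a target performance specification, sidestepping iterative sampling over a non-differentiable, many-to-one design space. Below is a minimal, hedged sketch of one plausible realization of that idea as a DDPM-style conditional diffusion model over a normalized design vector. The dimensions (DESIGN_DIM, COND_DIM), the MLP denoiser, and all hyperparameters are illustrative assumptions and are not taken from the DiffAxE paper or any released code.

```python
# Illustrative sketch: performance-conditioned diffusion over a 1-D vector of
# accelerator design parameters. All names and sizes are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

DESIGN_DIM = 16   # assumed: normalized design knobs (PE array size, buffer sizes, ...)
COND_DIM = 3      # assumed: target metrics, e.g. (latency, energy, area), normalized
T = 1000          # number of diffusion steps

# Standard linear beta schedule.
betas = torch.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)

class CondDenoiser(nn.Module):
    """Predicts the noise added to a design vector, given timestep and target metrics."""
    def __init__(self, d=DESIGN_DIM, c=COND_DIM, hidden=256):
        super().__init__()
        self.t_embed = nn.Sequential(nn.Linear(1, hidden), nn.SiLU(), nn.Linear(hidden, hidden))
        self.c_embed = nn.Sequential(nn.Linear(c, hidden), nn.SiLU(), nn.Linear(hidden, hidden))
        self.net = nn.Sequential(
            nn.Linear(d + 2 * hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, d),
        )

    def forward(self, x_t, t, cond):
        t_feat = self.t_embed(t.float().unsqueeze(-1) / T)
        c_feat = self.c_embed(cond)
        return self.net(torch.cat([x_t, t_feat, c_feat], dim=-1))

def train_step(model, opt, x0, cond):
    """One DDPM training step: noise a real design at a random timestep, predict the noise."""
    b = x0.shape[0]
    t = torch.randint(0, T, (b,))
    noise = torch.randn_like(x0)
    ab = alpha_bars[t].unsqueeze(-1)
    x_t = ab.sqrt() * x0 + (1 - ab).sqrt() * noise
    loss = F.mse_loss(model(x_t, t, cond), noise)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

@torch.no_grad()
def generate(model, cond, n=1):
    """Ancestral sampling: start from Gaussian noise and denoise toward a design
    vector whose performance should match the conditioning target (shape (1, COND_DIM))."""
    x = torch.randn(n, DESIGN_DIM)
    for step in reversed(range(T)):
        t = torch.full((n,), step, dtype=torch.long)
        eps = model(x, t, cond.expand(n, -1))
        a, ab, beta = alphas[step], alpha_bars[step], betas[step]
        mean = (x - (beta / (1 - ab).sqrt()) * eps) / a.sqrt()
        x = mean + beta.sqrt() * torch.randn_like(x) if step > 0 else mean
    return x  # de-normalize / round to legal design-space values downstream
```

In such a setup, training data would come from (design configuration, measured or simulated performance) pairs; at search time the model is conditioned on the desired performance target and the generated vector is decoded back to discrete hardware parameters, which is one way a single forward sampling pass could replace iterative optimization.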

Arkapravo Ghosh, Abhishek Moitra, Abhiroop Bhattacharjee, Ruokai Yin, Priyadarshini Panda

Subject: Computing Technology, Computer Technology

Arkapravo Ghosh, Abhishek Moitra, Abhiroop Bhattacharjee, Ruokai Yin, Priyadarshini Panda. DiffAxE: Diffusion-driven Hardware Accelerator Generation and Design Space Exploration [EB/OL]. (2025-08-14) [2025-08-24]. https://arxiv.org/abs/2508.10303
