National Preprint Platform
Leveraging Application-Specific Knowledge for Energy-Efficient Deep Learning Accelerators on Resource-Constrained FPGAs

Source: arXiv
English Abstract

The growing adoption of Deep Learning (DL) applications in the Internet of Things has increased the demand for energy-efficient accelerators. Field Programmable Gate Arrays (FPGAs) offer a promising platform for such acceleration due to their flexibility and power efficiency. However, deploying DL models on resource-constrained FPGAs remains challenging due to tight resource budgets, workload variability, and the need for energy-efficient operation. This paper presents a framework for generating energy-efficient DL accelerators on resource-constrained FPGAs. The framework systematically explores design configurations to improve energy efficiency while meeting resource-utilization and inference-performance requirements across diverse application scenarios. The contributions of this work are: (1) an analysis of the challenges in achieving energy efficiency on resource-constrained FPGAs; (2) a methodology for designing DL accelerators that integrates Register Transfer Level (RTL) optimizations, workload-aware strategies, and application-specific knowledge; and (3) a literature review that identifies gaps and demonstrates the necessity of this work.
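
The design-space exploration described in the abstract can be illustrated with a minimal sketch: enumerate candidate accelerator configurations, filter by resource and latency constraints, and pick the lowest-energy survivor. The configuration parameters, toy cost model, and constraint values below are hypothetical placeholders for illustration, not the paper's actual framework.

```python
from itertools import product

# Hypothetical accelerator configuration space (illustrative only):
# parallelism factor, weight bit-width, and clock frequency (MHz).
PARALLELISM = [1, 2, 4, 8]
BITWIDTH = [4, 8, 16]
FREQ_MHZ = [50, 100, 200]

def estimate(parallel, bits, freq):
    """Toy cost model: returns (luts_used, latency_ms, energy_mj).
    A real framework would derive these from RTL synthesis reports
    and on-board power measurements rather than closed-form proxies."""
    luts = parallel * bits * 120            # area grows with parallelism and precision
    latency = 1000.0 / (parallel * freq)    # more parallelism / higher clock -> faster
    power = 0.2 + 0.05 * parallel * bits * freq / 100.0  # static + dynamic power (W)
    energy = power * latency                # energy per inference (mJ)
    return luts, latency, energy

def explore(lut_budget, latency_target_ms):
    """Return (energy, config) for the lowest-energy configuration that
    meets both the resource budget and the latency target, or None."""
    best = None
    for p, b, f in product(PARALLELISM, BITWIDTH, FREQ_MHZ):
        luts, lat, energy = estimate(p, b, f)
        if luts <= lut_budget and lat <= latency_target_ms:
            if best is None or energy < best[0]:
                best = (energy, (p, b, f))
    return best

print(explore(lut_budget=8000, latency_target_ms=5.0))
```

Under this toy model, narrow bit-widths cut dynamic energy while higher parallelism and clock amortize static power over a shorter inference, so the search settles on the widest parallelism and lowest precision that still fit the LUT budget.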

Chao Qian

Subject classification: electrification; automation technology for electric power applications; automation equipment

Chao Qian. Leveraging Application-Specific Knowledge for Energy-Efficient Deep Learning Accelerators on Resource-Constrained FPGAs [EB/OL]. (2025-04-12) [2025-04-29]. https://arxiv.org/abs/2504.09151.
