LIFT: LLM-Based Pragma Insertion for HLS via GNN Supervised Fine-Tuning

Source: arXiv

Abstract

FPGAs are increasingly adopted in datacenter environments for their reconfigurability and energy efficiency. High-Level Synthesis (HLS) tools have eased FPGA programming by raising the abstraction level from RTL to untimed C/C++, yet attaining high performance still demands expert knowledge and iterative manual insertion of optimization pragmas to shape the microarchitecture. To address this challenge, we propose LIFT, a large language model (LLM)-based coding assistant for HLS that automatically generates performance-critical pragmas for a given C/C++ design. We fine-tune the LLM by tightly integrating and supervising the training process with a graph neural network (GNN), combining the sequential modeling capabilities of LLMs with the structural and semantic understanding of GNNs needed to reason over code and its control/data dependencies. On average, LIFT produces designs that improve performance by 3.52x and 2.16x over the prior state-of-the-art tools AutoDSE and HARP, respectively, and by 66x over GPT-4o.
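For readers unfamiliar with HLS directives, the sketch below illustrates the kind of pragma insertion the abstract describes. The kernel, the pragma selection, and the factors are hypothetical and not taken from the paper; the directives themselves (PIPELINE, UNROLL, ARRAY_PARTITION) are standard Vitis HLS pragmas.

#define N 128

// Hypothetical kernel: a naive C++ loop annotated with the kind of
// Vitis HLS pragmas that LIFT inserts automatically. The specific
// directives and factors here are illustrative choices, not the
// paper's output.
void vec_scale_add(const float a[N], const float b[N], float out[N]) {
    // Partition the arrays across memory banks so the unrolled loop
    // below can read several elements per clock cycle.
    #pragma HLS ARRAY_PARTITION variable=a   cyclic factor=4 dim=1
    #pragma HLS ARRAY_PARTITION vari=b       cyclic factor=4 dim=1
    #pragma HLS ARRAY_PARTITION variable=out cyclic factor=4 dim=1

    for (int i = 0; i < N; ++i) {
        // Initiate a new loop iteration every clock cycle...
        #pragma HLS PIPELINE II=1
        // ...and process four elements per iteration in parallel.
        #pragma HLS UNROLL factor=4
        out[i] = 2.0f * a[i] + b[i];
    }
}

Choosing which loops to pipeline or unroll, and with what factors, is the iterative manual tuning the abstract describes; automating that design-space exploration is the shared goal of AutoDSE, HARP, and LIFT.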

Zijian Ding, Jason Cong, Neha Prakriya, Yizhou Sun

Subject: Microelectronics; Integrated Circuits

Zijian Ding, Jason Cong, Neha Prakriya, Yizhou Sun. LIFT: LLM-Based Pragma Insertion for HLS via GNN Supervised Fine-Tuning [EB/OL]. (2025-04-29) [2025-05-25]. https://arxiv.org/abs/2504.21187.
