
Ultra Memory-Efficient On-FPGA Training of Transformers via Tensor-Compressed Optimization

Source: arXiv
English Abstract

Transformer models have achieved state-of-the-art performance across a wide range of machine learning tasks. There is growing interest in training transformers on resource-constrained edge devices due to considerations such as privacy, domain adaptation, and on-device scientific machine learning. However, the significant computational and memory demands of transformer training often exceed the capabilities of an edge device. Leveraging low-rank tensor compression, this paper presents the first on-FPGA accelerator for end-to-end transformer training. On the algorithm side, we present a bi-directional contraction flow for tensorized transformer training, significantly reducing the computational FLOPs and intra-layer memory costs compared to existing tensor operations. On the hardware side, we store all highly compressed model parameters and gradient information on chip, creating an on-chip-memory-only framework for each stage of training. This reduces off-chip communication and minimizes latency and energy costs. Additionally, we implement custom computing kernels for each training stage and employ intra-layer parallelism and pipelining to further enhance run-time and memory efficiency. Through experiments on transformer models ranging from $36.7$ to $93.5$ MB in FP32 format on the ATIS dataset, our tensorized FPGA accelerator conducts single-batch end-to-end training on the AMD Alveo U50 FPGA within a memory budget of less than $6$ MB of BRAM and $22.5$ MB of URAM. Compared to uncompressed training on an NVIDIA RTX 3090 GPU, our on-FPGA training achieves a memory reduction of $30\times$ to $51\times$. Our FPGA accelerator also achieves up to $3.6\times$ lower energy cost per epoch compared with tensorized transformer training on an NVIDIA RTX 3090 GPU.
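The sketch below is a minimal, hypothetical illustration of the low-rank tensor compression idea the abstract builds on: a linear layer whose weight matrix is stored only through small tensor-train (TT) cores. It is not the paper's accelerator or its bi-directional contraction flow (which avoids materializing the full weight); the mode sizes, ranks, and function names are illustrative assumptions, and the full matrix is reconstructed here purely for clarity.

# Minimal sketch (assumed shapes and names, not the authors' implementation)
# of a tensor-train (TT) compressed linear layer in NumPy.
import numpy as np

def tt_linear(x, cores, in_modes, out_modes):
    """Apply y = x @ W, where the (prod(in_modes) x prod(out_modes)) weight W
    is stored only through its TT cores. Here we reconstruct W explicitly for
    clarity; memory-efficient schemes contract the cores with x directly."""
    # cores[k] has shape (r_{k-1}, in_modes[k], out_modes[k], r_k), with r_0 = r_d = 1
    W = cores[0]
    for core in cores[1:]:
        # contract the trailing TT rank of W with the leading rank of the next core
        W = np.tensordot(W, core, axes=([-1], [0]))
    d = len(cores)
    # axes are now interleaved as (m0, n0, m1, n1, ...); group input and output modes
    W = W.reshape([m for pair in zip(in_modes, out_modes) for m in pair])
    perm = list(range(0, 2 * d, 2)) + list(range(1, 2 * d, 2))
    W = W.transpose(perm).reshape(int(np.prod(in_modes)), int(np.prod(out_modes)))
    return x @ W

# Example: a 512 x 512 weight stored as 3 TT cores with rank 8
in_modes, out_modes, rank = (8, 8, 8), (8, 8, 8), 8
ranks = [1, rank, rank, 1]
cores = [np.random.randn(ranks[k], in_modes[k], out_modes[k], ranks[k + 1]) * 0.01
         for k in range(3)]
x = np.random.randn(4, 512)                       # a batch of 4 token embeddings
y = tt_linear(x, cores, in_modes, out_modes)
tt_params = sum(c.size for c in cores)
print(y.shape, tt_params, 512 * 512)              # (4, 512), 5120 TT params vs 262144 dense

In this toy configuration the TT cores hold roughly 50x fewer parameters than the dense weight, which conveys why compressed parameters and gradients can plausibly fit in on-chip BRAM/URAM during training.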

Jiayi Tian, Jinming Lu, Hai Li, Xiangwei Wang, Cong Hao, Ian Young, Zheng Zhang

Subject areas: Microelectronics and Integrated Circuits; Computing and Computer Technology

Jiayi Tian, Jinming Lu, Hai Li, Xiangwei Wang, Cong Hao, Ian Young, Zheng Zhang. Ultra Memory-Efficient On-FPGA Training of Transformers via Tensor-Compressed Optimization [EB/OL]. (2025-08-06) [2025-08-16]. https://arxiv.org/abs/2501.06663
