
Low-Precision Training of Large Language Models: Methods, Challenges, and Opportunities

Source: arXiv

Abstract

Large language models (LLMs) have achieved impressive performance across various domains. However, the substantial hardware resources required for their training present a significant barrier to efficiency and scalability. To mitigate this challenge, low-precision training techniques have been widely adopted, leading to notable advancements in training efficiency. Despite these gains, low-precision training involves several components, such as weights, activations, and gradients, each of which can be represented in different numerical formats. The resulting diversity has created a fragmented landscape in low-precision training research, making it difficult for researchers to gain a unified overview of the field. This survey provides a comprehensive review of existing low-precision training methods. To systematically organize these approaches, we categorize them into three primary groups based on their underlying numerical formats, a key factor influencing hardware compatibility, computational efficiency, and ease of reference for readers. The categories are: (1) fixed-point and integer-based methods, (2) floating-point-based methods, and (3) customized format-based methods. Additionally, we discuss quantization-aware training approaches, which share key similarities with low-precision training during forward propagation. Finally, we highlight several promising research directions to advance this field. A collection of papers discussed in this survey is provided at https://github.com/Hao840/Awesome-Low-Precision-Training.
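To make the connection between quantization-aware training and low-precision forward propagation concrete, the sketch below shows a minimal symmetric per-tensor INT8 fake-quantization routine with a straight-through estimator in PyTorch. This is an illustrative assumption rather than a method described in the survey; the function name fake_quantize_int8 and the specific quantization scheme are chosen only for the example.

import torch

def fake_quantize_int8(x: torch.Tensor) -> torch.Tensor:
    # Symmetric per-tensor INT8 fake quantization (quantize-dequantize).
    scale = x.detach().abs().max().clamp(min=1e-8) / 127.0
    x_q = torch.clamp(torch.round(x / scale), -127, 127) * scale
    # Straight-through estimator: the forward pass uses the quantized value,
    # while the backward pass treats quantization as the identity.
    return x + (x_q - x).detach()

w = torch.randn(4, 4, requires_grad=True)    # full-precision master weight
x = torch.randn(2, 4)                        # activation batch
y = x @ fake_quantize_int8(w).t()            # quantized forward matmul
y.sum().backward()                           # gradient reaches the FP32 master weight

In this pattern the forward computation sees low-precision values while gradients still update full-precision master weights, which is the structural similarity between quantization-aware training and low-precision training that the abstract refers to.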

Zhiwei Hao, Jianyuan Guo, Li Shen, Yong Luo, Han Hu, Guoxia Wang, Dianhai Yu, Yonggang Wen, Dacheng Tao

Subject areas: computing technology, computer technology

Zhiwei Hao, Jianyuan Guo, Li Shen, Yong Luo, Han Hu, Guoxia Wang, Dianhai Yu, Yonggang Wen, Dacheng Tao. Low-Precision Training of Large Language Models: Methods, Challenges, and Opportunities [EB/OL]. (2025-05-02) [2025-07-16]. https://arxiv.org/abs/2505.01043.
