A Survey on Memory-Efficient Transformer-Based Model Training in AI for Science

Source: arXiv
Abstract

Scientific research faces high costs and inefficiencies with traditional methods, but the rise of deep learning and large language models (LLMs) offers innovative solutions. This survey reviews transformer-based LLM applications across scientific fields such as biology, medicine, chemistry, and meteorology, underscoring their role in advancing research. However, the continuous expansion of model size has led to significant memory demands, hindering further development and application of LLMs for science. This survey systematically reviews and categorizes memory-efficient pre-training techniques for large-scale transformers, including algorithm-level, system-level, and hardware-software co-optimization. Using AlphaFold 2 as an example, we demonstrate how tailored memory optimization methods can reduce storage needs while preserving prediction accuracy. By bridging model efficiency and scientific application needs, we hope to provide insights for scalable and cost-effective LLM training in AI for science.
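To make the abstract's mention of algorithm-level memory optimization more concrete, below is a minimal, hedged sketch of activation (gradient) checkpointing in PyTorch, one widely used technique for reducing peak activation memory during transformer pre-training. The module structure, dimensions, and names are illustrative assumptions and are not taken from the paper.

# Minimal sketch (illustrative, not from the paper): activation checkpointing,
# a common algorithm-level memory-saving technique for transformer training.
import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint

class CheckpointedBlock(nn.Module):
    """A transformer-style feed-forward block whose intermediate activations
    are recomputed in the backward pass instead of being stored."""
    def __init__(self, dim: int = 512):
        super().__init__()
        self.ff = nn.Sequential(
            nn.Linear(dim, 4 * dim),
            nn.GELU(),
            nn.Linear(4 * dim, dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # checkpoint() discards intermediate activations during the forward
        # pass and recomputes them in backward, trading extra compute for
        # lower peak memory.
        return checkpoint(self.ff, x, use_reentrant=False)

if __name__ == "__main__":
    model = CheckpointedBlock()
    x = torch.randn(8, 128, 512, requires_grad=True)
    model(x).sum().backward()  # behaves like a normal forward/backward pass

Stacking such blocks trades roughly one extra forward computation per layer for not retaining per-layer activations, which is the kind of compute-for-memory trade-off the survey categorizes under algorithm-level methods.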

Kaiyuan Tian, Linbo Qiao, Baihui Liu, Gongqingjian Jiang, Shanshan Li, Dongsheng Li

Subject: Computing Technology, Computer Technology

Kaiyuan Tian, Linbo Qiao, Baihui Liu, Gongqingjian Jiang, Shanshan Li, Dongsheng Li. A Survey on Memory-Efficient Transformer-Based Model Training in AI for Science [EB/OL]. (2025-07-29) [2025-08-23]. https://arxiv.org/abs/2501.11847.