
The 4th Dimension for Scaling Model Size


Source: arXiv

Abstract

Scaling the size of large language models typically involves three dimensions: depth, width, and the number of parameters. In this work, we explore a fourth dimension, virtual logical depth (VLD), which increases the effective algorithmic depth without changing the overall parameter count by reusing parameters within the model. Although parameter reuse is not a new concept, its potential and characteristics in model scaling have not been thoroughly studied. Through carefully designed controlled experiments, we make the following key discoveries regarding VLD scaling: VLD scaling forces the knowledge capacity of the model to remain almost constant, with only minor variations. VLD scaling enables a significant improvement in reasoning capability, provided the scaling method is properly implemented. The number of parameters correlates with knowledge capacity, but not with reasoning capability. Under certain conditions, it is not necessary to increase the parameter count to enhance reasoning. These findings are consistent across various model configurations and are likely to be generally valid within the scope of our experiments.
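The core mechanism described above, reusing the same parameters to increase effective algorithmic depth, can be illustrated with a minimal sketch. This is not the paper's implementation; the layer, weights, and `virtual_depth` argument are illustrative assumptions only, showing how repeated application of one layer scales depth while the stored parameter count stays fixed.

```python
def apply_layer(x, w, b):
    # One illustrative "layer": affine transform followed by ReLU.
    # Pure Python for clarity; a real model would use tensors.
    return [max(0.0, b[i] + sum(w[i][j] * x[j] for j in range(len(x))))
            for i in range(len(w))]

def forward(x, w, b, virtual_depth):
    # Reuse the SAME (w, b) virtual_depth times: the effective
    # (virtual logical) depth grows, the parameter count does not.
    for _ in range(virtual_depth):
        x = apply_layer(x, w, b)
    return x

# Toy weights (hypothetical values, not from the paper).
w = [[0.5, -0.2], [0.1, 0.3]]
b = [0.0, 0.1]

# Parameter count is independent of virtual_depth.
n_params = sum(len(row) for row in w) + len(b)  # 6, regardless of depth

y_shallow = forward([1.0, 2.0], w, b, virtual_depth=1)
y_deep = forward([1.0, 2.0], w, b, virtual_depth=8)
```

Whether depth 1 or depth 8 is used, `n_params` is unchanged; only the number of sequential layer applications differs, which is the sense in which VLD adds a scaling dimension orthogonal to parameter count.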

Ruike Zhu, Hanwen Zhang, Tianyu Shi, Chi Wang, Tianyi Zhou, Zengyi Qin

Subject: Computing Technology; Computer Technology

Ruike Zhu, Hanwen Zhang, Tianyu Shi, Chi Wang, Tianyi Zhou, Zengyi Qin. The 4th Dimension for Scaling Model Size [EB/OL]. (2025-06-23) [2025-07-17]. https://arxiv.org/abs/2506.18233.
