Unpacking Softmax: How Temperature Drives Representation Collapse, Compression, and Generalization
The softmax function is a fundamental building block of deep neural networks, commonly used to define output distributions in classification tasks or attention weights in transformer architectures. Despite its widespread use and proven effectiveness, its influence on learning dynamics and learned representations remains poorly understood, limiting our ability to optimize model behavior. In this paper, we study the pivotal role of the softmax function in shaping the model's representation. We introduce the concept of rank deficit bias: a phenomenon in which softmax-based deep networks find solutions of rank much lower than the number of classes. This bias depends on the norm of the softmax logits, which is implicitly influenced by hyperparameters or directly controlled by the softmax temperature. Furthermore, we demonstrate how to exploit the softmax dynamics to learn compressed representations or to enhance model performance on out-of-distribution data. We validate our findings across diverse architectures and real-world datasets, highlighting the broad applicability of temperature tuning for improving model performance. Our work provides new insights into the mechanisms of softmax, enabling better control over representation learning in deep neural networks.
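The abstract's two central quantities, a temperature-scaled softmax and the rank of a learned representation, can be made concrete. The following is a minimal sketch in PyTorch (an assumption; the paper does not specify an implementation). The function names, the random stand-in features, and the effective-rank proxy based on the cumulative singular-value spectrum with a 0.99 energy threshold are all illustrative choices, not the authors' exact protocol.

    import torch

    def softmax_with_temperature(logits: torch.Tensor, tau: float = 1.0) -> torch.Tensor:
        # Dividing the logits by tau rescales their norm: tau < 1 sharpens the
        # output distribution, tau > 1 flattens it toward uniform.
        return torch.softmax(logits / tau, dim=-1)

    def effective_rank(features: torch.Tensor, threshold: float = 0.99) -> int:
        # Number of singular values needed to capture `threshold` of the
        # spectral energy of a (samples x dims) feature matrix -- one common
        # proxy for the rank of a learned representation (illustrative choice).
        s = torch.linalg.svdvals(features - features.mean(dim=0))
        cumulative = torch.cumsum(s, dim=0) / s.sum()
        return int((cumulative < threshold).sum().item() + 1)

    # Hypothetical usage with random stand-ins for learned features and a head.
    feats = torch.randn(512, 128)          # penultimate-layer features (stand-in)
    logits = feats @ torch.randn(128, 10)  # 10-class classifier head (stand-in)
    for tau in (0.1, 1.0, 10.0):
        probs = softmax_with_temperature(logits, tau)
        print(f"tau={tau}: max probability {probs.max().item():.3f}")
    print("effective rank of features:", effective_rank(feats))

In a trained network, one would apply the effective-rank proxy to actual penultimate-layer features; the paper's rank deficit bias corresponds to this rank falling well below the number of classes.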
Wojciech Masarczyk, Mateusz Ostaszewski, Tin Sum Cheng, Tomasz Trzciński, Aurelien Lucchi, Razvan Pascanu
Computing Technology, Computer Technology
Wojciech Masarczyk, Mateusz Ostaszewski, Tin Sum Cheng, Tomasz Trzciński, Aurelien Lucchi, Razvan Pascanu. Unpacking Softmax: How Temperature Drives Representation Collapse, Compression, and Generalization [EB/OL]. (2025-06-02) [2025-06-23]. https://arxiv.org/abs/2506.01562.