A Minimum Description Length Approach to Regularization in Neural Networks
State-of-the-art neural networks can be trained to become remarkable solutions to many problems. But while these architectures can express symbolic, perfect solutions, trained models often arrive at approximations instead. We show that the choice of regularization method plays a crucial role: when trained on formal languages with standard regularization ($L_1$, $L_2$, or none), expressive architectures not only fail to converge to correct solutions but are actively pushed away from perfect initializations. In contrast, applying the Minimum Description Length (MDL) principle to balance model complexity with data fit provides a theoretically grounded regularization method. Using MDL, perfect solutions are selected over approximations, independently of the optimization algorithm. We propose that unlike existing regularization techniques, MDL introduces the appropriate inductive bias to effectively counteract overfitting and promote generalization.
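To make the complexity/fit trade-off concrete, here is a minimal Python sketch of a two-part MDL objective. It is not the authors' implementation: the fixed-width weight encoding and the `bits_per_weight` parameter are illustrative assumptions standing in for a proper prefix code.

```python
import math

def data_cost_bits(probs, targets):
    # Code length of the data given the model: -log2 P(correct symbol),
    # summed over the sequence (the "data fit" term).
    return sum(-math.log2(p[t]) for p, t in zip(probs, targets))

def model_cost_bits(weights, bits_per_weight=16):
    # Code length of the model itself under a naive encoding: a fixed
    # number of bits per nonzero weight (the "model complexity" term).
    # bits_per_weight is an illustrative assumption, not the paper's scheme.
    return bits_per_weight * sum(1 for w in weights if w != 0.0)

def mdl_objective(probs, targets, weights):
    # Two-part MDL score: |model| + |data given model|, both in bits.
    # Minimizing it rewards exact, compact solutions, whereas L1/L2 only
    # penalize weight magnitudes regardless of how the data is encoded.
    return model_cost_bits(weights) + data_cost_bits(probs, targets)

# Toy usage: ten binary symbols, a model assigning 0.9 to each correct one.
probs = [[0.9, 0.1]] * 10
targets = [0] * 10
weights = [0.5, 0.0, -1.2]
print(mdl_objective(probs, targets, weights))  # ~33.5 bits: 32 model + ~1.5 data
```

Under such a score, a compact model that assigns probability 1 to every correct symbol pays zero data cost, which is how exact solutions can be selected over approximations independently of the optimization algorithm.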
Matan Abudy, Orr Well, Emmanuel Chemla, Roni Katzir, Nur Lan
Subject areas: Computing Technology, Computer Technology
Matan Abudy, Orr Well, Emmanuel Chemla, Roni Katzir, Nur Lan. A Minimum Description Length Approach to Regularization in Neural Networks [EB/OL]. (2025-05-19) [2025-06-13]. https://arxiv.org/abs/2505.13398