
Understanding Generalization, Robustness, and Interpretability in Low-Capacity Neural Networks

Source: arXiv
Abstract

Although modern deep learning often relies on massive over-parameterized models, the fundamental interplay between capacity, sparsity, and robustness in low-capacity networks remains a vital area of study. We introduce a controlled framework to investigate these properties by creating a suite of binary classification tasks from the MNIST dataset with increasing visual difficulty (e.g., 0 and 1 vs. 4 and 9). Our experiments reveal three core findings. First, the minimum model capacity required for successful generalization scales directly with task complexity. Second, these trained networks are robust to extreme magnitude pruning (up to 95% sparsity), revealing the existence of sparse, high-performing subnetworks. Third, we show that over-parameterization provides a significant advantage in robustness against input corruption. Interpretability analysis via saliency maps further confirms that these identified sparse subnetworks preserve the core reasoning process of the original dense models. This work provides a clear, empirical demonstration of the foundational trade-offs governing simple neural networks.
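The abstract itself contains no code. As a rough illustration of the kind of setup it describes (a binary MNIST task, a deliberately low-capacity network, and extreme global magnitude pruning), the following PyTorch sketch may help; the class pair, hidden width, training budget, and all other hyperparameters here are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: binary-MNIST task + 95% global magnitude pruning.
# The 0-vs-1 class pair comes from the abstract; the 16-unit hidden layer
# and 2-epoch training run are illustrative guesses.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune
from torch.utils.data import DataLoader, Subset
from torchvision import datasets, transforms

def binary_mnist(classes=(0, 1), train=True):
    """Restrict MNIST to two digits and relabel them as 0/1."""
    ds = datasets.MNIST("data", train=train, download=True,
                        transform=transforms.ToTensor())
    idx = ((ds.targets == classes[0]) | (ds.targets == classes[1])).nonzero(as_tuple=True)[0]
    ds.targets = (ds.targets == classes[1]).long()   # relabel to {0, 1}
    return Subset(ds, idx)

# A low-capacity classifier: one small hidden layer.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 16), nn.ReLU(), nn.Linear(16, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

loader = DataLoader(binary_mnist(), batch_size=128, shuffle=True)
for epoch in range(2):                       # short training run, for illustration only
    for x, y in loader:
        opt.zero_grad()
        loss = loss_fn(model(x).squeeze(1), y.float())
        loss.backward()
        opt.step()

# Global magnitude pruning: zero out the 95% smallest-magnitude weights
# across both linear layers, then make the sparsity permanent.
to_prune = [(m, "weight") for m in model if isinstance(m, nn.Linear)]
prune.global_unstructured(to_prune, pruning_method=prune.L1Unstructured, amount=0.95)
for m, name in to_prune:
    prune.remove(m, name)

weights = [p for p in model.parameters() if p.dim() > 1]
sparsity = sum((p == 0).sum().item() for p in weights) / sum(p.numel() for p in weights)
print(f"weight sparsity after pruning: {sparsity:.2%}")
```

Evaluating the pruned model on the held-out split of the same binary task would then indicate whether a sparse, high-performing subnetwork of the kind the abstract reports has survived.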

Yash Kumar

Computing Technology; Computer Technology

Yash Kumar. Understanding Generalization, Robustness, and Interpretability in Low-Capacity Neural Networks [EB/OL]. (2025-07-22) [2025-08-18]. https://arxiv.org/abs/2507.16278.
