National Preprint Platform

Towards Understanding The Calibration Benefits of Sharpness-Aware Minimization


Source: arXiv
English Abstract

Deep neural networks are increasingly used in safety-critical applications such as medical diagnosis and autonomous driving. However, many studies suggest that they are prone to poor calibration and overconfidence, which may have disastrous consequences. In this paper, we show that, unlike standard training such as stochastic gradient descent, the recently proposed sharpness-aware minimization (SAM) counteracts this tendency towards overconfidence. Our theoretical analysis suggests that SAM learns models that are already well-calibrated by implicitly maximizing the entropy of the predictive distribution. Inspired by this finding, we further propose a variant of SAM, coined CSAM, to further improve model calibration. Extensive experiments on various datasets, including ImageNet-1K, demonstrate the benefits of SAM in reducing calibration error. Meanwhile, CSAM performs even better than SAM and consistently achieves lower calibration error than other approaches.
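For context, SAM's core update perturbs the weights toward the locally worst-case (sharpest) direction before taking the descent step. The following is a minimal sketch of that two-step update on a toy quadratic loss; the loss surface, learning rate, and `rho` are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Toy anisotropic quadratic loss (illustrative only, not from the paper):
# one steep direction (curvature 10) and one flat direction (curvature 1).
CURV = np.array([10.0, 1.0])

def loss(w):
    return 0.5 * np.dot(w, w * CURV)

def grad(w):
    return w * CURV

def sam_step(w, lr=0.05, rho=0.05):
    """One SAM update: ascend to the (approximately) worst-case point
    within an L2 ball of radius rho, then descend using the gradient
    evaluated at that perturbed point."""
    g = grad(w)
    eps = rho * g / (np.linalg.norm(g) + 1e-12)  # first-order worst-case perturbation
    g_sharp = grad(w + eps)                      # gradient at the perturbed weights
    return w - lr * g_sharp

w = np.array([1.0, 1.0])
for _ in range(100):
    w = sam_step(w)
```

Because the descent gradient is taken at the perturbed point, minima surrounded by steep walls are penalized, which biases training toward flat regions; the paper argues this flatness bias also raises predictive entropy and hence improves calibration.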

Chengli Tan, Yubo Zhou, Haishan Ye, Guang Dai, Junmin Liu, Zengjie Song, Jiangshe Zhang, Zixiang Zhao, Yunda Hao, Yong Xu

Subject: Computing and Computer Technology

Chengli Tan, Yubo Zhou, Haishan Ye, Guang Dai, Junmin Liu, Zengjie Song, Jiangshe Zhang, Zixiang Zhao, Yunda Hao, Yong Xu. Towards Understanding The Calibration Benefits of Sharpness-Aware Minimization [EB/OL]. (2025-05-29) [2025-07-03]. https://arxiv.org/abs/2505.23866.
