Trust, or Don't Predict: Introducing the CWSA Family for Confidence-Aware Model Evaluation
In recent machine learning systems, confidence scores are increasingly used to manage selective prediction, whereby a model can abstain from making a prediction when it is unconfident. Yet conventional metrics such as accuracy, expected calibration error (ECE), and area under the risk-coverage curve (AURC) do not capture the actual reliability of predictions. These metrics either disregard confidence entirely, dilute valuable localized information through averaging, or fail to suitably penalize overconfident misclassifications, which can be particularly detrimental in real-world systems. We introduce two new metrics, Confidence-Weighted Selective Accuracy (CWSA) and its normalized variant, CWSA+, that offer a principled and interpretable way to evaluate predictive models under confidence thresholds. Unlike existing methods, our metrics explicitly reward confident accuracy and penalize overconfident mistakes. They are threshold-local, decomposable, and usable in both evaluation and deployment settings where trust and risk must be quantified. Through extensive experiments on both real-world datasets (MNIST, CIFAR-10) and artificial model variants (calibrated, overconfident, underconfident, random, perfect), we show that CWSA and CWSA+ effectively detect nuanced failure modes and outperform classical metrics in trust-sensitive tests. Our results confirm that CWSA is a sound basis for developing and assessing selective prediction systems for safety-critical domains.
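The abstract does not state the CWSA formula, but it describes its intent: below a confidence threshold the model abstains, and above it, confident correct predictions are rewarded while overconfident mistakes are penalized. A minimal illustrative sketch of one metric with these properties follows; the thresholding and the signed confidence weighting here are assumptions for illustration, not the authors' exact definition.

```python
import numpy as np

def cwsa_sketch(confidences, correct, threshold):
    """Illustrative confidence-weighted selective accuracy (NOT the
    paper's exact definition).

    Predictions with confidence below `threshold` count as abstentions
    and are excluded. Each accepted prediction contributes
    +confidence if correct and -confidence if wrong, so confident
    accuracy is rewarded and overconfident mistakes are penalized.
    """
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=bool)
    accepted = confidences >= threshold
    if not accepted.any():
        return 0.0  # model abstains on everything at this threshold
    # +1 for correct accepted predictions, -1 for incorrect ones
    signs = np.where(correct[accepted], 1.0, -1.0)
    return float(np.mean(signs * confidences[accepted]))
```

On this sketch, a model that is 95% confident and wrong lowers the score more than a 55%-confident mistake does, which is exactly the failure mode the abstract says accuracy and ECE under-penalize.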
Kourosh Shahnazari, Seyed Moein Ayyoubzadeh, Mohammadali Keshtparvar, Pegah Ghaffari
Computing Technology, Computer Technology
Kourosh Shahnazari, Seyed Moein Ayyoubzadeh, Mohammadali Keshtparvar, Pegah Ghaffari. Trust, or Don't Predict: Introducing the CWSA Family for Confidence-Aware Model Evaluation [EB/OL]. (2025-05-24) [2025-06-23]. https://arxiv.org/abs/2505.18622.