
The Relative Instability of Model Comparison with Cross-validation

Source: arXiv

Abstract

Existing work has shown that cross-validation (CV) can be used to provide an asymptotic confidence interval for the test error of a stable machine learning algorithm, and existing stability results for many popular algorithms can be applied to derive positive instances where such confidence intervals will be valid. However, in the common setting where CV is used to compare two algorithms, it becomes necessary to consider a notion of relative stability which cannot easily be derived from existing stability results, even for simple algorithms. To better understand relative stability and when CV provides valid confidence intervals for the test error difference of two algorithms, we study the soft-thresholded least squares algorithm, a close cousin of the Lasso. We prove that while stability holds when assessing the individual test error of this algorithm, relative stability fails to hold when comparing the test error of two such algorithms, even in a sparse low-dimensional linear model setting. Additionally, we empirically confirm the invalidity of CV confidence intervals for the test error difference when either soft-thresholding or the Lasso is used. In short, caution is needed when quantifying the uncertainty of CV estimates of the performance difference of two machine learning algorithms, even when both algorithms are individually stable.
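To make the setup concrete, here is a minimal sketch (not the authors' code) of the soft-thresholded least squares algorithm and of the standard normal-approximation CV confidence interval for the test error difference of two such fits; the choice of thresholds (0.1 vs. 0.5), the sparse linear model, and the 5-fold split are illustrative assumptions. The paper's point is precisely that the interval constructed this way can fail to be valid for the difference, even though each algorithm is individually stable.

```python
import numpy as np

rng = np.random.default_rng(0)

def soft_threshold(beta, lam):
    """Soft-thresholding operator, applied coordinate-wise."""
    return np.sign(beta) * np.maximum(np.abs(beta) - lam, 0.0)

def soft_thresholded_ls(X, y, lam):
    """Ordinary least squares followed by soft-thresholding of the coefficients."""
    beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
    return soft_threshold(beta_ols, lam)

# Sparse low-dimensional linear model (illustrative parameters).
n, p = 200, 5
X = rng.standard_normal((n, p))
beta_true = np.array([2.0, 0.0, 0.0, 0.0, 0.0])
y = X @ beta_true + rng.standard_normal(n)

# K-fold CV estimate of the squared-error difference between two
# soft-thresholded LS fits, with the naive normal-approximation CI
# built from the per-observation error differences.
K = 5
folds = np.array_split(rng.permutation(n), K)
diffs = []
for test_idx in folds:
    train_idx = np.setdiff1d(np.arange(n), test_idx)
    b1 = soft_thresholded_ls(X[train_idx], y[train_idx], lam=0.1)
    b2 = soft_thresholded_ls(X[train_idx], y[train_idx], lam=0.5)
    err1 = (y[test_idx] - X[test_idx] @ b1) ** 2
    err2 = (y[test_idx] - X[test_idx] @ b2) ** 2
    diffs.append(err1 - err2)
diffs = np.concatenate(diffs)

estimate = diffs.mean()
half_width = 1.96 * diffs.std(ddof=1) / np.sqrt(n)
print(f"estimated error difference: {estimate:.3f} +/- {half_width:.3f}")
```

The interval `estimate +/- half_width` treats the per-observation error differences as if they were i.i.d.; the abstract's negative result says this approximation can break down for the difference of two soft-thresholded (or Lasso) fits.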

Alexandre Bayle, Lucas Janson, Lester Mackey

Subject: Computing Technology; Computer Technology

Alexandre Bayle, Lucas Janson, Lester Mackey. The Relative Instability of Model Comparison with Cross-validation [EB/OL]. (2025-08-06) [2025-08-24]. https://arxiv.org/abs/2508.04409.
