Regularized Neural Ensemblers

Source: arXiv
Abstract

Ensemble methods are known for enhancing the accuracy and robustness of machine learning models by combining multiple base learners. However, standard approaches like greedy or random ensembling often fall short, as they assume a constant weight across samples for the ensemble members. This can limit expressiveness and hinder performance when aggregating the ensemble predictions. In this study, we explore employing regularized neural networks as ensemble methods, emphasizing the significance of dynamic ensembling to leverage diverse model predictions adaptively. Motivated by the risk of learning low-diversity ensembles, we propose regularizing the ensembling model by randomly dropping base model predictions during training. We demonstrate that this approach provides lower bounds for the diversity within the ensemble, reducing overfitting and improving generalization capabilities. Our experiments show that the regularized neural ensemblers yield competitive results compared to strong baselines across several modalities, such as computer vision, natural language processing, and tabular data.
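The abstract describes an ensembler network that maps the base models' per-sample predictions to per-sample combination weights, regularized by randomly masking out base model predictions during training so the ensembler cannot collapse onto a small, low-diversity subset of models. Below is a minimal PyTorch sketch of that idea; the class name RegularizedNeuralEnsembler, the two-layer architecture, and hyperparameters such as p_drop and hidden_dim are illustrative assumptions, not the authors' reference implementation.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class RegularizedNeuralEnsembler(nn.Module):
        """Per-sample ensembler over M base models with base-model dropout.

        Sketch only: architecture and defaults are assumptions, not the
        paper's reference implementation.
        """

        def __init__(self, num_models: int, num_classes: int,
                     hidden_dim: int = 64, p_drop: float = 0.5):
            super().__init__()
            self.p_drop = p_drop
            # The ensembler sees the flattened base predictions for each
            # sample and produces one score per base model.
            self.net = nn.Sequential(
                nn.Linear(num_models * num_classes, hidden_dim),
                nn.ReLU(),
                nn.Linear(hidden_dim, num_models),
            )

        def forward(self, base_preds: torch.Tensor) -> torch.Tensor:
            # base_preds: (batch, num_models, num_classes) class probabilities.
            batch, num_models, _ = base_preds.shape

            if self.training:
                # Drop each base model's prediction independently per sample.
                keep = torch.rand(batch, num_models,
                                  device=base_preds.device) > self.p_drop
                # Guarantee at least one surviving model per sample.
                fallback = F.one_hot(
                    torch.randint(num_models, (batch,),
                                  device=base_preds.device),
                    num_models,
                ).bool()
                keep = keep | (~keep.any(dim=1, keepdim=True) & fallback)
            else:
                keep = torch.ones(batch, num_models, dtype=torch.bool,
                                  device=base_preds.device)

            # Zero out dropped predictions before feeding the ensembler.
            masked = base_preds * keep.unsqueeze(-1)
            logits = self.net(masked.flatten(1))
            # Dropped models receive exactly zero weight after the softmax.
            weights = torch.softmax(
                logits.masked_fill(~keep, float("-inf")), dim=1)

            # Per-sample weighted average of the base model predictions.
            return (weights.unsqueeze(-1) * base_preds).sum(dim=1)

    # Usage: 5 base models, 10 classes, batch of 32 samples.
    ensembler = RegularizedNeuralEnsembler(num_models=5, num_classes=10)
    preds = torch.softmax(torch.randn(32, 5, 10), dim=-1)
    out = ensembler(preds)  # (32, 10) ensembled class probabilities

In this sketch, masking the logits before the softmax gives dropped models exactly zero weight, so every training step optimizes a random sub-ensemble; at inference time all base models are kept and the learned per-sample weights combine them.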

Sebastian Pineda Arango, Lennart Purucker, Arber Zela, Frank Hutter, Maciej Janowski, Josif Grabocka

Subjects: Computing Technology, Computer Technology

Sebastian Pineda Arango, Lennart Purucker, Arber Zela, Frank Hutter, Maciej Janowski, Josif Grabocka. Regularized Neural Ensemblers [EB/OL]. (2025-06-23) [2025-07-16]. https://arxiv.org/abs/2410.04520.
