NUTMEG: Separating Signal From Noise in Annotator Disagreement
NLP models often rely on human-labeled data for training and evaluation. Many approaches crowdsource this data from a large number of annotators with varying skills, backgrounds, and motivations, resulting in conflicting annotations. These conflicts have traditionally been resolved by aggregation methods that assume disagreements are errors. Recent work has argued that for many tasks annotators may have genuine disagreements and that variation should be treated as signal rather than noise. However, few models separate signal and noise in annotator disagreement. In this work, we introduce NUTMEG, a new Bayesian model that incorporates information about annotator backgrounds to remove noisy annotations from human-labeled training data while preserving systematic disagreements. Using synthetic data, we show that NUTMEG is more effective at recovering ground truth from annotations with systematic disagreement than traditional aggregation methods. We provide further analysis characterizing how differences in subpopulation sizes, rates of disagreement, and rates of spam affect the performance of our model. Finally, we demonstrate that downstream models trained on NUTMEG-aggregated data significantly outperform models trained on data from traditional aggregation methods. Our results highlight the importance of accounting for both annotator competence and systematic disagreements when training on human-labeled data.
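To make the setting concrete, the sketch below simulates the kind of data the abstract describes (annotator subpopulations with genuine disagreement plus a fraction of spammers) and aggregates it with a simplified EM-style, reliability-weighted vote run separately per subpopulation. This is not the authors' NUTMEG model, which is Bayesian; the group sizes, spam rate, accuracy, and all variable names are assumptions chosen only to illustrate how subpopulation-aware aggregation can down-weight noise while keeping systematic disagreement.

```python
# Illustrative sketch only (NOT the NUTMEG implementation): per-subpopulation,
# reliability-weighted aggregation that keeps between-group disagreement while
# down-weighting spam annotators. All sizes and rates are assumed for the demo.
import numpy as np

rng = np.random.default_rng(0)
n_items, n_annot, n_labels = 200, 40, 3
groups = rng.integers(0, 2, size=n_annot)       # hypothetical annotator backgrounds
spam = rng.random(n_annot) < 0.2                # assume 20% of annotators are spammers

# Ground truth per subpopulation: group 1 systematically disagrees on 30% of items.
truth = np.tile(rng.integers(0, n_labels, size=n_items), (2, 1))
flip = rng.random(n_items) < 0.3
truth[1, flip] = (truth[1, flip] + 1) % n_labels

# Simulate annotations: competent annotators are 90% accurate, spammers label at random.
ann = np.full((n_annot, n_items), -1)
for a in range(n_annot):
    for i in rng.choice(n_items, size=60, replace=False):  # each annotator labels 60 items
        if spam[a]:
            ann[a, i] = rng.integers(n_labels)
        else:
            t = truth[groups[a], i]
            ann[a, i] = t if rng.random() < 0.9 else rng.integers(n_labels)

def aggregate(ann, groups, n_labels, n_iter=10):
    """EM-style loop: per-group weighted vote <-> per-annotator reliability."""
    n_annot, n_items = ann.shape
    rel = np.ones(n_annot)                       # start with uniform reliability
    labels = np.zeros((2, n_items), dtype=int)
    for _ in range(n_iter):
        # Step 1: reliability-weighted plurality label per item, within each group
        for g in (0, 1):
            members = np.where(groups == g)[0]
            for i in range(n_items):
                votes = np.zeros(n_labels)
                for a in members:
                    if ann[a, i] >= 0:
                        votes[ann[a, i]] += rel[a]
                labels[g, i] = int(votes.argmax())
        # Step 2: reliability = agreement with the annotator's own group labels
        for a in range(n_annot):
            done = ann[a] >= 0
            rel[a] = (ann[a, done] == labels[groups[a], done]).mean()
    return labels, rel

labels, rel = aggregate(ann, groups, n_labels)
for g in (0, 1):
    acc = (labels[g] == truth[g]).mean()
    print(f"group {g}: recovered ground truth on {acc:.1%} of items")
print(f"mean reliability  spammers={rel[spam].mean():.2f}  others={rel[~spam].mean():.2f}")
```

In this toy setup, spammers end up with low reliability and contribute little to either group's labels, while the two subpopulations keep their distinct labels on the items where they genuinely disagree; a traditional single-pool majority vote would instead collapse that disagreement toward the larger group.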
Jonathan Ivey, Susan Gauch, David Jurgens
Computing technology; computer technology
Jonathan Ivey, Susan Gauch, David Jurgens. NUTMEG: Separating Signal From Noise in Annotator Disagreement [EB/OL]. (2025-07-25) [2025-08-18]. https://arxiv.org/abs/2507.18890.