
Mixed Signals: Understanding Model Disagreement in Multimodal Empathy Detection

Source: arXiv
Abstract

Multimodal models play a key role in empathy detection, but their performance can suffer when modalities provide conflicting cues. To understand these failures, we examine cases where unimodal and multimodal predictions diverge. Using fine-tuned models for text, audio, and video, along with a gated fusion model, we find that such disagreements often reflect underlying ambiguity, as evidenced by annotator uncertainty. Our analysis shows that dominant signals in one modality can mislead fusion when unsupported by others. We also observe that humans, like models, do not consistently benefit from multimodal input. These insights position disagreement as a useful diagnostic signal for identifying challenging examples and improving empathy system robustness.
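
The abstract describes fine-tuned unimodal models for text, audio, and video combined through a gated fusion model, with divergence between unimodal and fused predictions used as a diagnostic signal. Below is a minimal sketch of what such a setup could look like; the module structure, embedding dimensions, gating scheme, and binary label space are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of gated fusion over unimodal embeddings, plus a
# disagreement check between unimodal and fused predictions. Dimensions
# and architecture are assumptions for illustration only.
import torch
import torch.nn as nn


class GatedFusion(nn.Module):
    """Fuses text, audio, and video embeddings with learned per-modality gates."""

    def __init__(self, dims=(768, 128, 256), hidden=256, num_classes=2):
        super().__init__()
        # Project each modality into a shared space (dims are assumed sizes).
        self.proj = nn.ModuleList([nn.Linear(d, hidden) for d in dims])
        # Gate network scores each modality from the concatenated projections.
        self.gate = nn.Linear(hidden * len(dims), len(dims))
        # Binary empathy label assumed.
        self.classifier = nn.Linear(hidden, num_classes)

    def forward(self, text, audio, video):
        h = [torch.tanh(p(x)) for p, x in zip(self.proj, (text, audio, video))]
        stacked = torch.stack(h, dim=1)                       # (B, 3, hidden)
        weights = torch.softmax(self.gate(torch.cat(h, dim=-1)), dim=-1)
        fused = (weights.unsqueeze(-1) * stacked).sum(dim=1)  # gated weighted sum
        return self.classifier(fused), weights


def disagreement(unimodal_logits, fused_logits):
    """Flags examples where any unimodal prediction differs from the fused one."""
    fused_pred = fused_logits.argmax(dim=-1)
    uni_preds = [l.argmax(dim=-1) for l in unimodal_logits]
    return torch.stack([p != fused_pred for p in uni_preds], dim=1).any(dim=1)
```

Soft gating of this kind lets a single modality dominate the fused representation, which is consistent with the abstract's observation that a dominant signal in one modality can mislead fusion when unsupported by the others; the `disagreement` mask would then surface such examples for inspection.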

Maya Srikanth, Run Chen, Julia Hirschberg

Computing Technology, Computer Technology

Maya Srikanth, Run Chen, Julia Hirschberg. Mixed Signals: Understanding Model Disagreement in Multimodal Empathy Detection [EB/OL]. (2025-05-20) [2025-06-21]. https://arxiv.org/abs/2505.13979.
