
Advancing Face-to-Face Emotion Communication: A Multimodal Dataset (AFFEC)


Source: arXiv
Abstract

Emotion recognition has the potential to play a pivotal role in enhancing human-computer interaction by enabling systems to accurately interpret and respond to human affect. Yet, capturing emotions in face-to-face contexts remains challenging due to subtle nonverbal cues, variations in personal traits, and the real-time dynamics of genuine interactions. Existing emotion recognition datasets often rely on limited modalities or controlled conditions, thereby missing the richness and variability found in real-world scenarios. In this work, we introduce Advancing Face-to-Face Emotion Communication (AFFEC), a multimodal dataset designed to address these gaps. AFFEC encompasses 84 simulated emotional dialogues across six distinct emotions, recorded from 73 participants over more than 5,000 trials and annotated with more than 20,000 labels. It integrates electroencephalography (EEG), eye-tracking, galvanic skin response (GSR), facial videos, and Big Five personality assessments. Crucially, AFFEC explicitly distinguishes between felt emotions (the participant's internal affect) and perceived emotions (the observer's interpretation of the stimulus). Baseline analyses spanning unimodal features and straightforward multimodal fusion demonstrate that even minimal processing yields classification performance significantly above chance, especially for arousal. Incorporating personality traits further improves predictions of felt emotions, highlighting the importance of individual differences. By bridging controlled experimentation with more realistic face-to-face stimuli, AFFEC offers a unique resource for researchers aiming to develop context-sensitive, adaptive, and personalized emotion recognition models.
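The abstract's baseline of "straightforward multimodal fusion" plus personality traits can be illustrated with a minimal early-fusion sketch: per-trial feature vectors from each modality are concatenated, optionally together with Big Five scores, and fed to a simple classifier. This is a hypothetical illustration only; the feature dimensions, synthetic data, and logistic-regression choice are assumptions, not details taken from the AFFEC paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_trials = 500  # synthetic stand-in; AFFEC has over 5,000 trials

# Hypothetical per-trial feature vectors for each modality
# (dimensions are illustrative, not those used in the paper).
eeg  = rng.normal(size=(n_trials, 32))  # e.g. EEG band-power features
gaze = rng.normal(size=(n_trials, 8))   # e.g. fixation/saccade statistics
gsr  = rng.normal(size=(n_trials, 4))   # e.g. phasic skin-response features
face = rng.normal(size=(n_trials, 16))  # e.g. facial action-unit activations
big5 = rng.normal(size=(n_trials, 5))   # Big Five scores per participant

# Synthetic binary arousal labels (high vs. low)
y = rng.integers(0, 2, size=n_trials)

# Early fusion: concatenate all modalities into one feature vector per trial
X = np.hstack([eeg, gaze, gsr, face, big5])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
```

Dropping the `big5` block from the `np.hstack` call gives the modality-only baseline, so the contribution of personality features can be measured by comparing the two accuracies.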

Meisam J. Sekiavandi, Laurits Dixen, Jostein Fimland, Sree Keerthi Desu, Paolo Burelli, Antonia-Bianca Zserai, Ye Sul Lee, Maria Barrett

Subject: Computing Technology, Computer Technology

Meisam J. Sekiavandi, Laurits Dixen, Jostein Fimland, Sree Keerthi Desu, Paolo Burelli, Antonia-Bianca Zserai, Ye Sul Lee, Maria Barrett. Advancing Face-to-Face Emotion Communication: A Multimodal Dataset (AFFEC) [EB/OL]. (2025-04-26) [2025-05-16]. https://arxiv.org/abs/2504.18969.
