REACT2023: the first Multi-modal Multiple Appropriate Facial Reaction Generation Challenge
The Multi-modal Multiple Appropriate Facial Reaction Generation Challenge (REACT2023) is the first competition event focused on evaluating multimedia processing and machine learning techniques for generating human-appropriate facial reactions in various dyadic interaction scenarios, with all participants competing strictly under the same conditions. The goal of the challenge is to provide the first benchmark test set for multi-modal information processing, and to foster collaboration among the audio, visual, and audio-visual affective computing communities in order to compare the relative merits of approaches to automatic appropriate facial reaction generation under different spontaneous dyadic interaction conditions. This paper presents: (i) the novelties, contributions, and guidelines of the REACT2023 challenge; (ii) the dataset utilized in the challenge; and (iii) the performance of baseline systems on the two proposed sub-challenges: Offline Multiple Appropriate Facial Reaction Generation and Online Multiple Appropriate Facial Reaction Generation. The challenge baseline code is publicly available at https://github.com/reactmultimodalchallenge/baseline_react2023.
Cheng Luo, Michel Valstar, Fabien Ringeval, Sergio Escalera, Siyang Song, Tobias Baur, Elisabeth Andre, Micol Spitale, German Barquero, Cristina Palmero, Hatice Gunes
Computing Technology, Computer Technology
Cheng Luo, Michel Valstar, Fabien Ringeval, Sergio Escalera, Siyang Song, Tobias Baur, Elisabeth Andre, Micol Spitale, German Barquero, Cristina Palmero, Hatice Gunes. REACT2023: the first Multi-modal Multiple Appropriate Facial Reaction Generation Challenge [EB/OL]. (2023-06-11) [2025-08-02]. https://arxiv.org/abs/2306.06583.