MM-Gesture: Towards Precise Micro-Gesture Recognition through Multimodal Fusion
In this paper, we present MM-Gesture, the solution developed by our team HFUT-VUT, which ranked 1st in the micro-gesture classification track of the 3rd MiGA Challenge at IJCAI 2025, outperforming previous state-of-the-art methods. MM-Gesture is a multimodal fusion framework designed specifically for recognizing subtle, short-duration micro-gestures (MGs), integrating complementary cues from six modalities: joint, limb, RGB video, Taylor-series video, optical-flow video, and depth video. Built on PoseConv3D and Video Swin Transformer architectures with a novel modality-weighted ensemble strategy, our method further enhances RGB performance via transfer learning from the larger MA-52 dataset. Extensive experiments on the iMiGUE benchmark, including ablation studies across modalities, validate the effectiveness of our approach, which achieves a top-1 accuracy of 73.213%.
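The modality-weighted ensemble described above can be sketched as a weighted sum of each modality's class-probability vector, with the fused prediction taken as the argmax. This is a minimal illustration; the modality names and weights below are hypothetical placeholders, not the values used in the paper (which would be tuned on validation data).

```python
import numpy as np

def weighted_ensemble(probs_by_modality, weights):
    """Fuse per-modality class probabilities with a weighted sum.

    probs_by_modality: dict mapping modality name -> (num_classes,) probability array
    weights: dict mapping modality name -> scalar fusion weight
    Returns the index of the class with the highest fused score.
    """
    fused = sum(weights[m] * np.asarray(p) for m, p in probs_by_modality.items())
    return int(np.argmax(fused))

# Hypothetical example: three modalities, three classes
probs = {
    "rgb":   [0.2, 0.5, 0.3],
    "joint": [0.6, 0.3, 0.1],
    "flow":  [0.1, 0.2, 0.7],
}
weights = {"rgb": 0.5, "joint": 0.3, "flow": 0.2}
print(weighted_ensemble(probs, weights))  # fused scores [0.30, 0.38, 0.32] -> class 1
```

In practice the per-modality probabilities would come from the softmax outputs of the PoseConv3D and Video Swin Transformer backbones.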
Jihao Gu, Fei Wang, Kun Li, Yanyan Wei, Zhiliang Wu, Dan Guo
Computing Technology, Computer Technology
Jihao Gu, Fei Wang, Kun Li, Yanyan Wei, Zhiliang Wu, Dan Guo. MM-Gesture: Towards Precise Micro-Gesture Recognition through Multimodal Fusion [EB/OL]. (2025-07-11) [2025-07-25]. https://arxiv.org/abs/2507.08344.