
Graph-Driven Multimodal Feature Learning Framework for Apparent Personality Assessment

Source: arXiv
Abstract

Automatically predicting personality traits is a challenging problem in computer vision. This paper introduces a multimodal feature learning framework for personality analysis in short video clips. For visual processing, we construct a facial graph and design a Geo-based two-stream network incorporating an attention mechanism, leveraging both Graph Convolutional Networks (GCNs) and Convolutional Neural Networks (CNNs) to capture static facial expressions. Additionally, ResNet18 and VGGFace networks are employed to extract global scene and facial appearance features at the frame level. To capture dynamic temporal information, we integrate a BiGRU with a temporal attention module that extracts salient frame representations. To enhance the model's robustness, we incorporate the VGGish CNN for audio-based features and XLM-RoBERTa for text-based features. Finally, a multimodal channel attention mechanism integrates the different modalities, and a Multi-Layer Perceptron (MLP) regression model predicts the personality traits. Experimental results show that the proposed framework outperforms existing state-of-the-art approaches.
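To make the pipeline described in the abstract concrete, below is a minimal PyTorch sketch of three of its stages: BiGRU with temporal-attention pooling over frame features, channel-attention fusion across modalities, and an MLP regression head. All module names, feature dimensions, and the gating formulation are illustrative assumptions, not the authors' implementation; the per-modality encoders (GCN/CNN streams, ResNet18, VGGFace, VGGish, XLM-RoBERTa) are stood in by random feature vectors.

```python
import torch
import torch.nn as nn

class TemporalAttention(nn.Module):
    """Scores each frame and pools a weighted summary over time."""
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, x):
        # x: (batch, time, dim) -> (batch, dim)
        weights = torch.softmax(self.score(x), dim=1)
        return (weights * x).sum(dim=1)

class ChannelAttentionFusion(nn.Module):
    """Gates each modality's feature vector before concatenation
    (one plausible reading of 'multimodal channel attention')."""
    def __init__(self, num_modalities, dim):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Linear(num_modalities * dim, num_modalities),
            nn.Sigmoid(),
        )

    def forward(self, feats):
        stacked = torch.stack(feats, dim=1)            # (batch, M, dim)
        gates = self.gate(stacked.flatten(1))          # (batch, M)
        return (stacked * gates.unsqueeze(-1)).flatten(1)

class PersonalityRegressor(nn.Module):
    """BiGRU + temporal attention over frame features, channel-attention
    fusion across modalities, and an MLP head for the Big Five traits."""
    def __init__(self, dim=256, num_modalities=4, num_traits=5):
        super().__init__()
        self.bigru = nn.GRU(dim, dim // 2, batch_first=True, bidirectional=True)
        self.temporal = TemporalAttention(dim)
        self.fusion = ChannelAttentionFusion(num_modalities, dim)
        self.mlp = nn.Sequential(
            nn.Linear(num_modalities * dim, 128),
            nn.ReLU(),
            nn.Linear(128, num_traits),
            nn.Sigmoid(),  # apparent-personality scores are usually in [0, 1]
        )

    def forward(self, visual_seq, face_feat, audio_feat, text_feat):
        seq, _ = self.bigru(visual_seq)                # (batch, time, dim)
        visual = self.temporal(seq)                    # salient-frame summary
        fused = self.fusion([visual, face_feat, audio_feat, text_feat])
        return self.mlp(fused)

# Smoke test with random stand-ins for the per-modality encoders:
model = PersonalityRegressor()
visual_seq = torch.randn(8, 30, 256)  # 8 clips, 30 frames of visual features
face = torch.randn(8, 256)            # e.g. VGGFace embeddings, projected to 256-d
audio = torch.randn(8, 256)           # e.g. VGGish embeddings, projected to 256-d
text = torch.randn(8, 256)            # e.g. XLM-RoBERTa embeddings, projected
print(model(visual_seq, face, audio, text).shape)  # torch.Size([8, 5])
```

The gating design here is only one way to realize modality-level attention; the paper may instead attend over feature channels within the concatenated vector, but the overall flow (attend, fuse, regress) follows the abstract.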

Kangsheng Wang, Chengwei Ye, Huanzhen Zhang, Linuo Xu, Shuyan Liu

DOI: 10.62762/TETAI.2025.279350

Subject: Computing Technology, Computer Technology

Kangsheng Wang, Chengwei Ye, Huanzhen Zhang, Linuo Xu, Shuyan Liu. Graph-Driven Multimodal Feature Learning Framework for Apparent Personality Assessment [EB/OL]. (2025-04-15) [2025-05-12]. https://arxiv.org/abs/2504.11515.
