

Research on Multimodal Emotion Recognition Based on Facial Expression and Speech

Abstract

This paper studies multimodal emotion recognition based on facial expression and speech. First, facial expression samples of specific subjects are collected to build a facial expression database, and speech samples of the same subjects are collected to build a speech database. Then, principal component analysis (PCA) is used to extract features from the expression and speech samples separately, and the extracted features are fused; in addition, considering the emotional characteristics of speech, time-domain speech features are derived and fused with the expression features. Finally, a support vector machine (SVM) classifies the fused features to obtain the emotion category of a given test sample. The fused features built from facial expression features and speech time-domain features achieve better emotion recognition than fused features obtained by PCA alone, and multimodal emotion recognition based on facial expression and speech outperforms single-modal recognition based on facial expression or speech only.
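The pipeline described in the abstract (per-modality PCA feature extraction, feature-level fusion, SVM classification) can be sketched as below. This is a minimal illustration only, not the authors' implementation: the data are synthetic stand-ins, the feature dimensionalities and PCA component counts are assumptions, and scikit-learn's PCA and SVC are used for brevity. In practice the PCA bases would be fit on the training split only.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Synthetic stand-ins for the databases described in the abstract:
# 200 samples, 4 emotion classes; facial-expression vectors (e.g. flattened
# grayscale face crops) and speech feature vectors (assumed dimensions).
n_samples, n_classes = 200, 4
X_face = rng.normal(size=(n_samples, 1024))     # facial expression samples
X_speech = rng.normal(size=(n_samples, 256))    # speech samples
y = rng.integers(0, n_classes, size=n_samples)  # emotion labels

# Step 1: PCA feature extraction, applied to each modality separately.
F_face = PCA(n_components=30).fit_transform(X_face)
F_speech = PCA(n_components=20).fit_transform(X_speech)

# Step 2: feature-level fusion by concatenating per-modality features.
# Speech time-domain statistics (e.g. short-time energy, zero-crossing rate)
# would be appended the same way; random values stand in for them here.
F_time = rng.normal(size=(n_samples, 8))        # placeholder time-domain features
F_fused = np.hstack([F_face, F_speech, F_time])

# Step 3: SVM classification of the fused features.
X_train, X_test, y_train, y_test = train_test_split(
    F_fused, y, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf").fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```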

周亚同、张寅、周丽君

电子技术应用

pattern recognition and intelligent systems; feature fusion; Support Vector Machine (SVM); emotion recognition

周亚同, 张寅, 周丽君. Research on Multimodal Emotion Recognition Based on Facial Expression and Speech [EB/OL]. (2013-12-05) [2025-08-04]. http://www.paper.edu.cn/releasepaper/content/201312-87.
