
Speech emotion recognition based on dual channel feature fusion network

Abstract: To address the difficulty of extracting discriminative emotional features in speech emotion recognition, a speech representation method based on dual-channel feature fusion is proposed, combining a convolutional neural network with a vision transformer. One channel uses convolutional modules built on an inverted-bottleneck structure, together with a transformer-style training strategy, to extract local spectral features. The other channel extracts global sequence features with an improved vision transformer, in which a convolutional neural network processes the whole spectrogram directly instead of splitting it into patches, capturing temporal information more effectively. The features from the two channels are fused to obtain strongly discriminative emotional features, which are finally fed into a Softmax classifier to produce the recognition result. In experiments on the EMO-DB and CASIA databases, the proposed model achieves average accuracies of 94.24% and 93.05%, respectively, outperforming the comparison models and demonstrating the effectiveness of the method.
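The final stage described above, fusing the local and global channel features and classifying them with Softmax, can be sketched in a few lines. The following is a minimal pure-Python illustration under stated assumptions: the fusion is taken to be simple feature concatenation followed by a single linear layer, and the function names, dimensions, and weights are hypothetical, not the paper's actual implementation.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def fuse_and_classify(local_feat, global_feat, weights, biases):
    """Concatenate the two channel features (assumed fusion scheme),
    apply one linear layer, and return class probabilities."""
    fused = local_feat + global_feat  # feature-level concatenation
    logits = [
        sum(w * x for w, x in zip(row, fused)) + b
        for row, b in zip(weights, biases)
    ]
    return softmax(logits)

# Toy example: 2-dim local + 1-dim global features, 2 emotion classes
probs = fuse_and_classify(
    local_feat=[1.0, 2.0],
    global_feat=[0.5],
    weights=[[0.1, 0.2, 0.3], [0.4, 0.5, 0.6]],  # hypothetical values
    biases=[0.0, 0.1],
)
```

In the paper's setting the two feature vectors would come from the inverted-bottleneck CNN channel and the modified vision-transformer channel respectively; any trainable fusion or classifier weights would be learned end to end rather than fixed as here.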

     

