Research on EEG emotion recognition based on multi-domain information fusion

Affiliation:

1. School of Electronic and Optical Engineering & School of Flexible Electronics (Future Technology), Nanjing University of Posts and Telecommunications, Nanjing 210023, China; 2. Nation-Local Joint Project Engineering Lab of RF Integration & Micropackage, Nanjing University of Posts and Telecommunications, Nanjing 210023, China

CLC number: TP391

    Abstract:

    EEG recognition methods rarely integrate spatial, temporal, and frequency information. To fully exploit the rich information contained in EEG signals, this paper proposes a multi-domain information fusion method for EEG emotion recognition. The method uses a parallel convolutional neural network (PCNN) that combines a two-dimensional convolutional neural network (2D-CNN) and a one-dimensional convolutional neural network (1D-CNN) to learn the spatial, temporal, and frequency features of EEG signals and classify human emotional states. The 2D-CNN mines spatial and frequency information across neighboring EEG channels, while the 1D-CNN mines temporal and frequency information. Finally, the features extracted by the two parallel convolutional modules are fused for emotion recognition. In three-class emotion experiments on the SEED dataset, the PCNN fusing spatial, temporal, and frequency features achieves an overall classification accuracy of 98.04%, an improvement of 1.97% and 0.60% over the 2D-CNN (spatial-frequency information only) and the 1D-CNN (temporal-frequency information only), respectively. Compared with recent similar work, the proposed method shows an advantage for EEG emotion classification.
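The two-branch fusion described above can be sketched in plain NumPy. This is a minimal illustration, not the paper's implementation: the 8×9 electrode grid, the 200-sample time window, the single kernel per branch, and the linear softmax head are all assumptions chosen to keep the example small.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d_valid(x, k):
    """Valid 2D convolution of a single-channel map x with kernel k."""
    H, W = x.shape
    kh, kw = k.shape
    return np.array([[np.sum(x[i:i + kh, j:j + kw] * k)
                      for j in range(W - kw + 1)]
                     for i in range(H - kh + 1)])

def conv1d_valid(x, k):
    """Valid 1D convolution of signal x with kernel k."""
    m = len(k)
    return np.array([np.dot(x[i:i + m], k) for i in range(len(x) - m + 1)])

def relu(x):
    return np.maximum(x, 0.0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Illustrative inputs (shapes are assumptions, not taken from the paper):
grid = rng.standard_normal((8, 9))   # spatial map of per-channel frequency features
signal = rng.standard_normal(200)    # one EEG time window

# Parallel branches: 2D-CNN for spatial-frequency, 1D-CNN for temporal-frequency.
k2 = rng.standard_normal((3, 3))
k1 = rng.standard_normal(5)
feat2d = relu(conv2d_valid(grid, k2)).ravel()   # 6*7 = 42 features
feat1d = relu(conv1d_valid(signal, k1))         # 196 features

# Fusion: concatenate branch features, then a linear softmax head
# over the three SEED emotion classes.
fused = np.concatenate([feat2d, feat1d])
W_out = rng.standard_normal((3, fused.size)) * 0.01
probs = softmax(W_out @ fused)
```

In a trained model the kernels and the output weights would be learned jointly, so the fusion layer can weigh the spatial-frequency and temporal-frequency evidence against each other rather than treating either branch in isolation.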

Cite this article

Wang Zetian, Zhang Xuejun. Research on EEG emotion recognition based on multi-domain information fusion[J]. Electronic Measurement Technology, 2024, 47(2): 168-175.

History
  • Online publication date: 2024-04-30