Survey on bimodal speech emotion recognition from acoustic and linguistic information fusion

Bagus Tris Atmaja*, Akira Sasou, Masato Akagi

*Corresponding author for this work

Research output: Contribution to journal › Review article › peer-review

52 Citations (Scopus)

Abstract

Speech emotion recognition (SER) is traditionally performed using only acoustic information. Acoustic features, commonly extracted per frame, are mapped to emotion labels using classifiers such as support vector machines in classical machine learning or multi-layer perceptrons in deep learning. Previous research has shown that acoustic-only SER suffers from several issues, most notably low performance. However, speech carries not only acoustic information but also linguistic information. Linguistic features can be extracted from text transcribed by an automatic speech recognition system. Fusing acoustic and linguistic information could improve SER performance. This paper presents a survey of work on bimodal emotion recognition fusing acoustic and linguistic information. Five components of bimodal SER are reviewed: emotion models, datasets, features, classifiers, and fusion methods. Major findings, including state-of-the-art results and their methods on commonly used datasets, are also presented to give insight into current research and to help surpass these results. Finally, this survey identifies the remaining issues in bimodal SER research as future research directions.
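As a minimal illustration of the bimodal fusion described above (not taken from the survey itself), the sketch below shows early, feature-level fusion: per-utterance acoustic and linguistic feature vectors are concatenated and fed to a multi-layer perceptron classifier. The feature dimensions, the random placeholder data, and the classifier settings are illustrative assumptions.

```python
# Sketch of bimodal (acoustic + linguistic) SER via early (feature-level) fusion.
# Dimensions, data, and classifier settings are assumptions for illustration only.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

n_utterances = 200
acoustic_dim = 88      # e.g., a functional acoustic feature vector (assumed size)
linguistic_dim = 300   # e.g., an averaged word-embedding vector (assumed size)
emotions = ["angry", "happy", "neutral", "sad"]

# Placeholder features; in practice these come from an acoustic front-end and
# from embeddings of ASR transcripts, respectively.
X_acoustic = rng.normal(size=(n_utterances, acoustic_dim))
X_linguistic = rng.normal(size=(n_utterances, linguistic_dim))
y = rng.choice(emotions, size=n_utterances)

# Early fusion: concatenate the two modality-level feature vectors per utterance.
X_fused = np.concatenate([X_acoustic, X_linguistic], axis=1)

X_train, X_test, y_train, y_test = train_test_split(
    X_fused, y, test_size=0.2, random_state=0, stratify=y
)

clf = MLPClassifier(hidden_layer_sizes=(128,), max_iter=300, random_state=0)
clf.fit(X_train, y_train)
print("Accuracy on placeholder data:", accuracy_score(y_test, clf.predict(X_test)))
```

Late (decision-level) fusion, by contrast, would train a separate classifier per modality and combine their posterior probabilities; both strategies fall under the fusion methods the survey reviews.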

Original language: English
Pages (from-to): 11-28
Number of pages: 18
Journal: Speech Communication
Volume: 140
DOIs
Publication status: Published - May 2022
Externally published: Yes

Keywords

  • Affective computing
  • Audiotextual information
  • Bimodal fusion
  • Information fusion
  • Speech emotion recognition
