TY - JOUR
T1 - Survey on bimodal speech emotion recognition from acoustic and linguistic information fusion
AU - Atmaja, Bagus Tris
AU - Sasou, Akira
AU - Akagi, Masato
N1 - Publisher Copyright:
© 2022 The Author(s)
PY - 2022/5
Y1 - 2022/5
N2 - Speech emotion recognition (SER) is traditionally performed using only acoustic information. Acoustic features, commonly extracted per frame, are mapped onto emotion labels using classifiers such as support vector machines in machine learning or multi-layer perceptrons in deep learning. Previous research has shown that acoustic-only SER suffers from many issues, most notably low performance. However, not only acoustic but also linguistic information can be extracted from speech. Linguistic features can be derived from text transcribed by an automatic speech recognition system, and fusing acoustic and linguistic information could improve SER performance. This paper presents a survey of works on bimodal emotion recognition that fuse acoustic and linguistic information. Five components of bimodal SER are reviewed: emotion models, datasets, features, classifiers, and fusion methods. Major findings, including state-of-the-art results and the methods behind them on commonly used datasets, are also presented to give insight into current research and to help surpass these results. Finally, this survey identifies remaining issues in bimodal SER research as future research directions.
AB - Speech emotion recognition (SER) is traditionally performed using only acoustic information. Acoustic features, commonly extracted per frame, are mapped onto emotion labels using classifiers such as support vector machines in machine learning or multi-layer perceptrons in deep learning. Previous research has shown that acoustic-only SER suffers from many issues, most notably low performance. However, not only acoustic but also linguistic information can be extracted from speech. Linguistic features can be derived from text transcribed by an automatic speech recognition system, and fusing acoustic and linguistic information could improve SER performance. This paper presents a survey of works on bimodal emotion recognition that fuse acoustic and linguistic information. Five components of bimodal SER are reviewed: emotion models, datasets, features, classifiers, and fusion methods. Major findings, including state-of-the-art results and the methods behind them on commonly used datasets, are also presented to give insight into current research and to help surpass these results. Finally, this survey identifies remaining issues in bimodal SER research as future research directions.
KW - Affective computing
KW - Audiotextual information
KW - Bimodal fusion
KW - Information fusion
KW - Speech emotion recognition
UR - http://www.scopus.com/inward/record.url?scp=85128871572&partnerID=8YFLogxK
U2 - 10.1016/j.specom.2022.03.002
DO - 10.1016/j.specom.2022.03.002
M3 - Review article
AN - SCOPUS:85128871572
SN - 0167-6393
VL - 140
SP - 11
EP - 28
JO - Speech Communication
JF - Speech Communication
ER -