Speech Emotion and Naturalness Recognitions with Multitask and Single-Task Learnings

Bagus Tris Atmaja*, Akira Sasou, Masato Akagi

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

11 Citations (Scopus)


This paper evaluates speech emotion and naturalness recognition using deep learning models with multitask learning and single-task learning approaches. The emotion model accommodates the valence, arousal, and dominance attributes known as dimensional emotion. The naturalness ratings are labeled on a five-point scale, like the dimensional emotion attributes. Multitask learning predicts both dimensional emotion (as the main task) and naturalness scores (as an auxiliary task) simultaneously, whereas single-task learning predicts either dimensional emotion (valence, arousal, and dominance) or the naturalness score independently. The multitask learning results improve on previous single-task learning studies for both dimensional emotion recognition and naturalness prediction. Within this study, however, single-task learning still outperforms multitask learning for naturalness recognition. Scatter plots of predicted emotion and naturalness scores against the true labels in multitask learning reveal a limitation of the model: it fails to predict low and extremely high scores. The low naturalness prediction score in this study is possibly due to the small number of unnatural speech samples, since the MSP-IMPROV dataset promotes naturalness of speech. The finding that jointly predicting naturalness with emotion improves emotion recognition performance may be embodied in emotion recognition models in future work.
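The multitask setup described above can be sketched as a weighted combination of an emotion objective over the three dimensional attributes and an auxiliary naturalness objective. The abstract does not specify the loss functions or the task weighting, so the sketch below assumes a concordance-correlation-based emotion loss (a common choice in dimensional speech emotion recognition) with mean-squared error for naturalness, and a hypothetical weight `alpha` balancing the main and auxiliary tasks:

```python
import numpy as np

def ccc(y_true, y_pred):
    """Concordance correlation coefficient (Lin, 1989):
    2*cov / (var_true + var_pred + (mean_true - mean_pred)^2)."""
    mu_t, mu_p = y_true.mean(), y_pred.mean()
    var_t, var_p = y_true.var(), y_pred.var()
    cov = ((y_true - mu_t) * (y_pred - mu_p)).mean()
    return 2.0 * cov / (var_t + var_p + (mu_t - mu_p) ** 2)

def multitask_loss(vad_true, vad_pred, nat_true, nat_pred, alpha=0.7):
    """Weighted multitask objective: dimensional emotion (main task,
    1 - CCC averaged over valence/arousal/dominance columns) plus
    naturalness regression (auxiliary task, MSE). `alpha` is an
    assumed weighting, not a value from the paper."""
    emo = np.mean([1.0 - ccc(vad_true[:, i], vad_pred[:, i])
                   for i in range(vad_true.shape[1])])
    nat = np.mean((nat_true - nat_pred) ** 2)
    return alpha * emo + (1.0 - alpha) * nat
```

With perfect predictions both terms vanish and the loss is zero; setting `alpha` closer to 1 prioritizes the emotion task, which matches the paper's framing of naturalness as auxiliary.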

Original language: English
Pages (from-to): 72381-72387
Number of pages: 7
Journal: IEEE Access
Publication status: Published - 2022


Keywords
  • Speech emotion recognition
  • affective computing
  • multitask learning
  • speech naturalness recognition
  • speech processing

