Effects of Data Augmentations on Speech Emotion Recognition

Bagus Tris Atmaja*, Akira Sasou

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

7 Citations (Scopus)

Abstract

Data augmentation techniques have recently gained wider adoption in speech processing, including speech emotion recognition. Although more data tends to yield better models, there is a trade-off: beyond a certain point, additional augmented data does not produce a better model. This paper reports experiments investigating the effects of data augmentation on speech emotion recognition. The investigation aims to find the most useful type of data augmentation, and the most useful number of augmentations, for speech emotion recognition under various conditions. The experiments are conducted on the Japanese Twitter-based emotional speech and IEMOCAP datasets. The results show that for speaker-independent data, two augmentations, glottal source extraction and silence removal, performed best, outperforming configurations with more augmentation techniques. For text-independent data (including speaker- and text-independent data), more augmentations tend to improve speech emotion recognition performance. The results highlight the trade-off between the number of augmentations and recognition performance, showing the need to choose a proper augmentation technique for each specific condition.
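The abstract names silence removal as one of the two most effective augmentations. The paper itself is not reproduced here, so the following is only a minimal energy-based sketch of what a silence-removal step could look like; the function name, frame length, and threshold are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def remove_silence(signal, frame_len=400, threshold=0.01):
    """Drop frames whose RMS energy falls below a threshold.

    A simple energy-based stand-in for a silence-removal augmentation;
    frame_len and threshold are assumed, illustrative values.
    """
    frames = [signal[i:i + frame_len]
              for i in range(0, len(signal), frame_len)]
    kept = [f for f in frames
            if np.sqrt(np.mean(f ** 2)) >= threshold]
    if not kept:
        return np.array([], dtype=signal.dtype)
    return np.concatenate(kept)

# Toy example: 1 s of "speech" (a 220 Hz tone) padded with silence.
sr = 16000
t = np.linspace(0, 1, sr, endpoint=False)
speech = 0.5 * np.sin(2 * np.pi * 220 * t)
padded = np.concatenate([np.zeros(sr // 2), speech, np.zeros(sr // 2)])
trimmed = remove_silence(padded)  # leading/trailing silence frames dropped
```

In practice a library routine such as `librosa.effects.trim` does the same job with a decibel threshold; the point of the sketch is only that the augmented copy keeps the voiced content while discarding non-speech frames.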

Original language: English
Article number: 5941
Journal: Sensors
Volume: 22
Issue number: 16
DOIs
Publication status: Published - Aug 2022

Keywords

  • SVM
  • affective computing
  • data augmentations
  • speech emotion recognition
  • wav2vec 2.0

