Journal article

KOCKMANN Marcel, BURGET Lukáš and ČERNOCKÝ Jan. Application of speaker- and language identification state-of-the-art techniques for emotion recognition. Speech Communication. Amsterdam: Elsevier Science, 2011, vol. 53, no. 9, pp. 1172-1185. ISSN 0167-6393. DOI: 10.1016/j.specom.2011.01.007.
Publication language: English
Original title: Application of speaker- and language identification state-of-the-art techniques for emotion recognition
Title (cs): Použití aktuálních technik pro identifikaci řečníka a jazyka v rozpoznávání emocí
Book: Speech Communication
Journal: Speech Communication, Vol. 53, No. 9, Amsterdam, NL
Publisher: Elsevier Science
Keywords: Emotion recognition; Gaussian mixture models; Maximum-mutual-information; Intersession variability compensation; Score-level fusion
The authors show that feature extraction and statistical modeling methods commonly used in speaker and language recognition can be applied successfully to emotion recognition as well.
This article describes our efforts to transfer feature extraction and statistical modeling techniques from the fields of speaker and language identification to the related field of emotion recognition. We give detailed insight into our acoustic and prosodic feature extraction and show how to apply Gaussian mixture modeling techniques on top of it. We focus on different flavors of Gaussian Mixture Models (GMMs), including more sophisticated approaches such as discriminative training with the Maximum-Mutual-Information (MMI) criterion and InterSession Variability (ISV) compensation. Both techniques show superior performance in language and speaker identification. Furthermore, we combine multiple system outputs by score-level fusion to exploit the complementary information in diverse systems. Our proposal is evaluated in several experiments on the FAU Aibo Emotion Corpus of non-acted spontaneous emotional speech. Within the Interspeech 2009 Emotion Challenge, we achieved the best result for the 5-class task of the Open Performance Sub-Challenge, with an unweighted average recall of 41.7%. Additional experiments on the acted Berlin Database of Emotional Speech demonstrate the capability of intersession variability compensation for emotion recognition.
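The core classification scheme summarized in the abstract — score each utterance against one GMM per emotion class and combine multiple systems by score-level fusion — can be sketched as follows. This is a minimal illustration with hand-set diagonal-covariance GMM parameters; the paper's actual system trains GMMs on acoustic and prosodic features (with EM, MMI, and ISV compensation), all of which is omitted here, and the function and variable names are our own.

```python
import numpy as np

def gmm_loglik(frames, weights, means, variances):
    """Average per-frame log-likelihood of frames (T, D) under a
    diagonal-covariance GMM with K components:
    weights (K,), means (K, D), variances (K, D)."""
    diff = frames[:, None, :] - means[None, :, :]                # (T, K, D)
    log_comp = -0.5 * (np.sum(diff**2 / variances, axis=2)
                       + np.sum(np.log(2 * np.pi * variances), axis=1))
    log_joint = np.log(weights) + log_comp                       # (T, K)
    # numerically stable log-sum-exp over components, then mean over frames
    m = log_joint.max(axis=1, keepdims=True)
    per_frame = m.squeeze(1) + np.log(np.exp(log_joint - m).sum(axis=1))
    return float(per_frame.mean())

def classify(frames, class_gmms):
    """Return the emotion label whose GMM scores highest, plus all scores."""
    scores = {label: gmm_loglik(frames, *params)
              for label, params in class_gmms.items()}
    return max(scores, key=scores.get), scores

def fuse(score_dicts, fusion_weights):
    """Linear score-level fusion of per-class scores from several systems."""
    return {label: sum(w * s[label]
                       for w, s in zip(fusion_weights, score_dicts))
            for label in score_dicts[0]}

# Toy 1-D demo: two single-component "GMMs" for two illustrative classes.
rng = np.random.default_rng(0)
gmms = {
    "angry":   (np.array([1.0]), np.array([[2.0]]),  np.array([[1.0]])),
    "neutral": (np.array([1.0]), np.array([[-2.0]]), np.array([[1.0]])),
}
frames = rng.normal(2.0, 1.0, size=(50, 1))   # utterance drawn near "angry"
label, scores = classify(frames, gmms)
fused = fuse([scores, scores], [0.5, 0.5])    # trivial two-system fusion
```

In the paper the fusion weights are trained (e.g., by logistic regression on a calibration set) rather than fixed as here; the sketch only shows the shape of the computation.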
@article{kockmann2011application,
   author = {Marcel Kockmann and Luk{\'{a}}{\v{s}} Burget and
	Jan {\v{C}}ernock{\'{y}}},
   title = {Application of speaker- and language
	identification state-of-the-art techniques for
	emotion recognition},
   journal = {Speech Communication},
   volume = {53},
   number = {9},
   pages = {1172--1185},
   year = {2011},
   publisher = {Elsevier Science},
   ISSN = {0167-6393},
   doi = {10.1016/j.specom.2011.01.007},
   language = {english}
}
