Journal article

BUITELAAR Paul, WOOD Ian, NEGI Sapna, ARCAN Mihael, MCCRAE John P., ABELE Andrejs, ROBIN Cécile, ANDRYUSHECHKIN Vladimir, ZIAD Housam, SAGHA Hesam, SCHMITT Maximilian, SCHULLER Björn W., SÁNCHEZ-RADA J. Fernando, IGLESIAS Carlos A., NAVARRO Carlos, GIEFER Andreas, HEISE Nicolaus, MASUCCI Vincenzo, DANZA Francesco A., CATERINO Ciro, SMRŽ Pavel, HRADIŠ Michal, POVOLNÝ Filip, KLIMEŠ Marek, MATĚJKA Pavel and TUMMARELLO Giovanni. MixedEmotions: An Open-Source Toolbox for Multimodal Emotion Analysis. IEEE Transactions on Multimedia. 2018, vol. 20, no. 9, pp. 2454-2465. ISSN 1520-9210. Available from: https://ieeexplore.ieee.org/document/8269329/?arnumber=8269329
Publication language: English
Original title: MixedEmotions: An Open-Source Toolbox for Multimodal Emotion Analysis
Title (cs): MixedEmotions: Nástroje pro multimodální emoční analýzu s otevřeným kódem
Pages: 2454-2465
Place: US
Year: 2018
URL: https://ieeexplore.ieee.org/document/8269329/?arnumber=8269329
Journal: IEEE Transactions on Multimedia, Vol. 20, No. 9, US
ISSN: 1520-9210
DOI: 10.1109/TMM.2018.2798287
Keywords
emotion analysis, open source toolbox, affective computing, linked data, audio processing, text processing, video processing
Annotation
Recently, there has been an increasing tendency to embed functionality for recognizing emotions from user-generated media content in automated systems such as call-centre operations, recommendations, and assistive technologies, providing richer and more informative user and content profiles. To date, however, adding these functionalities has been a tedious, costly, and time-consuming effort, requiring the identification and integration of diverse tools with diverse interfaces as required by the use case at hand. The MixedEmotions Toolbox addresses the need for such functionalities by providing tools for text, audio, video, and linked data processing within an easily integrable plug-and-play platform. These functionalities include: 1) for text processing: emotion and sentiment recognition; 2) for audio processing: emotion, age, and gender recognition; 3) for video processing: face detection and tracking, emotion recognition, facial landmark localization, head pose estimation, face alignment, and body pose estimation; and 4) for linked data: knowledge graph integration. Moreover, the MixedEmotions Toolbox is open-source and free. In this paper, we present this toolbox in the context of the existing landscape, and provide a range of detailed benchmarks on standard test-beds showing its state-of-the-art performance. Furthermore, three real-world use cases demonstrate its effectiveness, namely, emotion-driven smart TV, call center monitoring, and brand reputation analysis.
BibTeX:
@ARTICLE{buitelaar2018mixedemotions,
   author = {Paul Buitelaar and Ian Wood and Sapna Negi and
	Mihael Arcan and John P. McCrae and Andrejs Abele
	and C{\'{e}}cile Robin and Vladimir Andryushechkin
	and Housam Ziad and Hesam Sagha and Maximilian
	Schmitt and Bj{\"{o}}rn W. Schuller and J. Fernando
	S{\'{a}}nchez-Rada and Carlos A. Iglesias and
	Carlos Navarro and Andreas Giefer and Nicolaus
	Heise and Vincenzo Masucci and Francesco A. Danza
	and Ciro Caterino and Pavel Smr{\v{z}} and Michal
	Hradi{\v{s}} and Filip Povoln{\'{y}} and Marek
	Klime{\v{s}} and Pavel Mat{\v{e}}jka and Giovanni
	Tummarello},
   title = {MixedEmotions: An Open-Source Toolbox for
	Multimodal Emotion Analysis},
   pages = {2454--2465},
   journal = {IEEE Transactions on Multimedia},
   volume = {20},
   number = {9},
   year = {2018},
   ISSN = {1520-9210},
   doi = {10.1109/TMM.2018.2798287},
   language = {english},
   url = {https://ieeexplore.ieee.org/document/8269329/?arnumber=8269329}
}