MixedEmotions: An open-source toolbox for multimodal emotion analysis

  • Paul Buitelaar
  • Ian D. Wood
  • Sapna Negi
  • Mihael Arcan
  • John P. McCrae
  • Andrejs Abele
  • Cécile Robin
  • Vladimir Andryushechkin
  • Housam Ziad
  • Hesam Sagha
  • Maximilian Schmitt
  • Björn W. Schuller
  • J. Fernando Sánchez-Rada
  • Carlos A. Iglesias
  • Carlos Navarro
  • Andreas Giefer
  • Nicolaus Heise
  • Vincenzo Masucci
  • Francesco A. Danza
  • Ciro Caterino
  • Pavel Smrž
  • Michal Hradiš
  • Filip Povolný
  • Marek Klimeš
  • Pavel Matějka
  • Giovanni Tummarello

Research output: Contribution to Journal (Peer & Non Peer) › Article › peer-review

36 Citations (Scopus)

Abstract

Recently, there has been an increasing tendency to embed functionality for recognizing emotions from user-generated media content into automated systems such as call-centre operations, recommendations, and assistive technologies, providing richer and more informative user and content profiles. To date, however, adding this functionality has been a tedious, costly, and time-consuming effort, requiring the identification and integration of diverse tools with diverse interfaces as required by the use case at hand. The MixedEmotions Toolbox addresses this need by providing tools for text, audio, video, and linked data processing within an easily integrable plug-and-play platform. These functionalities include: 1) for text processing: emotion and sentiment recognition; 2) for audio processing: emotion, age, and gender recognition; 3) for video processing: face detection and tracking, emotion recognition, facial landmark localization, head pose estimation, face alignment, and body pose estimation; and 4) for linked data: knowledge graph integration. Moreover, the MixedEmotions Toolbox is open-source and free. In this paper, we present this toolbox in the context of the existing landscape, and provide a range of detailed benchmarks on standard test-beds showing its state-of-the-art performance. Furthermore, three real-world use cases show its effectiveness, namely, emotion-driven smart TV, call center monitoring, and brand reputation analysis.

Original language: English
Article number: 8269329
Pages (from-to): 2454-2465
Number of pages: 12
Journal: IEEE Transactions on Multimedia
Volume: 20
Issue number: 9
DOIs
Publication status: Published - Sep 2018

Keywords

  • Emotion analysis
  • affective computing
  • audio processing
  • linked data
  • open source toolbox
  • text processing
  • video processing

Authors (Note for portal: view the doc link for the full list of authors)

  • Paul Buitelaar and Ian D. Wood and Sapna Negi and Mihael Arcan and John P. McCrae and Andrejs Abele and Cécile Robin and Vladimir Andryushechkin and Housam Ziad and Hesam Sagha and J. Fernando Sánchez-Rada and Carlos A. Iglesias and Carlos Navarro and Andreas Giefer and Nicolaus Heise and Vincenzo Masucci and Francesco A. Danza and Ciro Caterino and Pavel Smrž and Michal Hradiš and Filip Povolný and Marek Klimeš and Pavel Matějka and Giovanni Tummarello
