A comparison of emotion annotation schemes and a new annotated data set

    Research output: Chapter in Book or Conference Publication/Proceeding › Conference Publication › peer-review

    12 Citations (Scopus)

    Abstract

    While the recognition of positive/negative sentiment in text is an established task with many standard data sets and well-developed methodologies, the recognition of more nuanced affect has received less attention; in particular, very few publicly available gold-standard annotated resources exist. To address this lack, we present a series of emotion annotation studies on tweets, culminating in a publicly available collection of 2,019 tweets with scores on four emotion dimensions: valence, arousal, dominance and surprise, following the emotion representation model identified by Fontaine et al. (2007). Further, we compare relative vs. absolute annotation schemes. We find improved annotator agreement with a relative annotation scheme (comparisons) on a dimensional emotion model over a categorical annotation scheme on Ekman's six basic emotions (Ekman et al., 1987); however, when we compare inter-annotator agreement for comparisons with agreement for a rating-scale annotation scheme (both with the same dimensional emotion model), we find improved inter-annotator agreement with rating scales, challenging a common belief that relative judgements are more reliable.
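    The abstract compares inter-annotator agreement across annotation schemes but does not name the agreement statistic it uses. For dimensional ratings such as valence or arousal, one widely used interval-scale statistic is Krippendorff's alpha; the sketch below is a minimal pure-Python implementation for illustration only (the function name and data layout are assumptions, not taken from the paper).

    ```python
    from itertools import combinations

    def krippendorff_alpha_interval(units):
        """Krippendorff's alpha for interval-scale ratings.

        units: list of lists; each inner list holds the ratings that
        the annotators gave to one item. Items with fewer than two
        ratings are not pairable and are skipped.
        """
        units = [u for u in units if len(u) >= 2]
        values = [v for u in units for v in u]
        n = len(values)

        def delta(a, b):
            # Interval distance metric: squared difference.
            return (a - b) ** 2

        # Observed disagreement: ordered pairs of ratings within each
        # item, normalised by the number of ratings minus one.
        d_o = 2 * sum(
            sum(delta(a, b) for a, b in combinations(u, 2)) / (len(u) - 1)
            for u in units
        ) / n

        # Expected disagreement: ordered pairs over the pooled ratings.
        d_e = 2 * sum(delta(a, b) for a, b in combinations(values, 2)) / (n * (n - 1))

        return 1.0 - d_o / d_e
    ```

    Perfect agreement yields 1.0, chance-level agreement yields 0, and systematic disagreement can go negative.
    
    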

    Original language: English
    Title of host publication: LREC 2018 - 11th International Conference on Language Resources and Evaluation
    Editors: Nicoletta Calzolari, Khalid Choukri, Christopher Cieri, Thierry Declerck, Sara Goggi, Koiti Hasida, Hitoshi Isahara, Bente Maegaard, Joseph Mariani, Helene Mazo, Asuncion Moreno, Jan Odijk, Stelios Piperidis, Takenobu Tokunaga
    Publisher: European Language Resources Association (ELRA)
    Pages: 1197-1202
    Number of pages: 6
    ISBN (Electronic): 9791095546009
    Publication status: Published - 2018
    Event: 11th International Conference on Language Resources and Evaluation, LREC 2018 - Miyazaki, Japan
    Duration: 7 May 2018 – 12 May 2018

    Publication series

    Name: LREC 2018 - 11th International Conference on Language Resources and Evaluation

    Conference

    Conference: 11th International Conference on Language Resources and Evaluation, LREC 2018
    Country/Territory: Japan
    City: Miyazaki
    Period: 7/05/18 – 12/05/18

    Keywords

    • Affective computing
    • Annotation
    • Annotator agreement
    • Emotion
    • Social media
