Meme Sentiment Analysis Enhanced with Multimodal Spatial Encoding and Facial Embedding

Research output: Chapter in Book/Conference Proceedings › Conference Publication › peer-review

Abstract

Internet memes are characterised by the interspersing of text amongst visual elements. State-of-the-art multimodal meme classifiers do not account for the relative positions of these elements across the two modalities, despite the latent meaning associated with where text and visual elements are placed. On two meme sentiment classification datasets, we systematically show performance gains from incorporating the spatial positions of visual objects, faces, and text clusters extracted from memes. We also present facial embedding as an impactful enhancement to image representation in a multimodal meme classifier. Finally, we show that incorporating this spatial information allows our fully automated approaches to outperform their corresponding baselines, which rely on additional human validation of OCR-extracted text.
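The abstract describes encoding the spatial positions of detected objects, faces, and text clusters alongside their modality features. The paper's exact encoding is not given here; the sketch below illustrates one common approach under assumed conventions: each detection's bounding box `(x, y, w, h)` is normalised by the image dimensions into a position vector, which is then concatenated with that element's feature vector before fusion. All function names and the six-dimensional layout are hypothetical.

```python
import numpy as np


def spatial_encoding(box, img_w, img_h):
    """Normalise a bounding box (x, y, w, h) into a position vector in [0, 1].

    Hypothetical illustration of spatial encoding, not the authors'
    exact formulation. Encodes the top-left corner, box size, and
    box centre, each scaled by the image dimensions.
    """
    x, y, w, h = box
    return np.array([
        x / img_w,                # normalised left edge
        y / img_h,                # normalised top edge
        w / img_w,                # normalised width
        h / img_h,                # normalised height
        (x + w / 2) / img_w,      # normalised centre x
        (y + h / 2) / img_h,      # normalised centre y
    ])


def fuse_with_position(features, box, img_w, img_h):
    """Concatenate an element's feature vector (e.g. a facial embedding
    or a text-cluster embedding) with its spatial encoding."""
    return np.concatenate([features, spatial_encoding(box, img_w, img_h)])
```

For example, a face detected at `(10, 20, 50, 40)` in a 100x200 meme yields a position vector starting `[0.1, 0.1, 0.5, 0.2, ...]`, and fusing it with a 4-dimensional embedding produces a 10-dimensional input for the downstream classifier.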
Original language: English (Ireland)
Title of host publication: Artificial Intelligence and Cognitive Science: The 30th Irish Conference
Place of publication: Cork, Ireland
Publication status: Published - 1 Dec 2022

Authors

  • Hazman, M.; McKeever, S.; Griffith, J.

