TY - GEN
T1 - Automatic Sentiment Labelling of Multimodal Data
AU - Biswas, Sumana
AU - Young, Karen
AU - Griffith, Josephine
N1 - Publisher Copyright:
© 2023, The Author(s), under exclusive license to Springer Nature Switzerland AG.
PY - 2023
Y1 - 2023
N2 - This study investigates the challenging problem of automatically providing sentiment labels for training and testing multimodal data containing both image and textual information for supervised machine learning. Because both the image and text components, individually and collectively, convey sentiment, assessing the sentiment of multimodal data typically requires both image and text information. Consequently, the majority of studies classify sentiment by combining image and text features (‘Image+Text-features’). In this study, we propose ‘Combined-Text-Features’ that incorporate the object names and attributes identified in an image, as well as any superimposed or captioned text accompanying that image, and utilize these text features to classify the sentiment of multimodal data. Building on our prior research, we employ the Afinn labelling method to automatically assign sentiment labels to the ‘Combined-Text-Features’. We test whether classifier models using these ‘Combined-Text-Features’ with Afinn labelling can provide results comparable to those obtained using other multimodal features and other labelling approaches (human labelling). CNN, BiLSTM, and BERT models are used for experiments on two multimodal datasets. The experimental results demonstrate the usefulness of the ‘Combined-Text-Features’ as a representation of multimodal data for the sentiment classification task. The results also suggest that the Afinn labelling approach can be a feasible alternative to human labelling for providing sentiment labels.
AB - This study investigates the challenging problem of automatically providing sentiment labels for training and testing multimodal data containing both image and textual information for supervised machine learning. Because both the image and text components, individually and collectively, convey sentiment, assessing the sentiment of multimodal data typically requires both image and text information. Consequently, the majority of studies classify sentiment by combining image and text features (‘Image+Text-features’). In this study, we propose ‘Combined-Text-Features’ that incorporate the object names and attributes identified in an image, as well as any superimposed or captioned text accompanying that image, and utilize these text features to classify the sentiment of multimodal data. Building on our prior research, we employ the Afinn labelling method to automatically assign sentiment labels to the ‘Combined-Text-Features’. We test whether classifier models using these ‘Combined-Text-Features’ with Afinn labelling can provide results comparable to those obtained using other multimodal features and other labelling approaches (human labelling). CNN, BiLSTM, and BERT models are used for experiments on two multimodal datasets. The experimental results demonstrate the usefulness of the ‘Combined-Text-Features’ as a representation of multimodal data for the sentiment classification task. The results also suggest that the Afinn labelling approach can be a feasible alternative to human labelling for providing sentiment labels.
KW - Automatic labelling
KW - Deep learning
KW - Multimodal data
KW - NLP
KW - Sentiment analysis
UR - https://www.scopus.com/pages/publications/85172664331
U2 - 10.1007/978-3-031-37890-4_8
DO - 10.1007/978-3-031-37890-4_8
M3 - Conference Publication
AN - SCOPUS:85172664331
SN - 9783031378898
T3 - Communications in Computer and Information Science
SP - 154
EP - 175
BT - Data Management Technologies and Applications - 10th International Conference, DATA 2021, and 11th International Conference, DATA 2022, Revised Selected Papers
A2 - Cuzzocrea, Alfredo
A2 - Gusikhin, Oleg
A2 - Hammoudi, Slimane
A2 - Quix, Christoph
PB - Springer Science and Business Media Deutschland GmbH
T2 - Proceedings of the 10th International Conference and 11th International Conference on Data Management Technologies and Applications, DATA 2021 and DATA 2022
Y2 - 11 July 2022 through 13 July 2022
ER -