English-to-Low-Resource Translation: A Multimodal Approach for Hindi, Malayalam, Bengali, and Hausa

Research output: Chapter in Book or Conference Publication/Proceeding › Conference Publication › peer-review

1 Citation (Scopus)

Abstract

Multimodal machine translation leverages multiple data modalities to enhance translation quality, particularly for low-resource languages. This paper presents a multimodal model that integrates visual information with textual data to improve translation accuracy from English to Hindi, Malayalam, Bengali, and Hausa. The approach employs a gated fusion mechanism to combine the outputs of the textual and visual encoders, enabling more nuanced translations that consider both linguistic and contextual visual cues. The model's performance was evaluated against a text-only machine translation baseline using BLEU, ChrF2, and TER. Experimental results demonstrate that the multimodal approach consistently outperforms the text-only baseline, highlighting the potential of integrating visual information into low-resource language translation tasks.
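The abstract's gated fusion mechanism can be sketched as follows. This is an illustrative reconstruction, not the paper's actual implementation: the function name, the single linear gate, and the assumption that text and image features are projected to the same dimensionality are all assumptions for the sake of the example. A sigmoid gate decides, per feature, how much of the visual encoder's output to blend into the textual representation:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_fusion(h_text, h_img, W, b):
    """Blend text and image features with a learned sigmoid gate.

    h_text, h_img: arrays of shape (..., d) — encoder outputs projected
                   to a common dimension d (an assumption of this sketch).
    W, b:          gate parameters, shapes (2*d, d) and (d,).
    """
    # Gate g in (0, 1) is computed from the concatenated features;
    # g -> 0 keeps the textual signal, g -> 1 admits the visual signal.
    g = sigmoid(np.concatenate([h_text, h_img], axis=-1) @ W + b)
    return (1 - g) * h_text + g * h_img

# Toy usage: batch of 3 token vectors with d = 4
rng = np.random.default_rng(0)
d = 4
W, b = rng.standard_normal((2 * d, d)), np.zeros(d)
fused = gated_fusion(rng.standard_normal((3, d)),
                     rng.standard_normal((3, d)), W, b)
print(fused.shape)  # (3, 4)
```

Because the gate is element-wise, the model can rely on visual context for some feature dimensions (e.g. when the text is ambiguous) while ignoring it for others, which is the intuition behind using gated fusion rather than simple concatenation.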

Original language: English
Title of host publication: WMT 2024 - 9th Conference on Machine Translation, Proceedings of the Conference
Editors: Barry Haddow, Tom Kocmi, Philipp Koehn, Christof Monz
Publisher: Association for Computational Linguistics
Pages: 815-822
Number of pages: 8
ISBN (Electronic): 9798891761797
Publication status: Published - 2024
Event: 9th Conference on Machine Translation, WMT 2024 - Miami, United States
Duration: 15 Nov 2024 - 16 Nov 2024

Publication series

Name: Conference on Machine Translation - Proceedings
Volume: 2024-November
ISSN (Electronic): 2768-0983

Conference

Conference: 9th Conference on Machine Translation, WMT 2024
Country/Territory: United States
City: Miami
Period: 15/11/24 - 16/11/24
