Abstract
Despite advances in neural machine translation, word sense disambiguation remains challenging, particularly when textual context is limited. Multimodal machine translation enhances text-only models by integrating visual information, but its impact varies across translations. This study focuses on ambiguous sentences, which stand to benefit most from visual cues, to investigate how effectively visual information can be exploited and to improve hybrid multimodal and text-only translation approaches. We use Latent Semantic Analysis and Sentence-BERT to extract context vectors from the British National Corpus, enabling an assessment of semantic diversity. Our approach improves translation quality for English-German and English-French on the Multi30k dataset, as measured by BLEU, chrF2, and TER.
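The abstract's LSA-based context vectors can be sketched roughly as follows. This is an illustrative assumption, not the paper's exact method: a toy corpus stands in for the British National Corpus, and the diversity measure (mean pairwise cosine distance of context vectors) is one plausible way to quantify how varied a word's contexts are.

```python
# Hedged sketch: LSA context vectors for a target word's contexts,
# scored for semantic diversity. Assumes scikit-learn and NumPy;
# the toy sentences below stand in for BNC contexts of "bank".
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_distances

contexts = [
    "the bank approved the loan application",
    "the river bank was covered in reeds",
    "she deposited cash at the bank branch",
    "fishermen sat along the muddy bank",
]

# LSA: TF-IDF term-document matrix reduced to a low-rank latent space.
tfidf = TfidfVectorizer().fit_transform(contexts)
vectors = TruncatedSVD(n_components=2, random_state=0).fit_transform(tfidf)

# Semantic diversity: mean pairwise cosine distance between context
# vectors. A higher score suggests the word occurs in more varied
# senses, i.e. the sentence is more likely to be ambiguous.
pairs = np.triu_indices(len(contexts), k=1)
diversity = cosine_distances(vectors)[pairs].mean()
print(round(float(diversity), 3))
```

In a hybrid pipeline, such a score could route high-diversity (ambiguous) sentences to the multimodal model and the rest to the text-only model; the routing threshold here would be a tuned hyperparameter.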
| Original language | English |
|---|---|
| Pages | 154-166 |
| Number of pages | 13 |
| Publication status | Published - 2024 |
| Event | 16th Conference of the Association for Machine Translation in the Americas, AMTA 2024, Hybrid, Chicago, United States. Duration: 30 Sep 2024 → 2 Oct 2024 |
Conference
| Conference | 16th Conference of the Association for Machine Translation in the Americas, AMTA 2024 |
|---|---|
| Country/Territory | United States |
| City | Hybrid, Chicago |
| Period | 30/09/24 → 2/10/24 |