TY - GEN
T1 - Enhancing Multiple-Choice Question Answering with Causal Knowledge
AU - Dalal, Dhairya
AU - Arcan, Mihael
AU - Buitelaar, Paul
N1 - Publisher Copyright:
© 2021 Association for Computational Linguistics.
PY - 2021
Y1 - 2021
N2 - The task of causal question answering aims to reason about causes and effects over a provided real or hypothetical premise. Recent approaches have converged on using transformer-based language models to solve question answering tasks. However, pretrained language models often struggle when external knowledge is not present in the premise or when additional context is required to answer the question. To the best of our knowledge, no prior work has explored the efficacy of augmenting pretrained language models with external causal knowledge for multiple-choice causal question answering. In this paper, we present novel strategies for the representation of causal knowledge. Our empirical results demonstrate the efficacy of augmenting pretrained models with external causal knowledge. We show improved performance on the COPA (Choice of Plausible Alternatives) and WIQA (What If Reasoning Over Procedural Text) benchmark tasks. On the WIQA benchmark, our approach is competitive with the state-of-the-art and exceeds it within the evaluation subcategories of In-Paragraph and Out-of-Paragraph perturbations.
UR - https://www.scopus.com/pages/publications/85121150430
M3 - Conference Publication
AN - SCOPUS:85121150430
T3 - Deep Learning Inside Out: 2nd Workshop on Knowledge Extraction and Integration for Deep Learning Architectures, DeeLIO 2021 - Proceedings, co-located with the Annual Conference of the North American Chapter of the Association for Computational Linguistics, NAACL-HLT 2021
SP - 70
EP - 80
BT - Deep Learning Inside Out
A2 - Agirre, Eneko
A2 - Apidianaki, Marianna
A2 - Vulic, Ivan
PB - Association for Computational Linguistics (ACL)
T2 - 2nd Workshop on Knowledge Extraction and Integration for Deep Learning Architectures: Deep Learning Inside Out, DeeLIO 2021
Y2 - 10 June 2021
ER -