[Lukáš Mikula]: Think Twice Before You Answer: Mitigating Biases of Question Answering Models (24.3.2022)
Abstract
Our work evaluates the impact of data-driven methods for eliminating biases learned by state-of-the-art neural language models, using extractive question answering as a case study.
We set the following objectives:
1. Survey the literature for the known biases that question-answering language models are reported to exhibit in their predictions. Where applicable, extend the list of known biases with novel ones.
2. Quantify the impact of these biases on the output quality of the selected language model (see the probing sketch after this list).
3. Integrate into the training process a selected method for mitigating the impact of these biases on model predictions (see the loss-function sketch after this list).
4. Evaluate the impact of the implemented bias-mitigation method.
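To illustrate objective 2, the following is a minimal probing sketch in the spirit of the adversarial-distractor attack of Jia & Liang (2017) [1]: a fabricated sentence that mimics the question's surface form is appended to the context, and we check whether the model's answer flips. The model checkpoint and the example texts are our illustrative assumptions, not part of the thesis itself.

    # Probe a SQuAD-style extractive QA model with an appended distractor.
    # The checkpoint name is an assumption; any extractive QA model works.
    from transformers import pipeline

    qa = pipeline("question-answering",
                  model="distilbert-base-cased-distilled-squad")

    question = "Who wrote the novel Dracula?"
    context = "Dracula is an 1897 Gothic horror novel written by Bram Stoker."
    # Fabricated distractor mimicking the question's wording (Jia & Liang style).
    distractor = "John Smith wrote the novel Carmilla."

    clean = qa(question=question, context=context)
    attacked = qa(question=question, context=context + " " + distractor)

    print(f"clean:    {clean['answer']} (score {clean['score']:.3f})")
    print(f"attacked: {attacked['answer']} (score {attacked['score']:.3f})")
    # A model relying on lexical-overlap shortcuts rather than reading
    # comprehension tends to flip its answer to the distractor entity.

Aggregating the answer-flip rate over a full evaluation set yields one concrete bias metric for objective 2.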
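For objective 3, one data-driven mitigation technique reported in the literature is ensemble-based debiasing, e.g. the product-of-experts formulation of Clark et al. (2019), where the main model's predictions are multiplied with those of a frozen bias-only model during training. Whether the thesis selects this particular method is open; the sketch below assumes only span-start classification over context tokens.

    # Product-of-experts debiasing loss (Clark et al., 2019), sketched for the
    # answer-start head of an extractive QA model. Shapes and the choice of
    # method are our assumptions.
    import torch
    import torch.nn.functional as F

    def poe_loss(main_logits, bias_log_probs, targets):
        """main_logits:    (batch, seq_len) logits of the model being trained
        bias_log_probs: (batch, seq_len) log-probs of a frozen bias-only model
        targets:        (batch,) gold answer-start token indices"""
        # Adding log-probabilities multiplies the two distributions, so easy,
        # bias-solvable examples contribute little gradient to the main model.
        combined = F.log_softmax(main_logits, dim=-1) + bias_log_probs
        return F.nll_loss(F.log_softmax(combined, dim=-1), targets)

    # Smoke test with random tensors: batch of 2 questions, 10 context tokens.
    logits = torch.randn(2, 10)
    bias = F.log_softmax(torch.randn(2, 10), dim=-1)
    print(poe_loss(logits, bias, torch.tensor([3, 7])))

At inference time the bias model is discarded and only the main model's logits are used, so the mitigation changes training only.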
Readings
[1] Jia, R., & Liang, P. (2017). Adversarial examples for evaluating reading comprehension systems. https://arxiv.org/pdf/1707.07328.pdf
[2] Weissenborn, D., Wiese, G., & Seiffe, L. (2017). Making neural QA as simple as possible but not simpler. https://arxiv.org/pdf/1703.04816.pdf
[3] Wang, T., Sridhar, R., Yang, D., & Wang, X. (2021). Identifying and mitigating spurious correlations for improving robustness in NLP models. https://arxiv.org/pdf/2110.07736.pdf