A Philosophically-Informed Contribution to the Generalization Problem of Neural Natural Language Inference: Shallow Heuristics, Bias, and the Varieties of Inference
Type
journal article
Date Issued
2022
Abstract (En)
Transformer-based pre-trained language models (PLMs) currently dominate the field of Natural Language Inference (NLI). It is also becoming increasingly clear that these models might not be learning the actual underlying task, namely NLI, during training. Rather, they learn what is often called bias, or shallow heuristics, leading to the problem of generalization. In this article, building on the philosophy of logic, we discuss the central concepts in which this problem is couched, we survey the proposed solutions, including those based on natural logic, and we propose our own dataset, based on syllogisms, to contribute to addressing the problem.
Language
English
HSG Classification
contribution to scientific community
HSG Profile Area
None
Refereed
Yes
Publisher
Association for Computational Linguistics
Publisher place
Galway, Ireland
Start page
38
End page
50
Official URL
Subject(s)
Division(s)
Eprints ID
268410
File(s)
Open Access
Name
paper_07_logicbert_merged_naloma_cameraready.pdf
Size
540.31 KB
Format
Adobe PDF
Checksum (MD5)
2b80e84cf815e44956b0a9e2cdc56649