A Philosophically-Informed Contribution to the Generalization Problem of Neural Natural Language Inference: Shallow Heuristics, Bias, and the Varieties of Inference

Type
journal article
Date Issued
2022
Author(s)
Gubelmann, Reto  
Niklaus, Christina  
Handschuh, Siegfried  
Abstract
Transformer-based pre-trained language models (PLMs) currently dominate the field of Natural Language Inference (NLI). It is also becoming increasingly clear that these models might not be learning the actual underlying task, namely NLI, during training. Rather, they learn what is often called bias, or shallow heuristics, leading to the problem of generalization. In this article, building on the philosophy of logic, we discuss the central concepts in which this problem is couched, we survey the proposed solutions, including those based on natural logic, and we propose our own dataset based on syllogisms to contribute to addressing the problem.
Language
English
HSG Classification
contribution to scientific community
HSG Profile Area
None
Refereed
Yes
Publisher
Association for Computational Linguistics
Publisher place
Galway, Ireland
Start page
38
End page
50
Official URL
https://aclanthology.org/2022.naloma-1.5
URL
https://www.alexandria.unisg.ch/handle/20.500.14171/109330
Subject(s)
computer science
Division(s)
ICS - Institute of Co...
Eprints ID
268410
File(s)
Access
open access
Name
paper_07_logicbert_merged_naloma_cameraready.pdf
Size
540.31 KB
Format
Adobe PDF
Checksum (MD5)
2b80e84cf815e44956b0a9e2cdc56649