Christina Niklaus
Title: Prof. Dr.
Last name: Niklaus
First name: Christina
Email: christina.niklaus@unisg.ch
Phone: +41 71 224 3472
Publications: showing 1-10 of 21
Publication: Capturing the Varieties of Natural Language Inference: A Systematic Survey of Existing Datasets and Two Novel Benchmarks (2023-11-20)
Co-author: Katis, Ioannis
Abstract: Transformer-based pre-trained language models currently dominate the field of Natural Language Inference (NLI). We first survey existing NLI datasets and systematize them according to the kinds of logical inference they distinguish. This reveals two gaps in the current dataset landscape, which we propose to address with one dataset developed in argumentative writing research and a new one building on syllogistic logic. Throughout, we also explore the promise of ChatGPT. Our results show that our new datasets pose a challenge to existing methods and models, including ChatGPT, and that tackling this challenge via fine-tuning yields only partly satisfactory results.
Type: journal article
Journal: Journal of Logic, Language and Information
Publication: A Canonical Context-Preserving Representation for Open IE: Extracting Semantically Typed Relational Tuples from Complex Sentences (Elsevier, 2023-05-23)
Co-author: Freitas, André
Abstract: Modern systems that deal with inference in texts need automated methods to extract meaning representations (MRs) from text at scale. Open Information Extraction (IE) is a prominent way of comprehensively extracting all potential relations from a given text. Previous work in this area has mainly focused on the extraction of isolated relational tuples. Ignoring the cohesive nature of texts, where important contextual information is spread across clauses or sentences, state-of-the-art Open IE approaches are thus prone to generating a loose arrangement of tuples that lack the expressiveness needed to infer the true meaning of complex assertions. To overcome this limitation, we present a method that allows existing Open IE systems to enrich their output with additional meta information. By leveraging the semantic hierarchy of minimal propositions generated by the discourse-aware Text Simplification (TS) approach presented in Niklaus et al. (2019), we propose a mechanism for extracting semantically typed relational tuples from complex source sentences. Based on this novel type of output, we introduce a lightweight semantic representation for Open IE in the form of normalized and context-preserving relational tuples. It extends the shallow semantic representation of state-of-the-art approaches in the form of predicate-argument structures by capturing intra-sentential rhetorical structures and hierarchical relationships between the relational tuples. In that way, the semantic context of the extracted tuples is preserved, resulting in more informative and coherent predicate-argument structures that are easier to interpret. In addition, a comparative analysis shows that the semantic hierarchy of minimal propositions benefits Open IE approaches in a second dimension: the canonical structure of the simplified sentences is easier to process and analyze, and thus facilitates the extraction of relational tuples, improving the precision (by up to 32%) and recall (by up to 30%) of the extracted relations on a large benchmark corpus.
Type: journal article
Journal: Knowledge-Based Systems
Issue: 268
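To make the idea of context-preserving, semantically typed relational tuples more concrete, here is a minimal Python sketch of such a representation. The class and field names (ContextualTuple, semantic_type, contexts) are illustrative assumptions, not the actual output format of the system described above.

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Illustrative sketch only: the field names below are assumptions,
# not the authors' actual tuple format.

@dataclass
class RelationalTuple:
    """A shallow Open IE tuple: (arg1, predicate, arg2)."""
    arg1: str
    predicate: str
    arg2: str

@dataclass
class ContextualTuple(RelationalTuple):
    """Extends the shallow tuple with context-preserving meta information."""
    # Semantic/rhetorical type of the proposition (e.g. CORE, CONTRAST, CAUSE).
    semantic_type: str = "CORE"
    # Contextual propositions attached to this tuple (temporal, causal, ...).
    contexts: List["ContextualTuple"] = field(default_factory=list)
    # Link to a parent tuple in the hierarchy of minimal propositions.
    parent: Optional["ContextualTuple"] = None

# "Although it was raining, the match continued." might then yield:
core = ContextualTuple("the match", "continued", "")
core.contexts.append(
    ContextualTuple("it", "was raining", "", semantic_type="CONTRAST", parent=core)
)
```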
Publication: A Philosophically-Informed Contribution to the Generalization Problem of Neural Natural Language Inference: Shallow Heuristics, Bias, and the Varieties of Inference (Association for Computational Linguistics, 2022)
Type: journal article
Publication:
Type: journal article
Volume: 1
Scopus citations: 1
Publication: Supporting Cognitive and Emotional Empathic Writing of Students
Abstract: We present an annotation approach to capturing emotional and cognitive empathy in student-written peer reviews on business models in German. We propose an annotation scheme that allows us to model emotional and cognitive empathy scores based on three types of review components. We also conducted an annotation study with three annotators on 92 student essays to evaluate our annotation scheme. The obtained inter-rater agreement of α = 0.79 for the components and π = 0.41 for the empathy scores indicates that the proposed annotation scheme successfully guides annotators to substantial to moderate agreement. Moreover, we trained predictive models to detect the annotated empathy structures and embedded them in an adaptive writing support system that gives students individual empathy feedback independent of an instructor, time, and location. We evaluated our tool in a peer learning exercise with 58 students and found promising results for perceived empathy skill learning, perceived feedback accuracy, and intention to use. Finally, we present our freely available corpus of 500 empathy-annotated, student-written peer reviews on business models, together with our annotation guidelines, to encourage future research on the design and development of empathy support systems.
Type: journal article
Journal: The Joint Conference of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (ACL-IJCNLP 2021)
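As a side note on the agreement figures reported above, the short Python sketch below shows how a chance-corrected agreement coefficient of this kind can be computed. It implements Scott's pi for two annotators on invented toy labels; it is not the study's code or data, and the three-annotator α reported in the paper would require a generalized coefficient such as Krippendorff's alpha.

```python
from collections import Counter

# Minimal illustration (not the authors' code): Scott's pi, a chance-corrected
# agreement coefficient for two annotators over nominal labels.

def scotts_pi(labels_a, labels_b):
    """Return Scott's pi for two annotators who labelled the same items."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    # Observed agreement: share of items both annotators labelled identically.
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected agreement: squared pooled label proportions (Scott's assumption).
    pooled = Counter(labels_a) + Counter(labels_b)
    expected = sum((count / (2 * n)) ** 2 for count in pooled.values())
    return (observed - expected) / (1 - expected)

# Hypothetical empathy-score annotations, not taken from the study:
print(scotts_pi(["high", "low", "high", "mid"], ["high", "low", "mid", "mid"]))
```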
Publication: When Truth Matters - Addressing Pragmatic Categories in Natural Language Inference (NLI) by Large Language Models (LLMs) (2023-07)
Co-author: Kalouli, Aikaterini-Lida
Abstract: In this paper, we focus on the ability of large language models (LLMs) to accommodate different pragmatic sentence types, such as questions, commands, and sentence fragments, for natural language inference (NLI). On the commonly used notion of logical inference, nothing can be inferred from a question, a command, or an incomprehensible sentence fragment. We find that MNLI, arguably the most important NLI dataset, and hence models fine-tuned on it, are insensitive to this fact. Using a symbolic semantic parser, we develop, and make publicly available, fine-tuning datasets designed specifically to address this issue, with promising results. We also make a first exploration of ChatGPT's concept of entailment.
Type: conference paper
Journal: Proceedings of the 12th Joint Conference on Lexical and Computational Semantics (*SEM 2023)
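The minimal sketch below illustrates the kind of behaviour the paper examines: an off-the-shelf MNLI-fine-tuned model is queried with a question as the premise, from which, on the logical notion of inference, nothing should follow. The model choice (roberta-large-mnli via Hugging Face Transformers) is an assumption for this illustration, not one of the models or fine-tuning datasets from the paper.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Assumed stand-in model: an off-the-shelf RoBERTa fine-tuned on MNLI.
MODEL = "roberta-large-mnli"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL)

premise = "Is the meeting at noon?"      # a question: logically, nothing follows from it
hypothesis = "The meeting is at noon."

inputs = tokenizer(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)[0]

# Print the probability the model assigns to each NLI label.
for idx, p in enumerate(probs.tolist()):
    print(f"{model.config.id2label[idx]}: {p:.3f}")
```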
Publication: Computer Supported Argumentation Learning: Design of a Learning Scenario in Academic Writing by Means of a Conjecture Map (2023)
Co-author: Panjaburee, Patcharin
Abstract: In academic writing, the competency to argue is important. However, first-year students often have difficulties constructing good arguments. Advances in natural language processing (NLP) have made it possible to better analyze the writing quality of texts. New tools have emerged that can give students individual feedback on their texts and the structure of their arguments. While these argumentation learning support tools can help students create better texts, using them in an academic context also carries risks. Learning scenarios are needed that promote argumentation competency using argumentation tools while also making students aware of their limitations. To address this issue, this paper investigates how a learning design with an argumentation learning support tool can be developed to increase the argumentation competency of first-year students. The conjecture-mapping technique was used to visualize our assumptions and illustrate the developed learning design. As part of a first design cycle, the learning design was tested with 80 students in seven academic writing classes at the University of St. Gallen in Switzerland. Preliminary findings suggest that the learning design might help improve students' argumentation competency as well as their data literacy (in relation to argumentation tools). However, further research is necessary to confirm or reject our hypotheses.
Type: conference paper
Journal: Proceedings of the 15th International Conference on Computer Supported Education
Volume: 1
Publication: Micro- and Macro-Level Features of NLP-Based Writing Tools in Higher Education (2022-12-02)
Co-authors: Panjaburee, Patcharin; Pichitpornchai, Chailerd
Type: conference paper
Publication: Shallow Discourse Parsing for Open Information Extraction and Text Simplification (International Conference on Computational Linguistics, 2022-10)
Co-author: Freitas, André
Type: conference paper
Publication: AL: An Adaptive Learning Support System for Argumentation Skills (ACM CHI Conference on Human Factors in Computing Systems, 2020-04)
Abstract: Recent advances in Natural Language Processing (NLP) offer the opportunity to analyze the argumentation quality of texts. This can be leveraged to provide students with individual and adaptive feedback in their personal learning journey. To test whether individual feedback on their argumentation helps students write more convincing texts, we developed AL, an adaptive IT tool that provides students with feedback on the argumentation structure of a given text. In a study with 54 students, we compared AL to a proven argumentation support tool. We found that students using AL wrote more convincing texts with better formal quality of argumentation than those using the traditional approach. The measured technology acceptance provided promising results for using this tool as a feedback application in different learning settings. The results suggest that learning applications based on NLP may be beneficial for developing students' writing and reasoning in traditional learning settings.
Type: conference paper
Journal: ACM CHI Conference on Human Factors in Computing Systems
Scopus citations: 39