Thiemo Wambsganss
Former Member
Title
Prof. Dr.
Last Name
Wambsganss
First Name
Thiemo
Email
thiemo.wambsganss@unisg.ch
Phone
+41 71 224 3234
Homepage
Google Scholar
Now showing 1 - 10 of 29
-
Publication
Conversational Agents for Information Retrieval in the Education Domain: A User-Centered Design Investigation (2022-11-11)
Text-based conversational agents (CAs) are widely deployed across a number of daily tasks, including information retrieval. However, most existing agents follow a default design that disregards user needs and preferences, ultimately leading to a lack of usage and an unsatisfying user experience. To better understand how CAs can be designed in order to lead to effective system use, we deduced relevant design requirements from both literature and 13 user interviews. We built and tested a question-answering, text-based CA for an information retrieval task in an education scenario. Results from our experimental test with 41 students indicate that following a user-centered design has a significant positive effect on enjoyment and trust in a CA as opposed to deploying a default CA. If not designed with the user in mind, CAs are not necessarily more beneficial than traditional question-answering systems. Beyond practical implications for effective CA design, this paper points towards key challenges and potential research avenues when deploying social cues for CAs.
Type: journal article
Journal: Proceedings of the ACM on Human-Computer Interaction (PACMHCI)
Volume: 6
Issue: CSCW2
-
Publication
Improving Students' Argumentation Learning with Adaptive Self-Evaluation Nudging (2022-11-11)
Käser, Tanja
Recent advances in computational linguistics can be leveraged to nudge students with adaptive self-evaluation based on their argumentation skill level. To investigate how individual argumentation self-evaluation helps students write more convincing texts, we designed an intelligent argumentation writing support system called ArgumentFeedback based on nudging theory and evaluated it in a series of three qualitative and quantitative studies with a total of 83 students. We found that students who received a self-evaluation nudge wrote more convincing texts with a better quality of formal and perceived argumentation compared to the control group. The measured self-efficacy and technology acceptance provide promising results for embedding adaptive argumentation writing support tools, in combination with digital nudging, in traditional learning settings to foster self-regulated learning. Our results indicate that the design of nudging-based learning applications for self-regulated learning, combined with computational methods for argumentation self-evaluation, is beneficial for fostering students' writing skills.
Type: journal article
Journal: Proceedings of the ACM on Human-Computer Interaction (PACMHCI)
Volume: 6
Issue: CSCW2
DOI: 10.1145/3555633
Scopus© Citations 2
-
Publication
Enhancing argumentative writing with automated feedback and social comparison nudging
The advantages offered by natural language processing (NLP) and machine learning enable students to receive automated feedback on their argumentation skills, independent of educator, time, and location. Although there is a growing amount of literature on formative argumentation feedback, empirical evidence on the effects of adaptive feedback mechanisms and novel NLP approaches to enhance argumentative writing remains scarce. To help fill this gap, the aim of the present study is to investigate whether automated feedback and social comparison nudging enable students to internalize and improve logical argumentation writing abilities in an undergraduate business course. We conducted a mixed-methods study to investigate the impact of argumentative writing on 71 students in a field experiment. Students in treatment group 1 completed their assignment while receiving automated feedback, whereas students in treatment group 2 completed the same assignment while receiving automated feedback with a social comparison nudge that indicated how other students performed on the same assignment. Students in the control group received generalized feedback based on rules of syntax. We found that participants who received automated argumentation feedback with a social comparison nudge wrote more convincing texts with higher-quality argumentation compared to the two benchmark groups (p < 0.05). The measured self-efficacy, perceived ease of use, and qualitative data provide valuable insights that help explain this effect. The results suggest that embedding automated feedback in combination with social comparison nudges enables students to increase their argumentative writing skills by triggering psychological processes. Receiving only automated feedback in the form of in-text argumentative highlighting without any further guidance appears not to significantly influence students' writing abilities when compared to syntactic feedback.
Type: journal article
Journal: Computers & Education
Volume: 191
Issue: 104644
Scopus© Citations 2
-
Publication
Designing Conversational Evaluation Tools: A Comparison of Text and Voice Modalities to Improve Response Quality in Course Evaluations (Association for Computing Machinery, 2022-11-11)
Käser, Tanja; Koedinger, Kenneth R.
Conversational agents (CAs) provide opportunities for improving the interaction in evaluation surveys. To investigate if and how a user-centered conversational evaluation tool impacts users' response quality and experience, we built EVA, a novel conversational course evaluation tool for educational scenarios. In a field experiment with 128 students, we compared EVA against a static web survey. Our results confirm prior findings from the literature about the positive effect of conversational evaluation tools in the domain of education. We then investigated the differences between a voice-based and a text-based conversational human-computer interaction with EVA in the same experimental set-up. Against our prior expectation, the students in the voice-based condition answered with higher information quality but a lower quantity of information compared to the text-based modality. Our findings indicate that using a conversational CA (voice- or text-based) results in higher response quality and a better user experience compared to a static web survey interface.
Type: journal article
Journal: Proceedings of the ACM on Human-Computer Interaction (PACMHCI)
Volume: 6
Issue: CSCW2
DOI: 10.1145/3555619
-
Publication
A Taxonomy for Deep Learning in Natural Language Processing (Hawaii International Conference on System Sciences, 2021)
Landolt, Severin; Söllner, Matthias
Type: journal article
-
Publication
Improving Explainability and Accuracy through Feature Engineering: A Taxonomy of Features in NLP-based Machine Learning (2021)
Fromm, Hansjörg
Type: journal article
Journal: Forty-Second International Conference on Information Systems
-
Publication
ArgueTutor: An Adaptive Dialog-Based Learning System for Argumentation Skills (ACM CHI Conference on Human Factors in Computing Systems, 2021-04)
Küng, Tobias; Söllner, Matthias
Type: journal article
-
Publication
Supporting Cognitive and Emotional Empathic Writing of Students
We present an annotation approach to capturing emotional and cognitive empathy in student-written peer reviews on business models in German. We propose an annotation scheme that allows us to model emotional and cognitive empathy scores based on three types of review components. Also, we conducted an annotation study with three annotators based on 92 student essays to evaluate our annotation scheme. The obtained inter-rater agreement of α = 0.79 for the components and π = 0.41 for the empathy scores indicates that the proposed annotation scheme successfully guides annotators to substantial and moderate agreement, respectively. Moreover, we trained predictive models to detect the annotated empathy structures and embedded them in an adaptive writing support system for students to receive individual empathy feedback independent of an instructor, time, and location. We evaluated our tool in a peer learning exercise with 58 students and found promising results for perceived empathy skill learning, perceived feedback accuracy, and intention to use. Finally, we present our freely available corpus of 500 empathy-annotated, student-written peer reviews on business models and our annotation guidelines to encourage future research on the design and development of empathy support systems.
Type: journal article
Journal: The Joint Conference of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (ACL-IJCNLP 2021)
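The α and π values reported in this abstract are chance-corrected inter-rater agreement coefficients. As a rough illustration of the underlying idea only (the study reports Krippendorff's α and a multi-annotator π, which handle three annotators; the sketch below shows the simpler two-annotator Cohen's κ, and the example labels are hypothetical):

```python
from collections import Counter

def cohen_kappa(ratings_a, ratings_b):
    """Chance-corrected agreement between two annotators over the same items."""
    assert len(ratings_a) == len(ratings_b)
    n = len(ratings_a)
    # Observed agreement: fraction of items both annotators labelled identically.
    p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Expected chance agreement from each annotator's own label distribution.
    freq_a, freq_b = Counter(ratings_a), Counter(ratings_b)
    labels = set(freq_a) | set(freq_b)
    p_e = sum((freq_a[lab] / n) * (freq_b[lab] / n) for lab in labels)
    return (p_o - p_e) / (1 - p_e)

# Two annotators labelling five review sentences as empathic (1) or not (0).
kappa = cohen_kappa([1, 1, 0, 1, 0], [1, 0, 0, 1, 0])
```

Values near 1 indicate agreement well beyond chance; common rules of thumb read roughly 0.41-0.60 as moderate and 0.61-0.80 as substantial, which is how the abstract's 0.79 and 0.41 are characterized.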
-
Publication
From Data to Dollar – Using the Wisdom of an Online Tipster Community to Improve Sports Betting Returns (2020)
Type: journal article
Journal: European Journal of International Management (EJIM), Special Issue on "International Sports Management"
-
Publication
Unlocking Transfer Learning in Argumentation Mining: A Domain-Independent Modelling Approach (2020-03)
Molyndris, Nikolaos
Type: journal article
Journal: 15th International Conference on Wirtschaftsinformatik