  • Publication
    Conversational Agents for Information Retrieval in the Education Domain: A User-Centered Design Investigation
    Text-based conversational agents (CAs) are widely deployed across many daily tasks, including information retrieval. However, most existing agents follow a default design that disregards user needs and preferences, ultimately leading to a lack of usage and an unsatisfying user experience. To better understand how CAs can be designed to foster effective system use, we derived relevant design requirements from both the literature and 13 user interviews. We built and tested a question-answering, text-based CA for an information retrieval task in an education scenario. Results from our experimental test with 41 students indicate that following a user-centered design has a significant positive effect on enjoyment and trust in a CA, as opposed to deploying a default CA. If not designed with the user in mind, CAs are not necessarily more beneficial than traditional question-answering systems. Beyond practical implications for effective CA design, this paper points towards key challenges and potential research avenues when deploying social cues for CAs.
  • Publication
    Examining Trust in Conversational Systems: Conceptual and Empirical Findings on User Trust, Related Behavior, and System Trustworthiness
    (2022-12-12)
    Machine learning (ML)-based conversational systems represent a value enabler for human-machine interaction. Simultaneously, the opacity, complexity, and humanness accompanying such systems introduce their own issues, including trust misalignment. While trust is viewed as a prerequisite for effective system use, few studies have considered calibrating for appropriate trust or empirically testing the relationship between trust and related behavior. Moreover, the desired implications of transparency-enhancing design cues are ambiguous. My research aims to explore the impact of system performance on trust, the dichotomy between trust and behavior, and how transparency might help attenuate the effects caused by low system performance in the specific context of decision-making tasks assisted by ML-based conversational systems.
  • Publication
    Towards a Trust Reliance Paradox? Exploring the Gap Between Perceived Trust in and Reliance on Algorithmic Advice
    Beyond AI-based systems’ potential to augment decision-making, save organizational resources, and counter human biases, the unintended consequences of such systems have been largely neglected so far. Researchers are undecided on whether erroneous advice acts as an impediment to system use or is blindly relied upon. As part of an experimental study, we examine the impact of incorrect system advice and how to design for failure-prone AI. In an experiment with 156 subjects, we find that, although incorrect algorithmic advice is trusted less, users adapt their answers to a system’s incorrect recommendations. While transparency about a system’s accuracy levels fosters trust and reliance in the context of incorrect advice, an opposite effect is found for users exposed to correct advice. Our findings point towards a paradoxical gap between stated trust and actual behavior. Furthermore, transparency mechanisms should be deployed with caution, as their effectiveness is intertwined with system performance.