  • Publication
    The Role of AI-Based Artifacts’ Voice Capabilities for Agency Attribution
    The pervasiveness and increasing sophistication of artificial intelligence (AI)-based artifacts within private, organizational, and social realms change how humans interact with machines. Theorizing about the way humans perceive AI-based artifacts is crucial to understanding why and to what extent humans deem these competent for tasks such as decision-making, yet such theorizing has traditionally taken a modality-agnostic view. In this paper, we theorize about a particular case of interaction, namely voice-based interaction with AI-based artifacts. The capabilities and perceived naturalness of such artifacts, fueled by continuous advances in natural language processing, induce users to deem an artifact able to act autonomously in a goal-oriented manner. We argue that there is a positive direct relationship between the voice capabilities of an artifact and users’ agency attribution, ultimately obscuring the artifact’s true nature and competencies. This relationship is further moderated by an artifact’s actual agency, uncertainty, and user characteristics.
  • Publication
    Charting the Evolution and Future of Conversational Agents: A Research Agenda Along Five Waves and New Frontiers
    (Springer Nature, 2023-04-20)
    Schöbel, Sofia; Benner, Dennis; Saqr, Mohammed
    Conversational agents (CAs) have come a long way from their first appearance in the 1960s to today's generative models. Continuous technological advancements such as statistical computing and large language models allow for increasingly natural and effortless interaction, as well as domain-agnostic deployment opportunities. Ultimately, this evolution raises multiple questions: How have technical capabilities developed? How has the nature of work changed through humans' interaction with conversational agents? How has research framed dominant perceptions and depictions of such agents? And what is the path forward? To address these questions, we conducted a bibliometric study including over 5000 research articles on CAs. Based on a systematic analysis of keywords, topics, and author networks, we derive "five waves of CA research" that describe the past, present, and potential future of research on CAs. Our results highlight fundamental technical evolutions and theoretical paradigms in CA research. We further discuss the moderating role of big technologies, and novel technological advancements like OpenAI GPT or BLOOM NLU that mark the next frontier of CA research. We contribute to theory by laying out central research streams in CA research, and offer practical implications by highlighting the design and deployment opportunities of CAs.
    Scopus© Citations 9
  • Publication
    Conversational Agents for Information Retrieval in the Education Domain: A User-Centered Design Investigation
    Text-based conversational agents (CAs) are widely deployed across a number of daily tasks, including information retrieval. However, most existing agents follow a default design that disregards user needs and preferences, ultimately leading to a lack of usage and an unsatisfying user experience. To better understand how CAs can be designed to enable effective system use, we deduced relevant design requirements from both the literature and 13 user interviews. We built and tested a question-answering, text-based CA for an information retrieval task in an education scenario. Results from our experimental test with 41 students indicate that following a user-centered design has a significant positive effect on enjoyment of and trust in a CA, as opposed to deploying a default CA. If not designed with the user in mind, CAs are not necessarily more beneficial than traditional question-answering systems. Beyond practical implications for effective CA design, this paper points towards key challenges and potential research avenues when deploying social cues for CAs.
  • Publication
    Voice bots on the frontline: Voice-based interfaces enhance flow-like consumer experiences & boost service outcomes
    Voice-based interfaces provide new opportunities for firms to interact with consumers along the customer journey. The current work demonstrates across four studies that voice-based (as opposed to text-based) interfaces promote more flow-like user experiences, resulting in more positively valenced service experiences, and ultimately more favorable behavioral firm outcomes (i.e., contract renewal, conversion rates, and consumer sentiment). Moreover, we provide evidence for two important boundary conditions that reduce such flow-like user experiences in voice-based interfaces (i.e., semantic disfluency and the number of conversational turns). The findings of this research highlight how fundamental theories of human communication can be harnessed to create more experiential service encounters with positive downstream consequences for consumers and firms. These findings have important practical implications for firms that aim to leverage the potential of voice-based interfaces to improve consumers' service experiences and the theory-driven "conversational design" of voice-based interfaces.
    Scopus© Citations 13
  • Publication
    Exploring the Synergies in Human-AI Hybrids: A Longitudinal Analysis in Sales Forecasting
    Despite the promised potential of artificial intelligence (AI), insights into real-life human-AI hybrids and their dynamics remain obscure. Based on digital trace data of over 1.4 million forecasting decisions over a 69-month period, we study the implications of introducing an AI sales forecasting system in a bakery enterprise on decision-makers' overriding of the AI system and the resulting hybrid performance. Decision-makers quickly started to rely on AI forecasts, leading to lower forecast errors. Overall, human intervention deteriorated forecasting performance, as overriding resulted in greater forecast error. The results confirm the notion that AI systems outperform humans in forecasting tasks. However, the results also indicate previously neglected, domain-specific implications: As the AI system aimed to reduce forecast error and thus overproduction, forecast quantities decreased over time, and with them sales. We conclude that minimal forecast errors do not inevitably yield optimal business outcomes when detrimental human factors in decision-making are ignored.
  • Publication
    Unleashing Process Mining for Education: Designing an IT-Tool for Students to Self-Monitor their Personal Learning Paths
    The ability of students to self-monitor their learning paths is in demand as never before due to the recent rise of online education formats, which entail less interaction with lecturers. Recent advances in educational process mining (EPM) offer new opportunities to monitor students’ learning paths by processing log data captured by technology-mediated learning environments. However, current literature falls short of providing user-centered design principles for IT tools that can monitor learning paths using EPM. Hence, in this paper, we examine how to design a self-monitoring tool that supports students in evaluating their learning paths. Based on theoretical insights from 66 papers and nine user interviews, we propose seven design principles for an IT tool that facilitates self-monitoring for students based on EPM. Further, we evaluate the design principles with seven potential users. Our results demonstrate a promising approach to help students improve their self-efficacy in their individual learning process using EPM.
  • Publication
    Examining Trust in Conversational Systems: Conceptual and Empirical Findings on User Trust, Related Behavior, and System Trustworthiness
    (2022-12-12)
    Machine learning (ML)-based conversational systems represent a value enabler for human-machine interaction. Simultaneously, the opacity, complexity, and humanness accompanied by such systems introduce their own issues, including trust misalignment. While trust is viewed as a prerequisite for effective system use, few studies have considered calibrating for appropriate trust, and empirically testing the relationship between trust and related behavior. Moreover, the desired implications of transparency-enhancing design cues are ambiguous. My research aims to explore the impact of system performance on trust, the dichotomy between trust and behavior, and how transparency might help attenuate the effects caused by low system performance in the specific context of decision-making tasks assisted by ML-based conversational systems.
  • Publication
    Conceptual Foundations on Debiasing for Machine Learning-Based Software
    (2022-12-12)
    Fahse, Tobias Benjamin
    The deployment of machine learning (ML)-based software has raised serious concerns about its pervasive and harmful consequences for users, business, and society inflicted through bias. While approaches to address bias are increasingly recognized and developed, our understanding of debiasing remains nascent. Research has yet to provide comprehensive coverage of this vast, growing field, much of which is not embedded in theoretical understanding. Conceptualizing and structuring the nature, effect, and implementation of debiasing instruments could provide necessary guidance for practitioners investing in debiasing efforts. We develop a taxonomy that classifies debiasing instrument characteristics into seven key dimensions. We evaluate and refine our taxonomy with nine experts and apply it to three actual debiasing instruments, drawing lessons for the design and choice of appropriate instruments. Bridging the gaps between our conceptual understanding of debiasing for ML-based software and its organizational implementation, we discuss contributions and future research.
  • Publication
    Designing for Conversational System Trustworthiness: The Impact of Model Transparency on Trust and Task Performance
    Designing for system trustworthiness promises to address the challenges of opacity and uncertainty introduced by machine learning (ML)-based systems by allowing users to understand and interpret systems’ underlying working mechanisms. However, empirical exploration of trustworthiness measures and their effectiveness is scarce and inconclusive. We investigated how varying model confidence (70% versus 90%) and making confidence levels transparent to the user (explanatory statement versus no explanatory statement) may influence perceptions of trust and performance in an information retrieval task assisted by a conversational system. In a field experiment with 104 users, our findings indicate that neither model confidence nor transparency seems to impact trust in the conversational system. However, users’ task performance is positively influenced by both transparency and trust in the system. While this study considers the complex interplay of system trustworthiness, trust, and subsequent behavioral outcomes, our results call into question the relation between system trustworthiness and user trust.
  • Publication
    Towards a Trust Reliance Paradox? Exploring the Gap Between Perceived Trust in and Reliance on Algorithmic Advice
    Beyond AI-based systems’ potential to augment decision-making, conserve organizational resources, and counter human biases, unintended consequences of such systems have been largely neglected so far. Researchers are undecided on whether erroneous advice acts as an impediment to system use or is blindly relied upon. As part of an experimental study, we turn towards the impact of incorrect system advice and how to design for failure-prone AI. In an experiment with 156 subjects, we find that, although incorrect algorithmic advice is trusted less, users adapt their answers to a system’s incorrect recommendations. While transparency about a system’s accuracy levels fosters trust and reliance in the context of incorrect advice, an opposite effect is found for users exposed to correct advice. Our findings point towards a paradoxical gap between stated trust and actual behavior. Furthermore, transparency mechanisms should be deployed with caution, as their effectiveness is intertwined with system performance.