  • Publication
    Designing Pedagogical Conversational Agents for Achieving Common Ground
    (2023)
    Antonia Tolzin; Anita Körner; Ernestine Dickhaut; Ralf Rummer
    As educational organizations face difficulties in providing personalized learning material or individual learning support, pedagogical conversational agents (PCAs) promise individualized learning for students. However, the problem of conversational breakdowns of PCAs, and consequently poor learning outcomes, still exists. Hence, effective and grounded communication between learners and PCAs is fundamental to improving learning processes and outcomes. As mutual understanding and conversational grounding are crucial for conversations between humans and PCAs, we propose common ground theory as a foundation for designing a PCA. Conducting a design science research project, we propose theory-motivated design principles and instantiate them in a PCA. We evaluate the utility of the artifact in an experimental study in higher education to inform subsequent design iterations. We contribute design knowledge on conversational agents in learning settings, enabling researchers and practitioners to develop PCAs based on common ground research in education, and we provide avenues for future research. Thereby, we advance the understanding of learning processes based on grounded communication.
  • Publication
    Disentangling Trust in Voice Assistants - A Configurational View on Conversational AI Ecosystems
    (2023)
    Bevilacqua, Tatjana
    Voice assistants’ (VAs) increasingly nuanced and natural communication via artificial intelligence (AI) opens up new opportunities for the user experience, provides task assistance and automation possibilities, and also offers an easy interface to digital services and ecosystems. However, VAs and the corresponding ecosystems face various problems, such as low adoption and satisfaction rates as well as other negative reactions from users. Companies therefore need to consider what contributes to user satisfaction with VAs and related conversational AI ecosystems. Trust is key for conversational AI ecosystems due to their agentic and pervasive nature. Nonetheless, because of the complexity of conversational AI ecosystems and the different trust sources involved, we argue that a more detailed understanding of trust is needed. Thus, we propose a configurational view on conversational AI ecosystems that allows us to disentangle the complex and interrelated factors that contribute to trust in VAs. Using a configurational approach and a survey study, we examine how different trust sources contribute to the outcomes of conversational AI ecosystems, in our case user satisfaction. The results of our study show four distinct patterns of trust source configurations. Conversely, we show how trust sources contribute to the absence of this outcome. The derived implications provide a configurational theoretical understanding of the role of trust sources for user satisfaction and offer practitioners useful guidance for building more trustworthy conversational AI ecosystems.
  • Publication
    Mechanisms of Common Ground in Human-Agent Interaction: A Systematic Review of Conversational Agent Research
    (2023-01-06)
    Tolzin, Antonia
    Human-agent interaction is increasingly influencing our personal and work lives through the proliferation of conversational agents (CAs) in various domains. These agents combine intuitive natural language interactions with personalization delivered through artificial intelligence capabilities. However, research on CAs as well as practical failures indicate that CA interaction oftentimes fails miserably. To reduce these failures, this paper introduces the concept of building common ground for more successful human-agent interactions. Based on a systematic review, our analysis reveals five mechanisms for achieving common ground: (1) Embodiment, (2) Social Features, (3) Joint Action, (4) Knowledge Base, and (5) Mental Model of Conversational Agents. On this basis, we offer insights into grounding mechanisms and highlight the potential of considering common ground in different human-agent interaction processes. Consequently, we secure a deeper understanding of possible mechanisms of common ground in future human-agent interaction.
  • Publication
    How Conversational Agents Relieve Teams from Innovation Blockages
    Innovation is one of the most important antecedents of a company's competitive advantage and long-term survival. Prior research has identified teamwork as a primary driver of a firm's innovation capacity. Still, many firms struggle with providing an environment that supports innovation teams in working efficiently together. A team's failure can be attributed to several factors, such as inefficient working methods or a lack of internal communication, which lead to so-called innovation blockages. A number of approaches aim to support teams in overcoming innovation blockages, but they mainly focus on the collaboration process and rarely consider the needs and potential of individual team members. In this paper, we argue that Conversational Agents (CAs) can efficiently support teams in overcoming innovation blockages by enhancing collaborative work practices and, specifically, by facilitating the contribution of each individual team member. To that end, we design a CA as a team facilitator that provides nudges to reduce innovation-blocking actions, according to requirements we systematically derived from scientific literature and practice. Based on a rigorous evaluation, we demonstrate the potential of CAs to reduce the frequency of innovation blockages. Finally, we explore the research implications for the development and deployment of CAs as team facilitators.
  • Publication
    Exploring the Dynamics of Affordance Actualization – A Configurational View on Voice Assistants
    (2022-08-09)
    Voice assistants’ (VAs) increasingly nuanced and natural communication opens up new opportunities for the user experience, provides task assistance and automation possibilities, and also offers an easy interface to digital services and ecosystems. However, VAs face various problems, such as low adoption and satisfaction rates as well as other negative reactions from users. Companies therefore need to consider how individuals utilize VAs and what contributes to user satisfaction. Key for the design of VAs are their unique affordances and their agentic nature, which distinguish these IT artifacts from non-agentic IS. A configurational and dynamic approach makes it possible to shed light on the complex causalities underlying user outcomes with these novel systems. Consequently, we examine in this study how individuals actualize the affordances of VAs during the initial adoption stage. For this purpose, we draw on a diary study research design that examines affordance actualization processes with new VA users. Using a configurational approach, we examine how the actualization of VA affordances contributes to the outcomes of VAs, in our case user satisfaction. The results of our diary study show distinct patterns of functional affordance configurations. In addition, we show that affordances unfold and evolve over time. The derived implications provide a configurational theoretical understanding of the role of VA affordances for user satisfaction and offer practitioners useful guidance for actualizing the potential of VAs.
  • Publication
    Designing for Conversational System Trustworthiness: The Impact of Model Transparency on Trust and Task Performance
    Designing for system trustworthiness promises to address the challenges of opaqueness and uncertainty introduced by Machine Learning (ML)-based systems by allowing users to understand and interpret the systems’ underlying working mechanisms. However, empirical exploration of trustworthiness measures and their effectiveness is scarce and inconclusive. We investigated how varying model confidence (70% versus 90%) and making confidence levels transparent to the user (explanatory statement versus no explanatory statement) may influence perceptions of trust and performance in an information retrieval task assisted by a conversational system. In a field experiment with 104 users, our findings indicate that neither model confidence nor transparency seems to impact trust in the conversational system. However, users’ task performance is positively influenced by both transparency and trust in the system. While this study considers the complex interplay of system trustworthiness, trust, and subsequent behavioral outcomes, our results call into question the relation between system trustworthiness and user trust.
  • Publication
    “I Will Follow You!” – How Recommendation Modality Impacts Processing Fluency and Purchase Intention
    (2022-12-09)
    Schwede, Melanie; Hammerschmidt, Maik
    Although conversational agents (CAs) are increasingly used for providing purchase recommendations, important design questions remain. Across two experiments, we use a novel fluency mechanism to examine how recommendation modality (speech vs. text) shapes recommendation evaluation (persuasiveness and risk) and the intention to follow the recommendation, and how modality interacts with the style of recommendation explanation (verbal vs. numerical). Findings provide robust evidence that text-based CAs outperform speech-based CAs in terms of processing fluency and consumer responses. They show that numerical explanations increase processing fluency and purchase intention for both recommendation modalities. The results underline the importance of processing fluency for the decision to follow a recommendation and highlight that processing fluency can be actively shaped through design decisions, namely by implementing the right modality and aligning it with the optimal explanation style. For practice, we offer actionable implications on how to turn CAs into effective sales agents.
  • Publication
    What do you mean? A Review on Recovery Strategies to Overcome Conversational Breakdowns of Conversational Agents
    (2021-12)
    Benner, Dennis; Schöbel, Sofia
    Since the emergence of conversational agents, this technology has seen continuous development and research. Today, advanced conversational agents are virtually omnipresent in our everyday lives. Despite the numerous improvements in their conversational capabilities, breakdowns are still a persistent issue. Such breakdowns can result in a very unpleasant experience for users and impair the future success of conversational agents. Many researchers have recently acknowledged this issue. However, the research on strategies to overcome conversational breakdowns is still inconclusive, and further research is needed. Therefore, we conduct a systematic literature analysis to derive conceptual conversational breakdown recovery strategies from the literature and highlight future research avenues to address potential gaps. Thus, we contribute to the theory of human-agent interaction by deriving and assessing recovery strategies and suggesting leads for novel recovery strategies.
  • Publication
    Alexa, are you still there? Understanding the Habitual Use of AI-Based Voice Assistants
    (2021-12)
    Grünenfelder, Janay Ilya
    Voice assistants are a novel class of information systems that fundamentally change human–computer interaction. Although these assistants are widespread, individuals oftentimes consider their utilization only on a surface level. In addition, prior research has focused predominantly on initial use instead of looking deeper into post-adoption and habit formation. Consequently, this paper reviews how the notion of habit has been conceptualized in relation to the habitual utilization of voice assistants and presents findings based on a qualitative study approach. From the perspective of post-adoption users, the study suggests that existing habits persist and new habits hardly ever form in the context of voice assistant utilization. This paper outlines four key factors that help explain voice assistant utilization behavior and furthermore provides practical implications that help ensure continued voice assistant use in the future.
  • Publication
    Towards a Trust Reliance Paradox? Exploring the Gap Between Perceived Trust in and Reliance on Algorithmic Advice
    Beyond AI-based systems’ potential to augment decision-making, reduce organizational resource expenditure, and counter human biases, the unintended consequences of such systems have been largely neglected so far. Researchers are undecided on whether erroneous advice acts as an impediment to system use or is blindly relied upon. As part of an experimental study, we examine the impact of incorrect system advice and how to design for failure-prone AI. In an experiment with 156 subjects, we find that, although incorrect algorithmic advice is trusted less, users adapt their answers to a system’s incorrect recommendations. While transparency about a system’s accuracy levels fosters trust and reliance in the context of incorrect advice, an opposite effect is found for users exposed to correct advice. Our findings point towards a paradoxical gap between stated trust and actual behavior. Furthermore, transparency mechanisms should be deployed with caution, as their effectiveness is intertwined with system performance.