  • Publication
    Mechanisms of Common Ground in Human-Agent Interaction: A Systematic Review of Conversational Agent Research
    (2023-01-06)
    Tolzin, Antonia
    Human-agent interaction increasingly influences our personal and work lives through the proliferation of conversational agents (CAs) in various domains. These agents combine intuitive natural-language interaction with personalization delivered through artificial intelligence capabilities. However, research on CAs as well as practical failures indicate that CA interaction oftentimes fails. To reduce these failures, this paper introduces the concept of building common ground for more successful human-agent interactions. Based on a systematic review, our analysis reveals five mechanisms for achieving common ground: (1) Embodiment, (2) Social Features, (3) Joint Action, (4) Knowledge Base, and (5) Mental Model of Conversational Agents. On this basis, we offer insights into grounding mechanisms and highlight their potential across different human-agent interaction processes. We thereby deepen the understanding of possible mechanisms of common ground in future human-agent interaction.
  • Publication
    Towards a Trust Reliance Paradox? Exploring the Gap Between Perceived Trust in and Reliance on Algorithmic Advice
    Beyond AI-based systems’ potential to augment decision-making, save organizational resources, and counter human biases, the unintended consequences of such systems have been largely neglected so far. Researchers are undecided on whether erroneous advice acts as an impediment to system use or is blindly relied upon. As part of an experimental study, we turn towards the impact of incorrect system advice and how to design for failure-prone AI. In an experiment with 156 subjects, we find that, although incorrect algorithmic advice is trusted less, users adapt their answers to a system’s incorrect recommendations. While transparency about a system’s accuracy levels fosters trust and reliance in the context of incorrect advice, an opposite effect is found for users exposed to correct advice. Our findings point towards a paradoxical gap between stated trust and actual behavior. Furthermore, transparency mechanisms should be deployed with caution, as their effectiveness is intertwined with system performance.
  • Publication
    “I Will Follow You!” – How Recommendation Modality Impacts Processing Fluency and Purchase Intention
    (2022-12-09)
    Schwede, Melanie
    Hammerschmidt, Maik
    Although conversational agents (CAs) are increasingly used for providing purchase recommendations, important design questions remain. Across two experiments, we use a novel fluency mechanism to examine how recommendation modality (speech vs. text) shapes recommendation evaluation (persuasiveness and risk) and the intention to follow the recommendation, and how modality interacts with the style of recommendation explanation (verbal vs. numerical). The findings provide robust evidence that text-based CAs outperform speech-based CAs in terms of processing fluency and consumer responses. They show that numerical explanations increase processing fluency and purchase intention for both recommendation modalities. The results underline the importance of processing fluency for the decision to follow a recommendation and highlight that processing fluency can be actively shaped through design decisions, namely implementing the right modality and aligning it with the optimal explanation style. For practice, we offer actionable implications on how to make CAs effective sales agents.
  • Publication
    Designing for Conversational System Trustworthiness: The Impact of Model Transparency on Trust and Task Performance
    Designing for system trustworthiness promises to address challenges of opaqueness and uncertainty introduced through Machine Learning (ML)-based systems by allowing users to understand and interpret systems’ underlying working mechanisms. However, empirical exploration of trustworthiness measures and their effectiveness is scarce and inconclusive. We investigated how varying model confidence (70% versus 90%) and making confidence levels transparent to the user (explanatory statement versus no explanatory statement) may influence perceptions of trust and performance in an information retrieval task assisted by a conversational system. In a field experiment with 104 users, our findings indicate that neither model confidence nor transparency seem to impact trust in the conversational system. However, users’ task performance is positively influenced by both transparency and trust in the system. While this study considers the complex interplay of system trustworthiness, trust, and subsequent behavioral outcomes, our results call into question the relation between system trustworthiness and user trust.
  • Publication
    Disentangling Trust in Voice Assistants - A Configurational View on Conversational AI Ecosystems
    (2023)
    Bevilacqua, Tatjana
    Voice assistants’ (VAs) increasingly nuanced and natural communication via artificial intelligence (AI) opens up new opportunities for users, providing task assistance and automation possibilities, and offering an easy interface to digital services and ecosystems. However, VAs and their surrounding ecosystems face various problems, such as low adoption and satisfaction rates as well as other negative user reactions. Companies therefore need to consider what contributes to user satisfaction with VAs and related conversational AI ecosystems. Key for conversational AI ecosystems is the consideration of trust, due to their agentic and pervasive nature. Nonetheless, given the complexity of conversational AI ecosystems and the different trust sources involved, we argue that a more detailed understanding of trust is needed. Thus, we propose a configurational view of conversational AI ecosystems that allows us to disentangle the complex and interrelated factors that contribute to trust in VAs. Using a configurational approach and a survey study, we examine how different trust sources contribute to the outcomes of conversational AI ecosystems, in our case user satisfaction. The results of our study show four distinct patterns of trust-source configurations. Conversely, we also show how trust sources contribute to the absence of the outcome, i.e., user satisfaction. The derived implications provide a configurational theoretical understanding of the role of trust sources in user satisfaction and offer practitioners useful guidance for building more trustworthy conversational AI ecosystems.
  • Publication
    Voice as a Contemporary Frontier of Interaction Design
    Voice assistants’ increasingly nuanced and natural communication opens up new opportunities for user experiences and task automation, while challenging existing patterns of human-computer interaction. A fragmented research field, as well as constant technological advancements, impedes a shared understanding of the prevalent design features of voice-based interfaces. As part of this study, 86 papers across domains are systematically identified and analysed to arrive at a common understanding of voice assistants. The review highlights perceptual differences to other human-computer interfaces and points out relevant auditory cues. Key findings regarding those cues’ impact on user perception and behaviour are discussed along with three design strategies: 1) personification, 2) individualization, and 3) contextualization. Finally, avenues for future research are deduced. Our results provide relevant opportunities for researchers and designers alike to advance the design and deployment of voice assistants.
  • Publication
    Alexa, are you still there? Understanding the Habitual Use of AI-Based Voice Assistants
    (2021-12)
    Grünenfelder, Janay Ilya
    Voice assistants are a novel class of information systems that fundamentally change human–computer interaction. Although these assistants are widespread, individuals oftentimes consider their utilization only on a surface level. In addition, prior research has focused predominantly on initial use instead of looking deeper into post-adoption and habit formation. In consequence, this paper reviews how the notion of habit has been conceptualized in relation to the habitual utilization of voice assistants and presents findings based on a qualitative study approach. From the perspective of post-adoption users, the study suggests that existing habits persist and new habits hardly ever form in the context of voice assistant utilization. This paper outlines four key factors that help explain voice assistant utilization behavior and furthermore provides practical implications that help to ensure continued voice assistant use in the future.
  • Publication
    What do you mean? A Review on Recovery Strategies to Overcome Conversational Breakdowns of Conversational Agents
    (2021-12)
    Benner, Dennis
    Schöbel, Sofia
    Since the emergence of conversational agents, this technology has seen continuous development and research. Today, advanced conversational agents are virtually omnipresent in our everyday lives. Despite the numerous improvements in their conversational capabilities, breakdowns are still a persistent issue. Such breakdowns can result in a very unpleasant experience for users and impair the future success of conversational agents. This issue has been acknowledged by many researchers recently. However, the research on strategies to overcome conversational breakdowns is still inconclusive, and further research is needed. Therefore, we conduct a systematic literature analysis to derive conceptual conversational breakdown recovery strategies from the literature and highlight future research avenues to address potential gaps. Thus, we contribute to the theory of human-agent interaction by deriving and assessing recovery strategies and suggesting leads for novel recovery strategies.
  • Publication
    The Role of AI-Based Artifacts’ Voice Capabilities for Agency Attribution
    The pervasiveness and increasing sophistication of artificial intelligence (AI)-based artifacts within private, organizational, and social realms change how humans interact with machines. Theorizing about the way humans perceive AI-based artifacts is crucial to understanding why and to what extent humans deem these artifacts competent for tasks such as decision-making, yet such theorizing has traditionally taken a modality-agnostic view. In this paper, we theorize about a particular case of interaction, namely voice-based interaction with AI-based artifacts. The capabilities and perceived naturalness of such artifacts, fueled by continuous advances in natural language processing, induce users to deem an artifact able to act autonomously in a goal-oriented manner. We argue that there is a positive direct relationship between the voice capabilities of an artifact and users’ agency attribution, ultimately obscuring the artifact’s true nature and competencies. This relationship is further moderated by an artifact’s actual agency, uncertainty, and user characteristics.