  • Publication
    Disentangling Trust in Voice Assistants - A Configurational View on Conversational AI Ecosystems
    Bevilacqua, Tatjana (2023)
    Voice assistants’ (VAs) increasingly nuanced and natural communication via artificial intelligence (AI) opens up new opportunities for user experience, providing task assistance and automation possibilities, and also offers an easy interface to digital services and ecosystems. However, VAs and their associated ecosystems face various problems, such as low adoption and satisfaction rates as well as other negative user reactions. Companies therefore need to consider what contributes to user satisfaction with VAs and related conversational AI ecosystems. Because of their agentic and pervasive nature, trust is a key consideration for conversational AI ecosystems. Nonetheless, given the complexity of conversational AI ecosystems and the different trust sources involved, we argue that a more detailed understanding of trust is needed. Thus, we propose a configurational view on conversational AI ecosystems that allows us to disentangle the complex and interrelated factors that contribute to trust in VAs. Using a configurational approach and a survey study, we examine how different trust sources contribute to the outcomes of conversational AI ecosystems, in our case user satisfaction. The results of our study reveal four distinct patterns of trust source configurations. Conversely, we also show how trust sources contribute to the absence of the outcome, i.e., user satisfaction. The derived implications provide a configurational theoretical understanding of the role of trust sources in user satisfaction and offer practitioners useful guidance for building more trustworthy conversational AI ecosystems.
  • Publication
    Designing for Conversational System Trustworthiness: The Impact of Model Transparency on Trust and Task Performance
    Designing for system trustworthiness promises to address the challenges of opaqueness and uncertainty introduced by Machine Learning (ML)-based systems by allowing users to understand and interpret systems’ underlying working mechanisms. However, empirical exploration of trustworthiness measures and their effectiveness is scarce and inconclusive. We investigated how varying model confidence (70% versus 90%) and making confidence levels transparent to the user (explanatory statement versus no explanatory statement) may influence perceptions of trust and performance in an information retrieval task assisted by a conversational system. In a field experiment with 104 users, our findings indicate that neither model confidence nor transparency seems to impact trust in the conversational system. However, users’ task performance is positively influenced by both transparency and trust in the system. While this study considers the complex interplay of system trustworthiness, trust, and subsequent behavioral outcomes, our results call into question the relation between system trustworthiness and user trust.
  • Publication
    Towards a Trust Reliance Paradox? Exploring the Gap Between Perceived Trust in and Reliance on Algorithmic Advice
    Beyond AI-based systems’ potential to augment decision-making, conserve organizational resources, and counter human biases, the unintended consequences of such systems have been largely neglected so far. Researchers are undecided on whether erroneous advice acts as an impediment to system use or is blindly relied upon. In an experimental study, we examine the impact of incorrect system advice and how to design for failure-prone AI. In an experiment with 156 subjects, we find that, although incorrect algorithmic advice is trusted less, users adapt their answers to a system’s incorrect recommendations. While transparency about a system’s accuracy levels fosters trust and reliance in the context of incorrect advice, the opposite effect is found for users exposed to correct advice. Our findings point towards a paradoxical gap between stated trust and actual behavior. Furthermore, transparency mechanisms should be deployed with caution, as their effectiveness is intertwined with system performance.
  • Publication
    Voice as a Contemporary Frontier of Interaction Design
    Voice assistants’ increasingly nuanced and natural communication opens up new opportunities for user experiences and task automation, while challenging existing patterns of human-computer interaction. A fragmented research field and constant technological advancements impede a common understanding of the prevalent design features of voice-based interfaces. In this study, 86 papers across domains are systematically identified and analysed to arrive at a common understanding of voice assistants. The review highlights perceptual differences from other human-computer interfaces and points out relevant auditory cues. Key findings regarding those cues’ impact on user perception and behaviour are discussed along with three design strategies: 1) personification, 2) individualization, and 3) contextualization. Finally, avenues for future research are deduced. Our results provide relevant opportunities for researchers and designers alike to advance the design and deployment of voice assistants.