  • Publication
    The Role of AI-Based Artifacts’ Voice Capabilities for Agency Attribution
    The pervasiveness and increasing sophistication of artificial intelligence (AI)-based artifacts within private, organizational, and social realms change how humans interact with machines. Theorizing about the way humans perceive AI-based artifacts is crucial to understanding why and to what extent humans deem these competent for tasks such as decision-making, yet such theorizing has traditionally taken a modality-agnostic view. In this paper, we theorize about a particular case of interaction, namely voice-based interaction with AI-based artifacts. The capabilities and perceived naturalness of such artifacts, fueled by continuous advances in natural language processing, induce users to deem an artifact able to act autonomously in a goal-oriented manner. We argue that there is a positive direct relationship between the voice capabilities of an artifact and users’ agency attribution, ultimately obscuring the artifact’s true nature and competencies. This relationship is further moderated by an artifact’s actual agency, uncertainty, and user characteristics.
  • Publication
    Charting the Evolution and Future of Conversational Agents: A Research Agenda Along Five Waves and New Frontiers
    (Springer Nature, 2023-04-20)
    Schöbel, Sofia; Benner, Dennis; Saqr, Mohammed
    Conversational agents (CAs) have come a long way from their first appearance in the 1960s to today's generative models. Continuous technological advancements such as statistical computing and large language models allow for increasingly natural and effortless interaction, as well as domain-agnostic deployment opportunities. Ultimately, this evolution raises multiple questions: How have technical capabilities developed? How has the nature of work changed through humans' interaction with conversational agents? How has research framed dominant perceptions and depictions of such agents? And what is the path forward? To address these questions, we conducted a bibliometric study including over 5,000 research articles on CAs. Based on a systematic analysis of keywords, topics, and author networks, we derive "five waves of CA research" that describe the past, present, and potential future of research on CAs. Our results highlight fundamental technical evolutions and theoretical paradigms in CA research. We further discuss the moderating role of big technologies, as well as novel technological advancements like OpenAI GPT or BLOOM NLU that mark the next frontier of CA research. We contribute to theory by laying out central research streams in CA research, and offer practical implications by highlighting the design and deployment opportunities of CAs.
    Scopus© Citations 9
  • Publication
    Disentangling Trust in Voice Assistants - A Configurational View on Conversational AI Ecosystems
    (2023)
    Bevilacqua, Tatjana
    Voice assistants’ (VAs) increasingly nuanced and natural communication via artificial intelligence (AI) opens up new opportunities for the user experience, provides task assistance and automation possibilities, and also offers an easy interface to digital services and ecosystems. However, VAs and the ecosystems around them face various problems, such as low adoption and satisfaction rates as well as other negative user reactions. Companies therefore need to consider what contributes to user satisfaction with VAs and related conversational AI ecosystems. Trust is key for conversational AI ecosystems due to their agentic and pervasive nature. Nonetheless, given the complexity of conversational AI ecosystems and the different trust sources involved, we argue that a more detailed understanding of trust is needed. Thus, we propose a configurational view on conversational AI ecosystems that allows us to disentangle the complex and interrelated factors that contribute to trust in VAs. Using a configurational approach and a survey study, we examine how different trust sources contribute to the outcomes of conversational AI ecosystems, in our case user satisfaction. The results of our study show four distinct patterns of trust source configurations. Conversely, we also show how trust sources contribute to the absence of the outcome, i.e., user satisfaction. The derived implications provide a configurational theoretical understanding of the role of trust sources for user satisfaction and offer practitioners useful guidance for building more trustworthy conversational AI ecosystems.
  • Publication
    Social Audio: Conceptualizing Voice-Based Online Social Networks and their Privacy Implications
    (2023)
    Rentsch, Stefanie
    This paper explores the nature and implications of social audio: online social networks (OSNs) that enable users to interact via voice. The paper contributes to basic science by offering a precise conceptualization of voice-based OSNs and their design features. We posit that the defining characteristics of traditional OSNs also hold for social audio, yet that novel features (e.g., creating rooms) and voice-driven modifications of traditional features (e.g., the like) can be found. This work also shows how social audio introduces novel privacy implications, particularly driven by the richness and risks of voice as an interaction modality. Using three illustrative cases, we demonstrate applications of social audio and show that their privacy implications remain largely unaddressed. Specifically, we find that the networks considered offer very few features addressing the risks of voice-based interaction and that current privacy policies neither reflect these risks nor offer mitigation measures. We bridge our findings and avenues for future research by discussing network architecture features that social audio providers could pursue to ensure user privacy.
  • Publication
    Designing for Conversational System Trustworthiness: The Impact of Model Transparency on Trust and Task Performance
    Designing for system trustworthiness promises to address the opaqueness and uncertainty introduced by Machine Learning (ML)-based systems by allowing users to understand and interpret systems’ underlying working mechanisms. However, empirical exploration of trustworthiness measures and their effectiveness is scarce and inconclusive. We investigated how varying model confidence (70% versus 90%) and making confidence levels transparent to the user (explanatory statement versus no explanatory statement) may influence perceptions of trust and performance in an information retrieval task assisted by a conversational system. In a field experiment with 104 users, our findings indicate that neither model confidence nor transparency seems to impact trust in the conversational system. However, users’ task performance is positively influenced by both transparency and trust in the system. While this study considers the complex interplay of system trustworthiness, trust, and subsequent behavioral outcomes, our results call into question the relation between system trustworthiness and user trust.
  • Publication
    Towards a Trust Reliance Paradox? Exploring the Gap Between Perceived Trust in and Reliance on Algorithmic Advice
    Beyond AI-based systems’ potential to augment decision-making, save organizational resources, and counter human biases, the unintended consequences of such systems have been largely neglected so far. Researchers are undecided on whether erroneous advice acts as an impediment to system use or is blindly relied upon. As part of an experimental study, we turn towards the impact of incorrect system advice and how to design for failure-prone AI. In an experiment with 156 subjects, we find that, although incorrect algorithmic advice is trusted less, users adapt their answers to a system’s incorrect recommendations. While transparency about a system’s accuracy levels fosters trust and reliance in the context of incorrect advice, the opposite effect is found for users exposed to correct advice. Our findings point towards a paradoxical gap between stated trust and actual behavior. Furthermore, transparency mechanisms should be deployed with caution, as their effectiveness is intertwined with system performance.
  • Publication
    Voice as a Contemporary Frontier of Interaction Design
    Voice assistants’ increasingly nuanced and natural communication bears new opportunities for user experiences and task automation, while challenging existing patterns of human-computer interaction. A fragmented research field, as well as constant technological advancement, impedes a common understanding of the prevalent design features of voice-based interfaces. As part of this study, 86 papers across domains are systematically identified and analysed to arrive at a common understanding of voice assistants. The review highlights perceptual differences to other human-computer interfaces and points out relevant auditory cues. Key findings regarding those cues’ impact on user perception and behaviour are discussed along with three design strategies: 1) personification, 2) individualization, and 3) contextualization. Avenues for future research are lastly deduced. Our results provide relevant opportunities for researchers and designers alike to advance the design and deployment of voice assistants.