  • Publication
    Disentangling Trust in Voice Assistants - A Configurational View on Conversational AI Ecosystems
    (2023) Bevilacqua, Tatjana
    Voice assistants’ (VAs) increasingly nuanced and natural communication via artificial intelligence (AI) opens up new opportunities for users, providing task assistance and automation possibilities as well as an easy interface to digital services and ecosystems. However, VAs and their surrounding ecosystems face various problems, such as low adoption and satisfaction rates and other negative user reactions. Companies therefore need to consider what contributes to user satisfaction with VAs and related conversational AI ecosystems. Trust is key for conversational AI ecosystems due to their agentic and pervasive nature. Nonetheless, given the complexity of conversational AI ecosystems and the different trust sources involved, we argue that a more detailed understanding of trust is needed. We therefore propose a configurational view on conversational AI ecosystems that allows us to disentangle the complex, interrelated factors that contribute to trust in VAs. Using a configurational approach and a survey study, we examine how different trust sources contribute to the outcomes of conversational AI ecosystems, in our case user satisfaction. The results of our study show four distinct patterns of trust source configurations. Conversely, we also show how trust sources contribute to the absence of the outcome, i.e., user satisfaction. The derived implications provide a configurational theoretical understanding of the role of trust sources for user satisfaction and give practitioners useful guidance for building more trustworthy conversational AI ecosystems.
  • Publication
    Designing for Conversational System Trustworthiness: The Impact of Model Transparency on Trust and Task Performance
    Designing for system trustworthiness promises to address challenges of opaqueness and uncertainty introduced through Machine Learning (ML)-based systems by allowing users to understand and interpret systems’ underlying working mechanisms. However, empirical exploration of trustworthiness measures and their effectiveness is scarce and inconclusive. We investigated how varying model confidence (70% versus 90%) and making confidence levels transparent to the user (explanatory statement versus no explanatory statement) may influence perceptions of trust and performance in an information retrieval task assisted by a conversational system. In a field experiment with 104 users, our findings indicate that neither model confidence nor transparency seem to impact trust in the conversational system. However, users’ task performance is positively influenced by both transparency and trust in the system. While this study considers the complex interplay of system trustworthiness, trust, and subsequent behavioral outcomes, our results call into question the relation between system trustworthiness and user trust.
  • Publication
    Designing for Social Presence and Leveraging the Outcomes of Customer Service Chatbots
    (2019) Degen, Oliver; Schwede, Melanie
    Chatbots are innovative text-based dialogue systems that are often used in digital customer service encounters, since they offer great potential to optimize the relationship with the customer. However, chatbots face various problems, such as low adoption and satisfaction rates as well as other negative customer reactions. Companies therefore need to consider different design features when developing their chatbots. Key for designing chatbots is the use of anthropomorphic design elements. In this study, we examine the two anthropomorphic design elements of personification, i.e., a human-like appearance of the chatbot, and socially oriented communication, i.e., a more sensitive and extensive communication behavior. We tested the influence of the two design elements on social presence, satisfaction, trusting beliefs, and empathy. The results of our experiment support a significant influence of both anthropomorphic design elements on social presence. In addition, our findings show the central role of social presence in chatbot perceptions. First, social presence has a strong direct influence on trusting beliefs, empathy, and satisfaction. Second, social presence mediates the effect of both anthropomorphic design elements on satisfaction with a chatbot. The derived implications provide a theory for understanding anthropomorphic design elements related to chatbots, and we offer design principles that enable practitioners to implement chatbots more effectively.
  • Publication
    How Digital Nudges Influence Consumers – The Role of Social and Privacy Nudges in Retargeting
    (2018) Eigenbrod, Laura
    Retargeting is an innovative online marketing technique because it can provide consumer-specific advertising content, based on consumers’ browsing behavior, that meets consumers’ preferences and interests. Although this advertising form offers great opportunities for bringing back customers who have left an online store without completing a purchase, retargeting is risky because the necessary data collection leads to strong privacy concerns, which in turn trigger consumer reactance and decrease trust. Digital nudges, small design modifications in digital choice environments that guide people’s behavior, present a promising concept to bypass these negative consequences of retargeting. To explore the positive effects of digital nudges in retargeting banners, we conducted a between-subject experiment with a subsequent survey examining the impact of social nudges (likes of friends) and privacy nudges (disclosure of the privacy policy and purpose of retargeting banners). Whereas the social nudge had a negative impact on consumers’ privacy concerns and a positive impact on consumers’ booking behavior, the privacy nudge did not have any significant impact. Combining both nudges showed that the privacy nudge negatively moderated the positive relationship between the social nudge and consumers’ booking behavior. The derived implications provide a theory for understanding nudges in digital environments, and we offer design principles for practitioners that enable better retargeting outcomes.
  • Publication
    Nudging Privacy in Digital Work Systems – Towards the Development of a Design Theory
    (2018) Schöbel, Sofia
    Digitization changes our society, the way we work, and the way we generate value in work systems. More precisely, it equally affects how businesses and employees organize their work. While there is tremendous innovation potential for businesses, such as more flexible and efficient work arrangements, there are also tremendous privacy risks. Such privacy risks especially relate to employees leaving data traces in every working step while often being unaware of the data they generate. Employees leave data traces on intranet platforms such as wikis and on external work tools such as Slack, revealing not only personal information but also company insights. At the same time, companies must handle new privacy regulations such as the European General Data Protection Regulation (GDPR), which affects not only companies in European countries but also providers of information systems (IS) around the world that offer their services in Europe. Thus, innovative approaches are needed to address the challenges that arise from privacy-related issues. A promising approach to overcoming the aforementioned privacy issues of digital working environments is the application of digital nudges in IS: small design modifications in digital choice environments that guide people’s behavior (Weinmann et al. 2016). Nudge theory, originally derived from behavioral economics, is based on the irrational behavior of human beings. A nudge “is any aspect of the choice architecture that alters people’s behavior in a predictable way without forbidding any options or significantly changing their economic incentives” (Thaler and Sunstein 2008, p. 6) and is typically applied in offline environments, e.g., nudging individuals to become organ donors through a default opt-in. By relying on the concept of nudging, individuals in digital work systems could be nudged to improve their decisions when disclosing data. This is especially relevant, since many individuals are not aware of how their data is used. As such, there is a lack of research regarding nudging in the context of digital work systems and privacy, which should be addressed with a socio-technical research approach. By examining the impact and perception of digital nudges in work systems, we want to shed light on the application, design, and effects of nudges to overcome challenges that arise from privacy-related issues. We therefore first examine use cases and requirements for nudging in digital work systems, considering users, companies, and other stakeholders. Since we draw on a socio-technical view, we incorporate requirements relating to both a legal and an ethical perspective on nudging privacy. On this basis, we develop a design theory for privacy nudging through an iterative design and evaluation process in the field. The ultimate goal is to improve decisions concerning data disclosure and, secondarily, to increase trust in and acceptance of digital work systems by lowering privacy concerns. By developing a design theory, we contribute to the body of knowledge with a more profound understanding of digital nudging in the context of privacy as well as with prescriptive design knowledge for nudging in digital work systems.