Machine learning (ML)-based conversational systems are a valuable enabler of human-machine interaction. At the same time, the opacity, complexity, and humanness that accompany such systems introduce issues of their own, including trust misalignment. While trust is viewed as a prerequisite for effective system use, few studies have considered how to calibrate appropriate trust or have empirically tested the relationship between trust and trust-related behavior. Moreover, the intended effects of transparency-enhancing design cues remain ambiguous. My research aims to explore the impact of system performance on trust, the divergence between trust and behavior, and how transparency might attenuate the negative effects of low system performance in the specific context of decision-making tasks assisted by ML-based conversational systems.
Language
English
Keywords
HCI
conversational systems
trust
trustworthiness
HSG Classification
contribution to scientific community
Publisher place
Oxford, United Kingdom
Start page
912
End page
912
Pages
1
Event Title
AAAI/ACM Conference on AI, Ethics, and Society (AIES)