Title: Fact or Fiction? Exploring Explanations to Identify Factual Confabulations in RAG-Based LLM Systems
Authors: Philipp Reinhard; Mahei Li; Matteo Fina; Jan Marco Leimeister
Dates: 2025-04-07; 2025-04-26
Handle: https://www.alexandria.unisg.ch/handle/20.500.14171/122305
DOI: 10.1145/3706599.3720249
Type: conference paper
Language: English
Keywords: Generative AI; RAG; Large Language Models; Confabulations; Hallucinations; GenXAI

Abstract: The adoption of generative artificial intelligence (GenAI) and large language models (LLMs) in society and business is growing rapidly. While these systems often generate convincing and coherent responses, they risk producing incorrect or non-factual information, known as confabulations or hallucinations. Consequently, users must critically assess the reliability of these outputs when interacting with LLM-based agents. Although advancements such as retrieval-augmented generation (RAG) have improved the technical performance of these systems, there is a lack of empirical models that explain how humans detect confabulations. Building on the explainable AI (XAI) literature, we examine the role of reasoning-based explanations in helping users identify confabulations in LLM systems. An online experiment (n = 97) reveals that analogical and factual explanations improve detection accuracy but require more time and cognitive effort than a no-explanation baseline.