On the Semantic Interpretability of Artificial Intelligence Models

Item Type Journal paper
Abstract Artificial Intelligence models are becoming increasingly powerful and accurate, supporting or even replacing human decision making. But with increased power and accuracy comes higher complexity, making it hard for users to understand how a model works and what the reasons behind its predictions are. Humans must explain and justify their decisions, and so must the AI models that support them in this process, making semantic interpretability an emerging field of study. In this work, we look at interpretability from a broader point of view, going beyond the machine learning scope and covering other AI fields such as distributional semantics and fuzzy logic. We examine and classify the models according to their nature and to how they introduce interpretability features, analyzing how each approach affects end users and pointing to gaps that still need to be addressed to provide more human-centered interpretability solutions.
Authors Silva, Vivian; Freitas, Andre & Handschuh, Siegfried
Language English
Keywords Artificial Intelligence, Explainable AI, Semantic Interpretability
Subjects computer science
HSG Classification contribution to scientific community
HSG Profile Area None
Refereed No
Date July 2019
Publisher Computing Research Repository (CoRR)
Place of Publication St. Gallen
Official URL https://arxiv.org/pdf/1907.04105.pdf
Contact Email Address siegfried.handschuh@unisg.ch
Depositing User Prof. Dr. Siegfried Handschuh
Date Deposited 31 Oct 2019 12:03
Last Modified 20 Jul 2022 17:39
URI: https://www.alexandria.unisg.ch/publications/258231

Download

semantic interpretability of AI models.pdf - Published Version (420kB)

Citation

Silva, Vivian; Freitas, Andre & Handschuh, Siegfried (2019) On the Semantic Interpretability of Artificial Intelligence Models. Computing Research Repository (CoRR), arXiv:1907.04105.
