Abstract
When it comes to complex machine learning models, commonly referred to as black boxes, understanding the underlying decision-making process is crucial for domains such as healthcare and financial services, as well as when they are used in connection with safety-critical systems such as autonomous vehicles. As a result, interest in explainable artificial intelligence (xAI) tools and techniques has increased in recent years. However, the user experience (UX) effectiveness of existing xAI frameworks, especially concerning algorithms that work with tabular data as opposed to images, is still an open research question. In order to address this gap, we examine the UX effectiveness of the Local Interpretable Model-Agnostic Explanations (LIME) xAI framework, one of the most popular model-agnostic frameworks found in the literature, with a specific focus on its performance in terms of making tabular models more interpretable. In particular, we apply several state-of-the-art machine learning algorithms to a tabular dataset, and demonstrate how LIME can be used to supplement conventional performance assessment methods. Based on this experience, we evaluate the understandability of the output produced by LIME both via a usability study, involving participants who are not familiar with LIME, and its overall usability via a custom-made assessment framework, called Model Usability Evaluation (MUsE), which is derived from the International Organisation for Standardisation 9241-11:2018 standard.
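To illustrate the core idea behind LIME that the abstract refers to, the following is a minimal, self-contained sketch (not the actual `lime` library): a black-box prediction around a single tabular instance is approximated by a proximity-weighted linear surrogate, whose coefficients serve as local feature attributions. The `black_box` function, kernel width, and noise scale are all illustrative assumptions, not values from the paper.

```python
import numpy as np

# Hypothetical black box standing in for any tabular classifier:
# its output is driven positively by feature 0 and negatively by feature 1.
def black_box(X):
    return 1.0 / (1.0 + np.exp(-(3.0 * X[:, 0] - 0.5 * X[:, 1])))

rng = np.random.default_rng(0)
x0 = np.array([0.2, -0.1])  # the instance whose prediction we want to explain

# 1. Perturb the instance with Gaussian noise to sample its neighbourhood.
Z = x0 + rng.normal(scale=0.5, size=(500, 2))

# 2. Query the black box on the perturbed samples.
y = black_box(Z)

# 3. Weight each sample by its proximity to x0 (RBF kernel, width 0.5).
w = np.exp(-np.sum((Z - x0) ** 2, axis=1) / 0.5)

# 4. Fit a weighted linear surrogate by solving (sqrt(w)*A) beta = sqrt(w)*y.
A = np.hstack([Z, np.ones((len(Z), 1))])  # append an intercept column
sw = np.sqrt(w)[:, None]
beta, *_ = np.linalg.lstsq(A * sw, y * sw.ravel(), rcond=None)

# The surrogate's coefficients act as local feature attributions:
# beta[0] should be positive and dominate, beta[1] negative and smaller.
print(beta[:2])
```

The real LIME framework adds discretisation of tabular features and explanation visualisation on top of this weighted-surrogate step; the sketch only conveys the mechanism the usability study evaluates.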
Original language | English |
---|---|
Pages (from - to) | 143 - 153 |
Journal | Information Fusion |
Volume | 81 |
DOIs | |
Publication status | Published - 2022 |
Austrian Fields of Science classification (ÖFOS)
- 102
- 102001 Artificial Intelligence
- 102015 Information Systems
- 502050 Business Informatics
- 505002 Data Protection