A novel model usability evaluation framework (MUsE) for explainable artificial intelligence

Jürgen Dieber, Sabrina Kirrane

Publication: Journal article in a scientific journal, peer-reviewed


When it comes to complex machine learning models, commonly referred to as black boxes, understanding the underlying decision-making process is crucial for domains such as healthcare and financial services, as well as when they are used in connection with safety-critical systems such as autonomous vehicles. As a result, interest in explainable artificial intelligence (xAI) tools and techniques has increased in recent years. However, the user experience (UX) effectiveness of existing xAI frameworks, especially concerning algorithms that work with tabular data as opposed to images, is still an open research question. In order to address this gap, we examine the UX effectiveness of the Local Interpretable Model-Agnostic Explanations (LIME) xAI framework, one of the most popular model-agnostic frameworks found in the literature, with a specific focus on its performance in terms of making tabular models more interpretable. In particular, we apply several state-of-the-art machine learning algorithms to a tabular dataset, and demonstrate how LIME can be used to supplement conventional performance assessment methods. Based on this experience, we evaluate the understandability of the output produced by LIME both via a usability study, involving participants who are not familiar with LIME, and its overall usability via a custom-made assessment framework, called Model Usability Evaluation (MUsE), which is derived from the International Organisation for Standardisation (ISO) 9241-11:2018 standard.
Original language: English
Pages (from-to): 143-153
Journal: Information Fusion
Publication status: Published - 2022

Austrian Classification of Fields of Science and Technology (ÖFOS)

  • 102 Computer Sciences
  • 102001 Artificial intelligence
  • 102015 Information systems
  • 502050 Business informatics
  • 505002 Data protection
