Predictive business process monitoring (PBPM) is a class of techniques designed to predict behaviour, such as next activities, in running traces. PBPM techniques aim to improve process performance by providing predictions to process analysts, supporting them in their decision making. However, the limited predictive quality of PBPM techniques has been considered an essential obstacle to establishing such techniques in practice. With the use of deep neural networks (DNNs), the techniques' predictive quality could be improved for tasks like next activity prediction. While DNNs achieve a promising predictive quality, they still lack comprehensibility due to their hierarchical approach to learning representations. Nevertheless, process analysts need to comprehend the cause of a prediction to identify intervention mechanisms that might affect the decision making to secure process performance. In this paper, we propose XNAP, the first explainable, DNN-based PBPM technique for next activity prediction. XNAP integrates a layer-wise relevance propagation method from the field of explainable artificial intelligence to make the predictions of a long short-term memory DNN explainable by providing relevance values for activities. We show the benefit of our approach through two real-life event logs.
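To illustrate the core idea behind layer-wise relevance propagation (LRP) referenced in the abstract, the sketch below applies the standard epsilon-rule to a single dense layer. This is a generic, minimal example, not the paper's LSTM-specific propagation scheme; the function name and shapes are illustrative assumptions.

```python
import numpy as np

def lrp_epsilon(x, W, b, R_out, eps=1e-6):
    """Redistribute output relevance R_out onto the inputs x
    of one dense layer (z = W @ x + b) via the LRP epsilon-rule.
    Note: this is a generic illustration, not XNAP's LSTM rules."""
    z = W @ x + b
    z = z + eps * np.sign(z)   # stabilizer avoids division by ~0
    s = R_out / z              # relevance per unit of pre-activation
    c = W.T @ s                # backward pass through the weights
    return x * c               # relevance attributed to each input

# Toy usage: relevance is (approximately) conserved across the layer
x = np.array([1.0, 2.0])
W = np.array([[0.5, -1.0],
              [2.0, 0.0]])
b = np.zeros(2)
R_out = W @ x                  # take the layer output as initial relevance
R_in = lrp_epsilon(x, W, b, R_out)
```

In XNAP, relevance values propagated this way back to the input layer are what give each activity in the running trace a score explaining the predicted next activity.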
|Title of host publication||XNAP: Making LSTM-based Next Activity Predictions Explainable by Using LRP|
|Editors||Chiara Di Francescomarino, Fabrizio Maria Maggi, Andrea Marrella, Arik Senderovich, Emilio Sulis|
|Place of publication||International Conference on Business Process Management 2020 - 4th International Workshop in Artifi|
|Publication status||Published - 2020|