Blockchain for explainable and trustworthy artificial intelligence

Mohamed Nassar, Khaled Salah, Muhammad Habib Ur Rehman, Davor Svetinovic

Publication: Scientific journal › Original article in journal › Peer-reviewed


The increasing computational power and the proliferation of big data are empowering Artificial Intelligence (AI) to achieve massive adoption and applicability in many fields. The lack of explanation for the decisions made by today's AI algorithms is a major drawback in critical decision-making systems. For example, deep learning offers no control over, or reasoning about, its internal processes or outputs. More importantly, current black-box AI implementations are subject to bias and adversarial attacks that may poison the learning or inference processes. Explainable AI (XAI) is an emerging class of AI algorithms that provide explanations for their decisions. In this paper, we propose a framework for achieving more trustworthy and explainable AI by leveraging blockchain, smart contracts, trusted oracles, and decentralized storage. We specify a framework for complex AI systems in which decision outcomes are reached through the decentralized consensus of multiple AI and XAI predictors. The paper discusses how the proposed framework can be applied in key application areas, with practical use cases.
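As an illustration of the consensus idea described in the abstract, the sketch below shows, in plain Python, how a decision could be reached by majority vote over several independent predictors, each returning a prediction together with a human-readable explanation, and how the resulting decision record could be hashed so that a smart contract might anchor it on-chain while the full record lives in decentralized storage. The predictor names, vote rule, and record fields are all hypothetical: this is a minimal off-chain stand-in for the mechanism the paper delegates to smart contracts and trusted oracles, not the authors' implementation.

```python
import hashlib
import json
from collections import Counter

# Hypothetical predictors standing in for the AI/XAI oracles of the
# framework. Each returns a (label, explanation) pair; the names and
# the hard-coded logic are illustrative only.
def predictor_a(x):
    return ("approve", "score above threshold")

def predictor_b(x):
    return ("approve", "feature x[0] dominates positively")

def predictor_c(x):
    return ("reject", "anomalous feature pattern")

PREDICTORS = [predictor_a, predictor_b, predictor_c]

def consensus_decision(x):
    """Majority vote over independent predictors, mimicking the
    decentralized consensus a smart contract could enforce on-chain."""
    votes = [(p.__name__, *p(x)) for p in PREDICTORS]
    tally = Counter(label for _, label, _ in votes)
    decision, count = tally.most_common(1)[0]
    record = {
        "input_digest": hashlib.sha256(repr(x).encode()).hexdigest(),
        "votes": votes,
        "decision": decision,
        "support": f"{count}/{len(PREDICTORS)}",
    }
    # The record's hash is what a smart contract might anchor on-chain,
    # with the full record kept in decentralized storage (e.g., IPFS).
    record_hash = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    return decision, record, record_hash

if __name__ == "__main__":
    decision, record, record_hash = consensus_decision([0.9, 0.1])
    print(decision, record_hash)
```

In the framework itself, each predictor would be an oracle-backed service and the tally-and-record step would be enforced by a smart contract rather than a local function; the sketch only fixes the shape of the data that such a contract would need.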
Journal: WIREs Data Mining and Knowledge Discovery
Publication status: Published - 2019