Machine learning explanations by design: a case study explaining the predicted degradation of a roto-dynamic pump

Amin, Omnia and Brown, Blair and Stephen, Bruce and McArthur, Stephen and Livina, Valerie (2023) Machine learning explanations by design: a case study explaining the predicted degradation of a roto-dynamic pump. In: NDT 2023 - 60th Annual British Conference on NDT, 2023-09-12 - 2023-09-14, Northampton Town Centre Hotel.

Filename: Amin-etal-NDT2023-Machine-learning-explanations-by-design.pdf
Accepted Author Manuscript
License: Strathprints license 1.0


Abstract

The field of explainable Artificial Intelligence (AI) has attracted growing attention over the last few years, driven by the potential of AI to make accurate data-based predictions of asset health. One of the current research aims in AI is to address the challenges associated with adopting machine learning (ML) (i.e., data-driven) AI – that is, understanding how and why ML predictions are made. Although ML models have successfully provided accurate predictions in many applications, such as condition monitoring, concerns remain about the transparency of the prediction-making process. Ensuring that the models used are explainable to human users is therefore essential to building trust in the approaches proposed. Consequently, AI and ML practitioners need to be able to evaluate the suitability of any available eXplainable AI (XAI) tools for their intended domain and end users, while remaining aware of the tools’ limitations. This paper provides insight into various existing XAI approaches and the limitations that practitioners in condition monitoring applications should consider during the design process for an ML-based prediction. The aim is to assist practitioners in engineering applications in building interpretable and explainable models intended for end users who wish to improve a system’s reliability, and to help users make better-informed decisions based on the output of a predictive ML algorithm. It also emphasizes the importance of explainability in AI. The paper applies some of these tools to an explainability use case in which real condition monitoring data are used to predict the degradation of a roto-dynamic pump. Additionally, potential avenues are explored for enhancing the credibility of the explanations generated by XAI tools in condition monitoring applications, with the aim of offering more reliable explanations to domain experts.

ORCID iDs

Amin, Omnia; Brown, Blair; Stephen, Bruce (ORCID: https://orcid.org/0000-0001-7502-8129); McArthur, Stephen (ORCID: https://orcid.org/0000-0003-1312-8874); Livina, Valerie