A case-study led investigation of explainable AI (XAI) to support deployment of prognostics in industry

Amin, Omnia and Brown, Blair and Stephen, Bruce and McArthur, Stephen; Do, Phuc and Michau, Gabriel and Ezhilarasu, Cordelia, eds. (2022) A case-study led investigation of explainable AI (XAI) to support deployment of prognostics in industry. In: Proceedings of the European Conference Of The PHM Society 2022. PHM Society, Pennsylvania, pp. 9-20. ISBN 9781936263363 (https://doi.org/10.36001/phme.2022.v7i1.3336)

Final Published Version: Amin_etal_PHME2022_A_case_study_led_investigation_of_explainable_AI.pdf
License: Creative Commons Attribution 3.0


Abstract

Civil nuclear generation plant must maximise its operational uptime in order to remain viable. With ageing plant and heavily regulated operating constraints, monitoring is commonplace, but identifying health indicators that pre-empt disruptive faults is challenging owing to the volumes of data involved. Machine learning (ML) models are increasingly deployed in prognostics and health management (PHM) systems across industrial applications; however, many are black-box models that perform well yet offer little or no insight into how their predictions are reached. Nuclear generation is subject to significant regulatory oversight, so decisions based on the outputs of predictive models must be explainable. Such explanations enable stakeholders to trust those outputs, satisfy regulatory bodies and, in turn, make more effective operational decisions. How ML model outputs convey explanations to stakeholders is important: explanations must be expressed in human-understandable, domain-relevant terms so that stakeholders can interpret predictions rapidly, trust them, and act on them effectively. The main contributions of this paper are: (1) introducing XAI into the PHM of industrial assets, with a novel set of algorithms that translate the explanations produced by SHAP into text-based, human-interpretable explanations; and (2) considering the context of these explanations as intended for application to prognostics of critical assets in industrial settings. The use of XAI will not only help in understanding how these ML models work, but also identify the features contributing most to the predicted degradation of the nuclear generation asset.
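To illustrate the kind of translation the abstract describes, the sketch below turns precomputed SHAP feature attributions into a plain-language sentence. This is a minimal illustration, not the authors' algorithm: the function name, the example feature names, and the attribution values are all hypothetical, and it assumes SHAP values have already been computed for a single prediction.

```python
# Minimal sketch (hypothetical, not the paper's algorithm): phrase the
# top SHAP feature attributions for one prediction as operator-readable text.
def shap_to_text(shap_values, feature_names, prediction, top_k=2):
    """Rank features by |SHAP value| and describe the top contributors."""
    ranked = sorted(
        zip(feature_names, shap_values),
        key=lambda fv: abs(fv[1]),
        reverse=True,
    )
    parts = []
    for name, value in ranked[:top_k]:
        direction = "increased" if value > 0 else "decreased"
        parts.append(f"{name} {direction} the predicted risk by {abs(value):.2f}")
    return f"Prediction '{prediction}': " + "; ".join(parts) + "."

# Hypothetical attribution values for illustration only.
msg = shap_to_text(
    [0.42, -0.15, 0.03],
    ["bearing temperature", "vibration RMS", "load"],
    "degradation likely",
)
```

Here `msg` reads "Prediction 'degradation likely': bearing temperature increased the predicted risk by 0.42; vibration RMS decreased the predicted risk by 0.15." A production version would map features to domain vocabulary and calibrate the wording with stakeholders, which is the gap the paper's algorithms address.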

ORCID iDs

Amin, Omnia; Brown, Blair; Stephen, Bruce (ORCID: https://orcid.org/0000-0001-7502-8129); McArthur, Stephen (ORCID: https://orcid.org/0000-0003-1312-8874). Editors: Do, Phuc; Michau, Gabriel; Ezhilarasu, Cordelia.