Interpretable and explainable machine learning for ultrasonic defect sizing

Pyle, Richard J. and Hughes, Robert R. and Wilcox, Paul D. (2023) Interpretable and explainable machine learning for ultrasonic defect sizing. IEEE Transactions on Ultrasonics, Ferroelectrics and Frequency Control, 70 (4). pp. 277-290. ISSN 1525-8955 (https://doi.org/10.1109/TUFFC.2023.3248968)

Full text: Pyle_etal_IEEE_TUFFC_2023_Interpretable_and_explainable_machine_learning.pdf (Accepted Author Manuscript)
License: Strathprints license 1.0


Abstract

Despite its popularity in the literature, there are few examples of machine learning (ML) being used for industrial nondestructive evaluation (NDE) applications. A significant barrier is the ‘black box’ nature of most ML algorithms. This paper aims to improve the interpretability and explainability of ML for ultrasonic NDE by presenting a novel dimensionality reduction method: Gaussian feature approximation (GFA). GFA involves fitting a 2D elliptical Gaussian function to an ultrasonic image and storing the seven parameters that describe each Gaussian. These seven parameters can then be used as inputs to data analysis methods such as the defect sizing neural network presented in this paper. GFA is applied to ultrasonic defect sizing for inline pipe inspection as an example application. This approach is compared to sizing with the same neural network using two other dimensionality reduction methods as inputs (the parameters of 6 dB drop boxes and principal component analysis), as well as to a convolutional neural network applied to raw ultrasonic images. Of the dimensionality reduction methods tested, GFA features produce the sizing accuracy closest to that achieved with the raw images, with only a 23% increase in RMSE despite a 96.5% reduction in the dimensionality of the input data. Implementing ML with GFA is implicitly more interpretable than doing so with principal component analysis or raw images as inputs, and achieves significantly better sizing accuracy than 6 dB drop boxes. Shapley additive explanations (SHAP) are used to calculate how each feature contributes to the prediction of an individual defect’s length. Analysis of the SHAP values demonstrates that the proposed GFA-based neural network exhibits many of the same relationships between defect indications and their predicted sizes as traditional NDE sizing methods.
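To make the GFA step concrete, the following sketch fits a single 2D elliptical Gaussian to an ultrasonic image with scipy.optimize.curve_fit and returns the seven fitted parameters as a feature vector. It is a minimal illustration under stated assumptions, not the authors' implementation: the seven-parameter form used here (amplitude, centre x and y, two widths, rotation angle, and a constant offset) is one plausible reading of the abstract, and the function names, initial-guess heuristic, and pixel-index coordinate system are all placeholders.

import numpy as np
from scipy.optimize import curve_fit

def elliptical_gaussian(coords, amplitude, x0, y0, sigma_x, sigma_y, theta, offset):
    # Rotated 2D Gaussian surface, flattened so it can be passed to curve_fit.
    x, y = coords
    a = np.cos(theta) ** 2 / (2 * sigma_x ** 2) + np.sin(theta) ** 2 / (2 * sigma_y ** 2)
    b = -np.sin(2 * theta) / (4 * sigma_x ** 2) + np.sin(2 * theta) / (4 * sigma_y ** 2)
    c = np.sin(theta) ** 2 / (2 * sigma_x ** 2) + np.cos(theta) ** 2 / (2 * sigma_y ** 2)
    g = offset + amplitude * np.exp(
        -(a * (x - x0) ** 2 + 2 * b * (x - x0) * (y - y0) + c * (y - y0) ** 2)
    )
    return g.ravel()

def gfa_features(image):
    # Fit one elliptical Gaussian to the image; the seven fitted parameters
    # become the low-dimensional representation of the defect indication.
    ny, nx = image.shape
    x, y = np.meshgrid(np.arange(nx), np.arange(ny))
    row, col = np.unravel_index(np.argmax(image), image.shape)
    p0 = [image.max() - image.min(), col, row, nx / 4, ny / 4, 0.0, image.min()]
    popt, _ = curve_fit(elliptical_gaussian, (x, y), image.ravel(), p0=p0, maxfev=5000)
    return popt  # [amplitude, x0, y0, sigma_x, sigma_y, theta, offset]

The seven-element vector returned by gfa_features would then replace the full image (a 96.5% dimensionality reduction in the paper's case) as the input to the sizing neural network.

The abstract's SHAP analysis can likewise be sketched with the model-agnostic KernelExplainer from the shap library. Everything below is synthetic and hypothetical (a stand-in regressor trained on random seven-feature data); it only shows the mechanics of attributing a single defect's predicted length to the individual GFA features.

import numpy as np
import shap
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 7))       # placeholder GFA feature vectors
y = 2.0 * X[:, 3] + 1.5 * X[:, 4]   # made-up "length" driven by the two width features

model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X, y)

# Explain one defect's prediction against a background sample of the data.
explainer = shap.KernelExplainer(model.predict, X[:50])
shap_values = explainer.shap_values(X[:1])
print(shap_values)  # per-feature contributions to this defect's predicted length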

ORCID iDs

Pyle, Richard J. (ORCID: https://orcid.org/0000-0002-5236-7467), Hughes, Robert R. and Wilcox, Paul D.