Explainable NILM Networks

Murray, David and Stankovic, Lina and Stankovic, Vladimir (2020) Explainable NILM Networks. In: 5th International Workshop on Non-Intrusive Load Monitoring, 2020-11-18, Virtual. (https://doi.org/10.1145/3427771.3427855)

Abstract

There has been an explosion in the recent literature on non-intrusive load monitoring (NILM) approaches based on neural networks and other advanced machine learning methods. However, while these methods provide competitive accuracy, their inner workings are less clear. Understanding the outputs of the networks helps improve their design, highlights the features and aspects of the data relevant to the decision, gives a better picture of a model's accuracy (since a single accuracy figure is often insufficient), and inherently builds a level of trust in the value of the consumption feedback provided to the NILM end-user. Explainable Artificial Intelligence (XAI) aims to address this issue by explaining such “black boxes”. XAI methods, developed mainly for image and text models, can in many cases interpret the outputs of complex models well, making them transparent. However, explaining inference on time-series data remains a challenge. In this paper, we show how some XAI-based approaches can be used to explain the inner workings of deep learning-based NILM autoencoders, and examine why a network performs or does not perform well in certain cases.
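To make concrete what explaining a NILM autoencoder can look like, the sketch below applies occlusion sensitivity, one common XAI technique for time series, to a toy convolutional autoencoder in PyTorch: a flat patch is slid across the aggregate-power input and the resulting change in the appliance estimate is recorded per timestep. The architecture, window length, patch size, and random weights are illustrative assumptions, not the models or method evaluated in the paper.

```python
# Minimal sketch of occlusion-based saliency for a NILM autoencoder.
# Architecture, window length, and masking value are illustrative
# assumptions, not the configuration used in the paper.
import torch
import torch.nn as nn

WINDOW = 128  # length of the aggregate power window (assumed)

class DAE(nn.Module):
    """Tiny denoising-autoencoder-style NILM model (illustrative)."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(1, 8, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(8, 16, kernel_size=5, padding=2), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Conv1d(16, 8, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(8, 1, kernel_size=5, padding=2),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def occlusion_saliency(model, x, patch=8, fill=0.0):
    """Slide a flat patch over the input window and record how much the
    appliance estimate changes; large changes mark influential samples."""
    model.eval()
    with torch.no_grad():
        baseline = model(x)
        saliency = torch.zeros(x.shape[-1])
        for start in range(0, x.shape[-1] - patch + 1):
            occluded = x.clone()
            occluded[..., start:start + patch] = fill
            delta = (model(occluded) - baseline).abs().sum()
            saliency[start:start + patch] += delta
    return saliency / saliency.max()

# Usage with a synthetic aggregate window and untrained weights,
# purely to illustrate the mechanics of the explanation.
x = torch.rand(1, 1, WINDOW)           # one aggregate-power window
model = DAE()
scores = occlusion_saliency(model, x)  # per-timestep importance in [0, 1]
print(scores)
```

On a trained model, plotting these scores alongside the input window shows which parts of the aggregate signal the network relies on for its appliance estimate, which is the kind of inspection the paper uses to examine where and why the networks succeed or fail.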

ORCID iDs

Murray, David: https://orcid.org/0000-0002-5040-9862; Stankovic, Lina: https://orcid.org/0000-0002-8112-1976; Stankovic, Vladimir: https://orcid.org/0000-0002-1075-2420