Transparent AI: explainability of deep learning based load disaggregation

Murray, David, Stankovic, Lina and Stankovic, Vladimir (2021) Transparent AI: explainability of deep learning based load disaggregation. In: The 1st ACM SIGEnergy Workshop of Fair, Accountable, Transparent, and Ethical AI for Smart Environments and Energy Systems, 2021-11-17 - 2021-11-18. (https://doi.org/10.1145/3486611.3492410)

Accepted Author Manuscript


Abstract

This paper focuses on explaining the outputs of deep learning based non-intrusive load monitoring (NILM). Explainability of NILM networks is needed by a range of stakeholders: (i) technology developers, to understand why a model is under- or over-predicting energy usage, missing appliances, or producing false positives; (ii) businesses offering energy advice based on NILM as part of a broader home energy management recommender system; and (iii) end-users, who need to understand the outcomes of the NILM inference.

ORCID iDs

Murray, David (ORCID: https://orcid.org/0000-0002-5040-9862), Stankovic, Lina (ORCID: https://orcid.org/0000-0002-8112-1976) and Stankovic, Vladimir (ORCID: https://orcid.org/0000-0002-1075-2420)