Towards transparent load disaggregation – a framework for quantitative evaluation of explainability using explainable AI

Batic, Djordje and Stankovic, Vladimir and Stankovic, Lina (2023) Towards transparent load disaggregation – a framework for quantitative evaluation of explainability using explainable AI. IEEE Transactions on Consumer Electronics. ISSN 0098-3063 (https://doi.org/10.1109/TCE.2023.3300530)

Accepted Author Manuscript
License: Creative Commons Attribution 4.0

Abstract

Load Disaggregation, or Non-intrusive Load Monitoring (NILM), refers to the process of estimating the energy consumption of individual domestic appliances from aggregate household consumption. Recently, Deep Learning (DL) approaches have seen increased adoption in the NILM community. However, DL NILM models are often treated as black-box algorithms, which raises concerns about algorithmic transparency and explainability and hinders wider adoption. Recent works have investigated the explainability of DL NILM; however, they are limited to computationally expensive methods or simple classification problems. In this work, we present a methodology for explaining regression-based DL NILM with visual explanations, using explainable AI (XAI). Two levels of explanation are provided: sequence-level explanations highlight important features of a predicted time-series sequence of interest, while point-level explanations enable visualising explanations at a single point in time. To facilitate wider adoption of XAI, we define desirable properties of NILM explanations: faithfulness, robustness and effective complexity. Addressing the limitation of existing XAI NILM approaches, which do not assess the quality of explanations, these desirable properties are used for quantitative evaluation of explainability. We show that the proposed framework enables a better understanding of NILM outputs and helps improve design by providing a visualisation strategy and a rigorous evaluation of the quality of XAI methods, leading to transparency of outcomes.
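The point-level explanations described above can be illustrated with a simple perturbation-based attribution. The sketch below is not the paper's method: it uses occlusion on a toy linear regressor standing in for a DL NILM model, and the model weights, window size and zero-power baseline are all illustrative assumptions.

```python
import numpy as np

# Toy stand-in for a DL NILM regressor: maps a short aggregate-load
# window to one appliance-power estimate. (Hypothetical model; the
# paper evaluates deep regression networks.)
weights = np.array([0.05, 0.1, 0.7, 0.1, 0.05])  # model "attends" to the centre

def nilm_model(window: np.ndarray) -> float:
    """Predict appliance power (watts) from a 5-sample aggregate window."""
    return float(weights @ window)

def occlusion_explanation(model, window: np.ndarray, baseline: float = 0.0) -> np.ndarray:
    """Point-level importance scores: the drop in the model's prediction
    when each input sample is replaced by a baseline value (occlusion)."""
    ref = model(window)
    scores = np.empty_like(window, dtype=float)
    for i in range(len(window)):
        occluded = window.copy()
        occluded[i] = baseline
        scores[i] = ref - model(occluded)
    return scores

window = np.array([100.0, 120.0, 300.0, 110.0, 95.0])  # aggregate watts
scores = occlusion_explanation(nilm_model, window)
print(scores)                # per-sample importance of each time point
print(int(scores.argmax()))  # -> 2: the centre sample drives the prediction
```

A basic faithfulness-style sanity check follows from the same idea: for this linear toy model, the occlusion scores sum exactly to the prediction gap between the real window and an all-baseline window, so points the model truly relies on receive proportionally larger scores.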