Towards transparent load disaggregation – a framework for quantitative evaluation of explainability using explainable AI
Batic, Djordje and Stankovic, Vladimir and Stankovic, Lina (2024) Towards transparent load disaggregation – a framework for quantitative evaluation of explainability using explainable AI. IEEE Transactions on Consumer Electronics, 70 (1). pp. 4345-4356. ISSN 0098-3063 (https://doi.org/10.1109/TCE.2023.3300530)
Filename: Batic-etal-IEEE-TOCE-2023-Towards-transparent-load-disaggregation-a-framework.pdf
Final Published Version
Abstract
Load Disaggregation, or Non-Intrusive Load Monitoring (NILM), refers to the process of estimating the energy consumption of individual domestic appliances from aggregated household consumption. Recently, Deep Learning (DL) approaches have seen increased adoption in the NILM community. However, DL NILM models are often treated as black-box algorithms, which raises concerns about algorithmic transparency and explainability, hindering wider adoption. Recent works have investigated the explainability of DL NILM, but they are limited to computationally expensive methods or simple classification problems. In this work, we present a methodology for the explainability of regression-based DL NILM with visual explanations, using explainable AI (XAI). Two levels of explainability are provided: sequence-level explanations highlight important features of the predicted time-series sequence of interest, while point-level explanations enable visualising explanations at a single point in time. To facilitate wider adoption of XAI, we define desirable properties of NILM explanations: faithfulness, robustness, and effective complexity. Addressing the limitation of existing XAI NILM approaches that do not assess the quality of explanations, these desirable properties are used for quantitative evaluation of explainability. We show that the proposed framework enables a better understanding of NILM outputs and helps improve design by providing a visualisation strategy and a rigorous evaluation of the quality of XAI methods, leading to transparency of outcomes.
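To illustrate the idea of a point-level explanation for a regression NILM model, the following is a minimal, hypothetical sketch. It is not the paper's method: the `predict` function stands in for a trained DL regressor, and the attribution technique shown is simple occlusion (replace one time step of the aggregate window with a baseline and measure the change in the predicted appliance power).

```python
# Hypothetical sketch of a point-level explanation via occlusion.
# `predict` is a toy stand-in for a trained DL NILM regressor, NOT the
# model from the paper: it weights the centre of the aggregate window
# more heavily, mimicking a window-centred sequence-to-point network.

def predict(window):
    n = len(window)
    centre = n // 2
    return sum(x / (1 + abs(i - centre)) for i, x in enumerate(window))

def occlusion_importance(window, baseline=0.0):
    """Score each time step by how much the prediction changes when
    that step is replaced with a baseline value (point-level view)."""
    ref = predict(window)
    scores = []
    for i in range(len(window)):
        occluded = list(window)
        occluded[i] = baseline
        scores.append(abs(ref - predict(occluded)))
    return scores

# Aggregate window with one large appliance activation at the centre.
window = [100.0, 120.0, 2000.0, 130.0, 110.0]
scores = occlusion_importance(window)
print(max(range(len(scores)), key=scores.__getitem__))  # prints 2
```

A sequence-level explanation would aggregate such per-step scores over the whole predicted sequence; in practice the scores would be rendered as a saliency overlay on the time series rather than printed.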
ORCID iDs
Batic, Djordje ORCID: https://orcid.org/0000-0002-7647-6641, Stankovic, Vladimir ORCID: https://orcid.org/0000-0002-1075-2420 and Stankovic, Lina ORCID: https://orcid.org/0000-0002-8112-1976
Item type: Article
ID code: 86445
Dates: 28 July 2023 (Accepted); 1 August 2023 (Published Online); 1 February 2024 (Published)
Subjects: Science > Mathematics > Electronic computers. Computer science
Department: Faculty of Engineering > Electronic and Electrical Engineering
Depositing user: Pure Administrator
Date deposited: 10 Aug 2023 15:07
Last modified: 21 Nov 2024 01:24
URI: https://strathprints.strath.ac.uk/id/eprint/86445