XNILMBoost: explainability-informed load disaggregation training enhancement using attribution priors

Batic, Djordje and Stankovic, Vladimir and Stankovic, Lina (2025) XNILMBoost: explainability-informed load disaggregation training enhancement using attribution priors. Engineering Applications of Artificial Intelligence, 141, 109766. ISSN 0952-1976 (https://doi.org/10.1016/j.engappai.2024.109766)

Full text: Batic-etal-EAAI-2024-XNILMBoost-explainability-informed-load-disaggregation-training.pdf (Final Published Version, 1MB). Licence: Creative Commons Attribution 4.0.

Abstract

In the ongoing energy transition, characterized by increased reliance on distributed renewable sources and smart grid technologies, advanced and trustworthy artificial intelligence (AI) is crucial for energy management systems. Non-intrusive load monitoring (NILM), a method for inferring individual appliance energy consumption from aggregate smart meter data, has gained prominence for enhancing energy efficiency. However, the deep neural network models used in NILM, while effective, raise transparency and trust concerns due to their complexity. This paper introduces a novel explainability-informed training framework designed specifically for low-frequency NILM. Our approach aligns with the trustworthy-AI principles of human agency and oversight, technical robustness, and transparency by incorporating explainability directly into the training phase of a NILM model. We propose a novel iterative, explainability-informed training algorithm that uses attribution priors to guide model optimization, and we implement and evaluate the framework across multiple state-of-the-art NILM architectures built on convolutional, recurrent, and dilated causal layers. We introduce a novel Robustness-Trust metric that measures joint improvement in predictive and explainability performance, combining the explainability metrics of faithfulness, robustness, and effective complexity with NILM-specific regression and classification metrics of predictive performance. Results broadly show that robust models achieve better explainability, while explainability-enhanced models can in turn become more robust. Together, our results demonstrate significant improvements in the robustness and transparency of NILM systems across appliances, model architectures, measurement scales, building types, and energy usage patterns. This work paves the way for more transparent and trustworthy deployments of AI-driven energy systems.
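The abstract names the core technique, training with attribution priors, without giving the algorithm itself; the general idea can be sketched as follows. This minimal Python/PyTorch example is an illustration under stated assumptions, not the paper's implementation: all names (attribution_prior, train_step, lambda_xai) are hypothetical, and it uses a simple total-variation smoothness prior on input-gradient attributions added to a standard NILM regression loss, whereas the paper's iterative algorithm, choice of prior, and explainability metrics (faithfulness, robustness, effective complexity) may differ.

# A minimal sketch of attribution-prior training, assuming a PyTorch
# sequence-to-sequence NILM model; names are illustrative, not the paper's API.
import torch
import torch.nn.functional as F

def attribution_prior(model, x):
    """Penalize rough attributions: total variation, over time, of the
    input gradient (saliency) of the predicted appliance power."""
    x = x.clone().requires_grad_(True)
    y_hat = model(x)                                   # (batch, window)
    grads, = torch.autograd.grad(y_hat.sum(), x, create_graph=True)
    return (grads[:, 1:] - grads[:, :-1]).abs().mean()

def train_step(model, optimizer, x, y, lambda_xai=0.1):
    """One step of the joint objective: NILM regression loss plus a
    weighted attribution prior, so explanations are shaped during
    training rather than only inspected post hoc."""
    optimizer.zero_grad()
    loss = F.mse_loss(model(x), y) + lambda_xai * attribution_prior(model, x)
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy usage: a stand-in linear "disaggregator" over 128-sample windows.
model = torch.nn.Linear(128, 128)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(32, 128)   # aggregate mains windows
y = torch.relu(x)          # placeholder appliance targets
print(train_step(model, optimizer, x, y))

The design point the sketch captures is that the explanation signal enters the loss, so gradients flow through the attributions themselves; the iterative, metric-guided refinement described in the abstract would wrap around such a step.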

ORCID iDs

Batic, Djordje: https://orcid.org/0000-0002-7647-6641; Stankovic, Vladimir: https://orcid.org/0000-0002-1075-2420; Stankovic, Lina: https://orcid.org/0000-0002-8112-1976