Explainability-informed feature selection and performance prediction for nonintrusive load monitoring
Mollel, Rachel Stephen and Stankovic, Lina and Stankovic, Vladimir (2023) Explainability-informed feature selection and performance prediction for nonintrusive load monitoring. Sensors, 23 (10). 4845. ISSN 1424-8220 (https://doi.org/10.3390/s23104845)
Text: Mollel_etal_Sensors_2023_Explainability_informed_feature_selection_and_performance_prediction.pdf (Final Published Version, 3MB)
Abstract
With the massive worldwide smart-metering roll-out, both energy suppliers and users are starting to tap into the potential of higher-resolution energy readings for accurate billing, improved demand response, tariffs better tuned to users and the grid, and empowering end-users to learn how much their individual appliances contribute to their electricity bills via nonintrusive load monitoring (NILM). A number of NILM approaches based on machine learning (ML) have been proposed over the years, focusing on improving NILM model performance. However, the trustworthiness of the NILM model itself has hardly been addressed. Explaining the underlying model and its reasoning is important both to understand why the model underperforms, satisfying user curiosity, and to enable model improvement. This can be done by leveraging naturally interpretable or explainable models as well as explainability tools. This paper adopts a naturally interpretable decision tree (DT)-based approach for a NILM multiclass classifier. Furthermore, it leverages explainability tools to determine local and global feature importance, and designs a methodology that informs feature selection for each appliance class and can predict how well a trained model will classify an appliance on unseen test data, minimising testing time on target datasets. We explain how one or more appliances can negatively impact the classification of other appliances, and we predict appliance and model performance of the REFIT-trained models on unseen data from the same house and on unseen houses from the UK-DALE dataset. Experimental results confirm that models trained with the explainability-informed local feature importance can improve toaster classification performance from 65% to 80%.
Additionally, instead of a single five-class classifier incorporating all five appliances, a three-class classifier comprising the kettle, microwave, and dishwasher together with a two-class classifier comprising the toaster and washing machine improves classification performance for the dishwasher from 72% to 94% and for the washing machine from 56% to 80%.
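The core idea of the methodology described above — training an interpretable decision tree, reading off per-class feature importance, and retraining on only the informative features — can be sketched as follows. This is an illustrative sketch on synthetic data, not the paper's actual pipeline; the feature names, data shapes, and scikit-learn usage are assumptions for demonstration only.

```python
# Illustrative sketch (assumed scikit-learn workflow, synthetic data):
# explainability-informed feature selection for a DT-based classifier.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))             # 8 hypothetical load features
y = (X[:, 2] + X[:, 5] > 0).astype(int)   # labels driven by features 2 and 5

# 1) Train a full-feature tree and inspect its global feature importances,
#    which a tree exposes naturally (impurity reduction per feature).
full = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, y)
importances = full.feature_importances_

# 2) Keep only the most informative features (top 2 in this toy setup).
keep = np.argsort(importances)[-2:]

# 3) Retrain on the reduced feature set, as importance-informed selection.
reduced = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X[:, keep], y)
print("selected features:", sorted(keep.tolist()))
```

In the paper's setting, step 1 would be driven by explainability tools applied per appliance class (local and global importance) rather than impurity-based importances alone, and step 3 would split the appliances across smaller classifiers as the results above describe.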
ORCID iDs
Mollel, Rachel Stephen ORCID: https://orcid.org/0000-0001-8591-9830, Stankovic, Lina ORCID: https://orcid.org/0000-0002-8112-1976 and Stankovic, Vladimir ORCID: https://orcid.org/0000-0002-1075-2420
Item type: Article
ID code: 85548
Dates: 17 May 2023 (Published); 15 May 2023 (Accepted)
Subjects: Technology > Electrical engineering. Electronics. Nuclear engineering
Department: Faculty of Engineering > Electronic and Electrical Engineering
Depositing user: Pure Administrator
Date deposited: 18 May 2023 08:38
Last modified: 21 Dec 2024 01:27
URI: https://strathprints.strath.ac.uk/id/eprint/85548