Transparent AI: explainability of deep learning based load disaggregation
Murray, David, Stankovic, Lina and Stankovic, Vladimir (2021) Transparent AI: explainability of deep learning based load disaggregation. In: The 1st ACM SIGEnergy Workshop on Fair, Accountable, Transparent, and Ethical AI for Smart Environments and Energy Systems, 2021-11-17 - 2021-11-18. (https://doi.org/10.1145/3486611.3492410)
Accepted Author Manuscript: Murray_etal_FATEsys_2021_Transparent_AI_explainability_of_deep_learning_based_load_disaggregation.pdf (412kB)
Abstract
The paper focuses on explaining the outputs of deep-learning-based non-intrusive load monitoring (NILM). Explainability of NILM networks is needed by a range of stakeholders: (i) technology developers, to understand why a model is under- or over-predicting energy usage, missing appliances, or producing false positives; (ii) businesses offering energy advice based on NILM as part of a broader home energy management recommender system; and (iii) end-users, who need to understand the outcomes of the NILM inference.
ORCID iDs
Murray, David ORCID: https://orcid.org/0000-0002-5040-9862, Stankovic, Lina ORCID: https://orcid.org/0000-0002-8112-1976 and Stankovic, Vladimir ORCID: https://orcid.org/0000-0002-1075-2420
Item type: Conference or Workshop Item (Paper)
ID code: 78228
Dates: 17 November 2021 (Published); 12 October 2021 (Accepted)
Subjects: Technology > Electrical engineering. Electronics. Nuclear engineering
Department: Faculty of Engineering > Electronic and Electrical Engineering
Depositing user: Pure Administrator
Date deposited: 21 Oct 2021 09:23
Last modified: 11 Nov 2024 17:04
URI: https://strathprints.strath.ac.uk/id/eprint/78228