Performance-aware NILM model optimization for edge deployment

Sykiotis, Stavros and Athanasoulias, Sotirios and Kaselimi, Maria and Doulamis, Anastasios and Doulamis, Nikolaos and Stankovic, Lina and Stankovic, Vladimir (2023) Performance-aware NILM model optimization for edge deployment. IEEE Transactions on Green Communications and Networking, 7 (3). pp. 1434-1446. ISSN 2473-2400.

Text. Filename: Sykiotis_etal_IEEE_TGCN_2023_Performance_aware_NILM_model_optimization.pdf
Final Published Version
License: Creative Commons Attribution 4.0



Non-Intrusive Load Monitoring (NILM) describes the extraction of the individual consumption pattern of a domestic appliance from the aggregated household consumption. Nowadays, NILM research focus has shifted towards practical NILM applications, such as edge deployment, to accelerate the transition towards a greener energy future. NILM applications at the edge eliminate privacy concerns and data-transmission-related problems. However, edge resource restrictions pose additional challenges to NILM. NILM approaches are usually not designed to run on edge devices with limited computational capacity, and therefore model optimization is required for better resource management. Recent works have started investigating NILM model optimization, but they apply compression approaches arbitrarily, without considering the trade-off between model performance and computational cost. In this work, we present a NILM model optimization framework for edge deployment. The proposed edge optimization engine optimizes a NILM model for edge deployment depending on the edge device's limitations and includes a novel performance-aware algorithm to reduce the model's computational complexity. We validate our methodology on three edge application scenarios for four domestic appliances and four model architectures. Experimental results demonstrate that the proposed optimization approach can lead to up to a 36.3% average reduction in model computational complexity and a 75% reduction in storage requirements.
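The core idea of performance-aware compression — prune or quantize only as far as the model's accuracy budget allows — can be sketched in a few lines. The sketch below is a hypothetical illustration, not the paper's actual algorithm: it uses a toy weight list, magnitude-based pruning, and a stand-in error metric where a real NILM pipeline would use a neural model and disaggregation error (e.g. MAE) on a validation set.

```python
# Hypothetical performance-aware pruning loop (illustrative only):
# try increasingly aggressive compression and keep the most compressed
# model whose metric stays within a tolerance of the uncompressed baseline.

def evaluate(weights):
    """Stand-in metric: mean absolute error against a fixed target.
    In a real NILM setting this would be disaggregation error on a
    validation set."""
    target = [0.9, 0.0, 0.1, 0.0, 0.8, 0.05]
    return sum(abs(w - t) for w, t in zip(weights, target)) / len(weights)

def prune(weights, ratio):
    """Zero out the `ratio` fraction of smallest-magnitude weights."""
    k = int(len(weights) * ratio)
    if k == 0:
        return list(weights)
    threshold = sorted(abs(w) for w in weights)[k - 1]
    return [0.0 if abs(w) <= threshold else w for w in weights]

def performance_aware_prune(weights, tolerance=0.05,
                            steps=(0.2, 0.4, 0.6, 0.8)):
    """Return the most aggressively pruned weights whose metric degrades
    by at most `tolerance` relative to the unpruned baseline."""
    baseline = evaluate(weights)
    best = list(weights)
    for ratio in steps:
        candidate = prune(weights, ratio)
        if evaluate(candidate) - baseline <= tolerance:
            best = candidate  # still within the performance budget
        else:
            break  # further pruning costs too much accuracy
    return best

weights = [0.91, 0.01, 0.12, -0.02, 0.79, 0.04]
compressed = performance_aware_prune(weights)
print(compressed)
```

The same loop structure applies when the compression step is quantization or layer removal instead of pruning: only `prune` and `evaluate` change, while the budget check stays the same.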