Combined heat and power system intelligent economic dispatch: a deep reinforcement learning approach

Zhou, Suyang and Hu, Zijian and Gu, Wei and Jiang, Meng and Chen, Meng and Hong, Qiteng and Booth, Campbell (2020) Combined heat and power system intelligent economic dispatch: a deep reinforcement learning approach. International Journal of Electrical Power and Energy Systems, 120. 106016. ISSN 0142-0615 (https://doi.org/10.1016/j.ijepes.2020.106016)

Accepted Author Manuscript: Zhou_etal_IJEPES_2020_Combined_heat_and_power_system_intelligent_economic_dispatch.pdf
License: Creative Commons Attribution-NonCommercial-NoDerivatives 4.0

Abstract

This paper proposes a deep reinforcement learning (DRL) approach for combined heat and power (CHP) system economic dispatch that adapts to different operating scenarios and significantly reduces computational complexity without sacrificing accuracy. In terms of problem formulation, most CHP economic dispatch problems are modeled as a high-dimensional, non-smooth objective function with a large number of non-linear constraints, requiring powerful optimization algorithms and considerable time to solve. To reduce solution time, most engineering applications linearize the optimization objective and the device models. To avoid this complicated linearization process, this paper instead models the CHP economic dispatch problem as a Markov decision process (MDP), which makes the model highly encapsulated while preserving the input-output characteristics of the various devices. Furthermore, we adapt an advanced deep reinforcement learning algorithm, distributed proximal policy optimization (DPPO), to the CHP economic dispatch problem. With this algorithm, the agent is trained to explore optimal dispatch strategies for different operating scenarios and to respond efficiently to system emergencies. In the deployment phase, the trained agent generates the optimal control strategy in real time from the current system state.

Compared with existing optimization methods, the advantages of the DRL method are mainly reflected in three aspects:
1) Adaptability: given the same network topology, the trained agent can handle the economic dispatch problem under various operating scenarios without recalculation.
2) High encapsulation: the user only needs to input the operating state to obtain the control strategy, whereas a conventional optimization algorithm requires the constraints and other formulas to be rewritten for each new situation.
3) Time-scale flexibility: the method can be applied to both day-ahead optimal scheduling and real-time control.

The proposed method is applied to two test systems with different characteristics. The results demonstrate that the DRL method can handle a variety of operating situations while achieving better optimization performance than most other algorithms.
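
To make the MDP formulation and the (D)PPO training objective described in the abstract concrete, the following is a minimal Python sketch of a single-CHP dispatch environment together with the clipped surrogate loss at the core of proximal policy optimization. Everything here is a hypothetical simplification: the class name CHPDispatchEnv, the linear cost coefficients, the fixed heat-to-power ratio, and the load/price sampling ranges are all illustrative assumptions, not the paper's actual system model or constraints.

import numpy as np

class CHPDispatchEnv:
    """Hypothetical single-CHP dispatch MDP (illustrative, not the paper's model).

    State : (electric load MW, heat load MWth, grid price $/MWh)
    Action: CHP electric output in [p_min, p_max]; heat output follows a fixed
            heat-to-power ratio; a gas boiler and grid imports cover residuals.
    Reward: negative operating cost, so maximizing reward minimizes cost.
    """

    def __init__(self, p_min=10.0, p_max=50.0, heat_ratio=1.2,
                 chp_cost=30.0, boiler_cost=25.0, seed=0):
        self.p_min, self.p_max = p_min, p_max
        self.heat_ratio = heat_ratio      # MWth produced per MWe of CHP output (assumed)
        self.chp_cost = chp_cost          # $/MWe-h, assumed linear fuel cost
        self.boiler_cost = boiler_cost    # $/MWth-h for the backup boiler (assumed)
        self.rng = np.random.default_rng(seed)

    def reset(self):
        # Sample a random operating scenario (loads and grid price).
        self.state = np.array([
            self.rng.uniform(20.0, 80.0),   # electric load
            self.rng.uniform(10.0, 60.0),   # heat load
            self.rng.uniform(20.0, 60.0),   # grid electricity price
        ])
        return self.state

    def step(self, action):
        e_load, h_load, price = self.state
        p_chp = float(np.clip(action, self.p_min, self.p_max))
        h_chp = self.heat_ratio * p_chp
        grid_import = max(e_load - p_chp, 0.0)    # buy any electric shortfall
        boiler_heat = max(h_load - h_chp, 0.0)    # boiler covers unmet heat
        cost = (self.chp_cost * p_chp
                + self.boiler_cost * boiler_heat
                + price * grid_import)
        # Each step presents a fresh scenario; reward is the negated cost.
        return self.reset(), -cost, False, {}

def ppo_clip_loss(ratio, advantage, eps=0.2):
    """Clipped surrogate objective at the core of (D)PPO.

    ratio     : pi_new(a|s) / pi_old(a|s) for a batch of transitions
    advantage : estimated advantages for the same batch
    """
    unclipped = ratio * advantage
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantage
    return -np.minimum(unclipped, clipped).mean()

A distributed PPO trainer in the spirit of the paper would run several workers, each stepping its own copy of such an environment and contributing gradients of this clipped loss to a shared policy network.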

ORCID iDs

Zhou, Suyang; Hu, Zijian; Gu, Wei; Jiang, Meng; Chen, Meng; Hong, Qiteng (ORCID: https://orcid.org/0000-0001-9122-1981); Booth, Campbell (ORCID: https://orcid.org/0000-0003-3869-4477)