Comparing policy gradient and value function based reinforcement learning methods in simulated electrical power trade

Lincoln, Richard and Galloway, Stuart and Stephen, Bruce and Burt, Graeme (2012) Comparing policy gradient and value function based reinforcement learning methods in simulated electrical power trade. IEEE Transactions on Power Systems, 27 (1). pp. 373-380. ISSN 0885-8950 (https://doi.org/10.1109/TPWRS.2011.2166091)

PDF (preprint): tps_policy_paper_submit_final_040711.pdf (200kB)

Abstract

In electrical power engineering, reinforcement learning algorithms can be used to model the strategies of electricity market participants. However, traditional value function based reinforcement learning algorithms suffer from convergence issues when used with value function approximators. Function approximation is required in this domain to capture the characteristics of the complex and continuous multivariate problem space. The contribution of this paper is the comparison of policy gradient reinforcement learning methods, using artificial neural networks for policy function approximation, with traditional value function based methods in simulations of electricity trade. The methods are compared using an AC optimal power flow based power exchange auction market model and a reference electric power system model.
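The abstract contrasts value function methods, which can diverge under function approximation, with policy gradient methods that adjust a parameterised policy directly. As a minimal illustration of the policy-gradient idea (a basic REINFORCE update on a softmax policy over a toy two-action "bidding" choice), the sketch below is illustrative only and is not the authors' market simulation; the reward values and learning rate are assumptions.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over action scores."""
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

rng = np.random.default_rng(0)

# Toy stand-in for a market decision: two bidding actions,
# where action 1 has the higher expected profit (assumed values).
expected_reward = [1.0, 2.0]

theta = np.zeros(2)   # policy parameters: one score per action
alpha = 0.1           # learning rate (assumed)

for _ in range(500):
    probs = softmax(theta)
    a = rng.choice(2, p=probs)
    r = expected_reward[a] + rng.normal(0.0, 0.1)  # noisy observed reward
    # REINFORCE: for a softmax policy, grad of log pi(a) w.r.t. theta
    # is one_hot(a) - probs; step in that direction scaled by reward.
    grad_log_pi = -probs
    grad_log_pi[a] += 1.0
    theta += alpha * r * grad_log_pi

final_probs = softmax(theta)
print(final_probs)  # the policy should come to favour action 1
```

In the paper's setting the two-element score vector is replaced by an artificial neural network mapping market state to a distribution over bids, but the update rule follows the same gradient-of-log-probability form.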

ORCID iDs

Lincoln, Richard; Galloway, Stuart (ORCID: https://orcid.org/0000-0003-1978-993X); Stephen, Bruce (ORCID: https://orcid.org/0000-0001-7502-8129); Burt, Graeme (ORCID: https://orcid.org/0000-0002-0315-5919)