EVCAR: Multi-agent reinforcement learning for electric vehicle charging network recovery

Abdelfattah, Mohamed, Li, Nan and Hartmann, Timo; in Moreno-Rangel, Alejandro and Kumar, Bimal, eds. (2025) EVCAR: Multi-agent reinforcement learning for electric vehicle charging network recovery. In: EG-ICE 2025. University of Strathclyde Publishing, GBR. ISBN 9781914241826. (https://doi.org/10.17868/strath.00093305)


Abstract

We introduce EVCAR, a system for restoring Electric Vehicle (EV) charging Quality of Service (QoS) after prolonged service disruptions, built on Twin Delayed Deep Deterministic Policy Gradient with multi-agent learning (TD3-MADDPG). The system combines specialized agent types: district-level agents manage local charging strategies, while a central redistribution agent allocates EVs across districts to prevent congestion and balance grid load. Using TD3-MADDPG, the framework learns spatially adaptive policies that improve voltage stability, reduce waiting times, and increase charging success rates. Compared with traditional fuzzy-logic and fixed partial-charging methods, EVCAR achieves faster and more reliable recovery. Although it is computationally more intensive, the framework's performance suggests strong potential for real-world applications with scalable abstractions.
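
The abstract describes a TD3-style actor-critic setup with several district agents and one central redistribution agent trained under a MADDPG-style centralised critic. The sketch below illustrates one plausible way to lay out such agents in PyTorch; the class names, network sizes, agent count, and observation/action dimensions are illustrative assumptions and are not taken from the paper.

```python
# Minimal sketch of a TD3-MADDPG agent layout (assumed PyTorch implementation;
# all dimensions and hyperparameters below are hypothetical).
import copy
import torch
import torch.nn as nn


class Actor(nn.Module):
    """Deterministic policy: local observation -> charging/redistribution action."""
    def __init__(self, obs_dim, act_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, act_dim), nn.Tanh(),  # actions scaled to [-1, 1]
        )

    def forward(self, obs):
        return self.net(obs)


class TwinCritic(nn.Module):
    """TD3 twin critics, centralised MADDPG-style over joint observations/actions."""
    def __init__(self, joint_obs_dim, joint_act_dim):
        super().__init__()
        def q_net():
            return nn.Sequential(
                nn.Linear(joint_obs_dim + joint_act_dim, 256), nn.ReLU(),
                nn.Linear(256, 256), nn.ReLU(),
                nn.Linear(256, 1),
            )
        self.q1, self.q2 = q_net(), q_net()

    def forward(self, joint_obs, joint_act):
        x = torch.cat([joint_obs, joint_act], dim=-1)
        return self.q1(x), self.q2(x)


class TD3Agent:
    """One agent (a district agent or the central redistribution agent)."""
    def __init__(self, obs_dim, act_dim, joint_obs_dim, joint_act_dim):
        self.actor = Actor(obs_dim, act_dim)
        self.critic = TwinCritic(joint_obs_dim, joint_act_dim)
        self.actor_target = copy.deepcopy(self.actor)
        self.critic_target = copy.deepcopy(self.critic)
        self.actor_opt = torch.optim.Adam(self.actor.parameters(), lr=1e-4)
        self.critic_opt = torch.optim.Adam(self.critic.parameters(), lr=1e-3)

    def target_action(self, obs, noise_std=0.2, noise_clip=0.5):
        # TD3 target policy smoothing: clipped Gaussian noise on the target action.
        a = self.actor_target(obs)
        noise = (torch.randn_like(a) * noise_std).clamp(-noise_clip, noise_clip)
        return (a + noise).clamp(-1.0, 1.0)


# Example layout: N district agents plus one central redistribution agent,
# each with a centralised critic over the joint observation/action space.
N_DISTRICTS, OBS_DIM, ACT_DIM = 4, 12, 3
agents = [
    TD3Agent(
        OBS_DIM, ACT_DIM,
        joint_obs_dim=(N_DISTRICTS + 1) * OBS_DIM,
        joint_act_dim=(N_DISTRICTS + 1) * ACT_DIM,
    )
    for _ in range(N_DISTRICTS + 1)
]
```

In this kind of setup, each actor acts on local district information at execution time, while the twin critics condition on all agents' observations and actions during training, which is the standard MADDPG centralised-training, decentralised-execution pattern combined with TD3's clipped double-Q and target smoothing.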