"Superstition" in the network : Deep reinforcement learning plays deceptive games

Bontrager, Philip and Khalifa, Ahmed and Anderson, Damien and Stephenson, Matthew and Salge, Christoph and Togelius, Julian (2019) "Superstition" in the network: Deep reinforcement learning plays deceptive games. Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment, 15 (1). pp. 10-16. ISSN 2334-0924 (https://ojs.aaai.org/index.php/AIIDE/article/view/...)

Accepted Author Manuscript
License: Strathprints license 1.0


Abstract

Deep reinforcement learning has learned to play many games well, but failed on others. To better characterize the modes and reasons of failure of deep reinforcement learners, we test the widely used Advantage Actor-Critic (A2C) algorithm on four deceptive games, which are specially designed to provide challenges to game-playing agents. These games are implemented in the General Video Game AI framework, which allows us to compare the behavior of reinforcement learning-based agents with planning agents based on tree search. We find that several of these games reliably deceive deep reinforcement learners, and that the resulting behavior highlights the shortcomings of the learning algorithm. The particular ways in which agents fail differ from how planning-based agents fail, further illuminating the character of these algorithms. We propose an initial typology of deceptions which could help us better understand pitfalls and failure modes of (deep) reinforcement learning.
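The abstract describes training a deep reinforcement learning agent with the A2C algorithm on games exposed through the GVGAI framework. As a rough sketch only (assuming the stable-baselines3 and Gymnasium libraries, which are not named in the paper), a minimal A2C training loop of this kind could look as follows; the environment id is a stand-in, since the paper's deceptive games are served through the GVGAI interface rather than a standard Gym game.

    # Hedged sketch: training an A2C agent on a Gym-style game environment.
    # Assumptions (not from the paper): stable-baselines3 provides the A2C
    # implementation, and "CartPole-v1" stands in for a GVGAI deceptive game.
    import gymnasium as gym
    from stable_baselines3 import A2C

    def train_a2c(env_id: str = "CartPole-v1", timesteps: int = 100_000) -> A2C:
        env = gym.make(env_id)
        # The paper's agents observe game screens, which would call for a CNN
        # policy; "MlpPolicy" keeps this sketch runnable on a vector-state game.
        model = A2C("MlpPolicy", env, verbose=1)
        model.learn(total_timesteps=timesteps)
        return model

    if __name__ == "__main__":
        agent = train_a2c()
        agent.save("a2c_sketch")

In the paper's setting, the trained policy's behavior on the deceptive games would then be compared against tree-search planning agents; the sketch above covers only the learning side.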

ORCID iDs

Bontrager, Philip; Khalifa, Ahmed; Anderson, Damien (ORCID: https://orcid.org/0000-0002-8554-3068); Stephenson, Matthew; Salge, Christoph; Togelius, Julian