Deep reinforcement learning control of hand-eye coordination with a software retina

Boyd, Lewis Campbell, Popovic, Vanja and Siebert, Jan Paul (2020) Deep reinforcement learning control of hand-eye coordination with a software retina. In: 2020 International Joint Conference on Neural Networks (IJCNN), Proceedings of the International Joint Conference on Neural Networks. IEEE, GBR. ISBN 9781728169262. (https://doi.org/10.1109/IJCNN48605.2020.9207332)

Abstract

Deep Reinforcement Learning (DRL) has gained much attention for solving robotic hand-eye coordination tasks from raw pixel values. Despite promising results, training agents on images is hardware intensive, often requiring millions of training steps to converge, which incurs long training times and increases the risk of wear and tear on the robot. To speed up training, images are often cropped and downscaled, which shrinks the field of view and discards valuable high-frequency data. In this paper, we propose training the vision system using supervised learning prior to training robotic actuation using Deep Deterministic Policy Gradient (DDPG). The vision system uses a software retina, based on the mammalian retino-cortical transform, to preprocess full-size images: image data are compressed while the full field of view and the high-frequency visual information around the fixation point are preserved, before a Deep Convolutional Neural Network (DCNN) extracts visual state information. Preprocessing the environment with the vision system reduces the agent's sample complexity and speeds up network updates, leading to significantly faster training with less image-data loss. Our method is used to train a DRL system to control a real Baxter robot's arm, processing full-size images captured by an in-wrist camera to locate an object on a table and centre the camera over it by actuating the arm.
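To make the retina preprocessing concrete, the sketch below approximates the retino-cortical transform with nearest-pixel log-polar sampling in Python/NumPy. It is a minimal illustration under stated assumptions, not the authors' implementation: the paper's software retina uses a denser receptive-field tessellation of the visual field, and the function name retina_sample and its parameters (n_rings, n_sectors, r_min) are hypothetical.

    import numpy as np

    def retina_sample(image, fixation, n_rings=64, n_sectors=128, r_min=2.0):
        """Sample an image on a log-polar grid centred on the fixation point.

        Ring spacing grows exponentially with eccentricity, so the fovea
        keeps near pixel resolution while the periphery is heavily
        compressed, preserving the full field of view in a small output.
        (Hypothetical sketch: the actual software retina uses overlapping
        receptive fields rather than nearest-pixel sampling.)
        """
        h, w = image.shape[:2]
        fy, fx = fixation
        # Largest eccentricity that still falls inside the image.
        r_max = max(fy, h - fy, fx, w - fx)
        # Exponentially spaced ring radii (the log-polar radial axis).
        radii = r_min * (r_max / r_min) ** (np.arange(n_rings) / (n_rings - 1))
        thetas = np.linspace(0.0, 2.0 * np.pi, n_sectors, endpoint=False)
        rr, tt = np.meshgrid(radii, thetas, indexing="ij")
        ys = np.clip(np.round(fy + rr * np.sin(tt)).astype(int), 0, h - 1)
        xs = np.clip(np.round(fx + rr * np.cos(tt)).astype(int), 0, w - 1)
        # "Cortical image" of shape (n_rings, n_sectors[, channels]),
        # fed to the DCNN in place of the full-resolution frame.
        return image[ys, xs]

The resulting rings-by-sectors "cortical image" is far smaller than the raw frame yet retains the full field of view, which is what allows the DCNN feature extraction, and hence the DDPG updates, to run on much less data per step.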