
The Strathprints institutional repository is a digital archive of University of Strathclyde research outputs.


Towards emerging complex cooperative behaviours in flatland: rewarding over mimicking

Yannakakis, G. and Levine, J.M. and Hallam, J. (2007) Towards emerging complex cooperative behaviours in flatland: rewarding over mimicking. IEEE Transactions on Evolutionary Computation, 11 (3). pp. 382-396. ISSN 1089-778X

Full text not available in this repository.

Abstract

This paper compares supervised and unsupervised learning mechanisms for the emergence of cooperative multiagent spatial coordination using a top-down approach. By observing the global performance of a group of homogeneous agents, each supported only by nonglobal knowledge of its environment, we attempt to extract information about the minimum size of the agent neurocontroller and the type of learning mechanism that together generate high-performing and robust behaviors with minimal computational effort. Accordingly, a methodology for obtaining controllers of minimal size is introduced, and a comparative study of supervised and unsupervised learning mechanisms for generating successful collective behaviors is presented. We have developed a prototype simulated world for our studies; this case study is primarily a computer-games-inspired world, but its main features are also biologically plausible. The two specific tasks in which the agents are tested are the competing goals of obstacle avoidance and target achievement. We demonstrate that cooperative behavior among agents, supported only by limited communication, appears necessary for an efficient solution to the problem, and that learning by rewarding the behavior of agent groups is a more efficient and computationally preferable generic approach than supervised learning in such complex multiagent worlds.
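The paper's flatland environment and neurocontrollers are not reproduced here, but the reward-over-mimicking idea can be illustrated with a minimal sketch: a single tiny controller shared by all homogeneous agents, whose weights are improved by random hill-climbing on a *group* reward rather than by imitating per-agent target actions. Everything below (the 1D target-reaching task, the `controller`/`episode_reward`/`hill_climb` names, the reward definition) is a hypothetical toy setup, not the authors' actual method.

```python
import random

def controller(weights, obs):
    # Tiny single-layer neurocontroller shared by all agents:
    # maps an observation vector to a discrete action in {-1, 0, +1}.
    s = sum(w * x for w, x in zip(weights, obs))
    return 1 if s > 0.1 else (-1 if s < -0.1 else 0)

def episode_reward(weights, n_agents=5, steps=20):
    # Homogeneous agents start at positions 1..n_agents on a line and
    # should reach the target at 0. Each observes only its own (local)
    # state: (-position, bias). The reward is a GROUP measure: how many
    # agents finish within 0.5 of the target.
    positions = [float(i + 1) for i in range(n_agents)]
    for _ in range(steps):
        positions = [p + controller(weights, (-p, 1.0)) for p in positions]
    return sum(1 for p in positions if abs(p) <= 0.5)

def hill_climb(n_weights=2, iters=200, seed=0):
    # Unsupervised, reward-driven search: perturb the shared weights and
    # keep the candidate whenever the group reward does not decrease.
    rng = random.Random(seed)
    best = [rng.uniform(-1, 1) for _ in range(n_weights)]
    best_r = episode_reward(best)
    for _ in range(iters):
        cand = [w + rng.gauss(0, 0.3) for w in best]
        r = episode_reward(cand)
        if r >= best_r:
            best, best_r = cand, r
    return best, best_r
```

A supervised ("mimicking") baseline would instead fit the controller to a teacher's per-step actions; the point of the sketch is only that the reward-based variant needs nothing but the scalar group score, which is the sense in which rewarding group behavior is the more generic signal.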