
Discover Open Access research at Strathprints

It's International Open Access Week, 24-30 October 2016. This year's theme, "Open in Action", is all about taking meaningful steps towards opening up research and scholarship. The Strathprints institutional repository is a digital archive of University of Strathclyde research outputs. Explore recent world-leading Open Access research by University of Strathclyde researchers and see how they are putting "Open in Action".


Image: h_pampel, CC-BY

Towards emerging complex cooperative behaviours in flatland: rewarding over mimicking

Yannakakis, G. and Levine, J.M. and Hallam, J. (2007) Towards emerging complex cooperative behaviours in flatland: rewarding over mimicking. IEEE Transactions on Evolutionary Computation, 11 (3). pp. 382-396. ISSN 1089-778X

Full text not available in this repository. (Request a copy from the Strathclyde author)


This paper compares supervised and unsupervised learning mechanisms for the emergence of cooperative multiagent spatial coordination using a top-down approach. By observing the global performance of a group of homogeneous agents, each supported only by nonglobal knowledge of its environment, we attempt to extract information about the minimum size of the agent neurocontroller and the type of learning mechanism that collectively generate high-performing and robust behaviors with minimal computational effort. Consequently, a methodology for obtaining controllers of minimal size is introduced and a comparative study of supervised and unsupervised learning mechanisms for the generation of successful collective behaviors is presented. We have developed a prototype simulated world for our studies: a case study primarily inspired by computer games, but whose main features are also biologically plausible. The two specific tasks on which the agents are tested are the competing objectives of obstacle avoidance and target achievement. We demonstrate that cooperative behavior among agents, supported only by limited communication, appears to be necessary for solving the problem efficiently, and that learning by rewarding the behavior of agent groups constitutes a more efficient and computationally preferable generic approach than supervised learning in such complex multiagent worlds.
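
To make the abstract's central idea concrete, the Python sketch below illustrates what "rewarding over mimicking" can look like: all agents share one small neurocontroller, each agent senses only local offsets to the target, the nearest obstacle, and the nearest teammate, and the shared weights are optimised against a group-level reward rather than trained to copy a teacher's moves. This is not the authors' code; the grid size, network shape, reward terms, and the simple (1+lambda) evolution strategy are illustrative assumptions, not details taken from the paper.

# A minimal sketch (not the authors' code) of the reward-based approach the
# abstract describes: homogeneous agents share one small neurocontroller,
# sense only local information, and the controller's weights are optimised
# against a group-level reward rather than trained to mimic a teacher.
# World size, network shape, and the (1+lambda) evolution strategy used here
# are illustrative assumptions, not details taken from the paper.
import numpy as np

GRID, N_AGENTS, N_OBSTACLES, STEPS = 12, 4, 8, 40
N_IN, N_HID, N_OUT = 6, 4, 2            # local sensors -> hidden -> (dx, dy)
N_W = N_IN * N_HID + N_HID * N_OUT      # weights of the shared controller

rng = np.random.default_rng(0)

def act(weights, obs):
    """Shared feedforward neurocontroller: local observation -> move."""
    w1 = weights[:N_IN * N_HID].reshape(N_IN, N_HID)
    w2 = weights[N_IN * N_HID:].reshape(N_HID, N_OUT)
    hidden = np.tanh(obs @ w1)
    return np.sign(np.tanh(hidden @ w2))  # each move component in {-1, 0, +1}

def episode(weights, seed):
    """Run one episode; return the group-level reward (the 'fitness')."""
    r = np.random.default_rng(seed)
    agents = r.integers(0, GRID, (N_AGENTS, 2)).astype(float)
    obstacles = r.integers(0, GRID, (N_OBSTACLES, 2)).astype(float)
    target = r.integers(0, GRID, 2).astype(float)
    reward = 0.0
    for _ in range(STEPS):
        for i in range(N_AGENTS):
            # Nonglobal knowledge: offsets to the target, the nearest obstacle,
            # and the nearest other agent (the "limited communication").
            d_obs = obstacles - agents[i]
            near_obs = d_obs[np.argmin(np.abs(d_obs).sum(1))]
            d_mate = np.delete(agents, i, 0) - agents[i]
            near_mate = d_mate[np.argmin(np.abs(d_mate).sum(1))]
            obs = np.concatenate([target - agents[i], near_obs, near_mate]) / GRID
            agents[i] = np.clip(agents[i] + act(weights, obs), 0, GRID - 1)
            if np.abs(obstacles - agents[i]).sum(1).min() < 1:
                reward -= 1.0               # obstacle-avoidance term
            if np.abs(target - agents[i]).sum() < 1:
                reward += 5.0               # target-achievement term
    return reward

def fitness(weights):
    # Average the group reward over a few randomised worlds.
    return np.mean([episode(weights, s) for s in range(3)])

# (1+lambda) evolution strategy on the shared weight vector: the behaviour of
# the whole group is rewarded instead of supervising individual moves.
best = rng.normal(0, 0.5, N_W)
best_fit = fitness(best)
for gen in range(50):
    children = best + rng.normal(0, 0.2, (5, N_W))
    fits = [fitness(c) for c in children]
    if max(fits) > best_fit:
        best, best_fit = children[int(np.argmax(fits))], max(fits)
    if gen % 10 == 0:
        print(f"generation {gen:2d}  group reward {best_fit:6.1f}")

A supervised ("mimicking") variant would instead fit the same controller to logged state-action pairs from a hand-coded teacher; the paper's comparison is between that kind of training signal and the group-level reward signal sketched above.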