Advances in optimisation algorithms and techniques for deep learning

Nwankpa, Chigozie Enyinna (2020) Advances in optimisation algorithms and techniques for deep learning. Advances in Science, Technology and Engineering Systems Journal, 5(5), 563-577. ISSN 2415-6698 (https://doi.org/10.25046/aj050570)


Abstract

In the last decade, deep learning (DL) has achieved excellent performance on a variety of problems, including speech recognition, object recognition and detection, and natural language processing (NLP), among many others. A common challenge across these applications is obtaining suitable parameters during the training of deep neural networks (DNN). These parameters are typically obtained by optimisation techniques, which have been studied extensively, and this research has produced state-of-the-art (SOTA) results on speed and memory improvements for DNN architectures. However, SOTA optimisers remain an active research area, and no compilation of the existing optimisers has been reported in the literature. This paper provides an overview of recent advances in the optimisation algorithms and techniques used in DNNs, highlighting the current SOTA optimisers, the improvements made to these algorithms and techniques, and the trends in the development of optimisers used to train DL-based models. The articles summarised here were obtained by searching the Scopus database for optimisers in DL. To the best of our knowledge, there is no comprehensive compilation of the optimisation algorithms and techniques developed and used in DL research and applications to date, and this paper provides such a summary.
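To illustrate the kind of parameter-update rule the surveyed optimisers build on, below is a minimal sketch of vanilla stochastic gradient descent (SGD) in NumPy. This example is not taken from the paper; the function name, gradient oracle, and toy objective are hypothetical and chosen only for illustration.

```python
import numpy as np

def sgd_step(params, grads, lr=0.01):
    """One vanilla SGD update: theta <- theta - lr * grad.

    params, grads: lists of NumPy arrays of matching shapes.
    lr: the learning rate, the hyperparameter that later optimisers
        (e.g. Adam) adapt per parameter rather than fixing globally.
    """
    return [p - lr * g for p, g in zip(params, grads)]

# Toy usage: minimise f(w) = ||w||^2, whose gradient is 2w.
w = [np.array([1.0, -2.0])]
for _ in range(100):
    grads = [2.0 * w[0]]
    w = sgd_step(w, grads, lr=0.1)
print(w[0])  # converges towards [0, 0]
```

Most of the optimisers covered in the survey can be read as refinements of this update, adding momentum terms, per-parameter adaptive learning rates, or both.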