Terminal multiple surface sliding guidance for planetary landing: Development, tuning and optimization via reinforcement learning

Furfaro, Roberto and Wibben, Daniel and Gaudet, Brian and Simo, Jules (2015) Terminal multiple surface sliding guidance for planetary landing: Development, tuning and optimization via reinforcement learning. Journal of the Astronautical Sciences. ISSN 0021-9142 (https://doi.org/10.1007/s40295-015-0045-1)

Accepted Author Manuscript (PDF)

Abstract

Achieving pinpoint landing accuracy in future space missions to planetary bodies such as the Moon or Mars presents many challenges, including requirements for higher accuracy and a greater degree of flexibility. These new challenges may require the development of a new class of guidance algorithms. In this paper, a non-linear guidance algorithm for planetary landing is proposed and analyzed. Based on Higher-Order Sliding Control (HOSC) theory, the Multiple Sliding Surface Guidance (MSSG) algorithm has been specifically designed to exploit the ability of the system to reach multiple sliding surfaces in finite time. As a result, a guidance law is devised that is both globally stable and robust against unknown but bounded perturbations. The proposed MSSG does not require any off-line trajectory generation; instead, the acceleration command is generated directly as a function of the current and final (target) states. However, initial analysis has shown that the performance of MSSG depends critically on the choice of guidance gains. MSSG-guided trajectories have been compared against an open-loop fuel-efficient solution to investigate the relationship between MSSG fuel performance and the selection of the guidance parameters. A full study has been carried out to investigate and tune the MSSG parameters using reinforcement learning in order to optimize the performance of the algorithm in powered descent scenarios. Results show that the MSSG algorithm can indeed generate closed-loop trajectories that come very close to the optimal solution in terms of fuel usage. A full comparison of the trajectories is included, as well as a Monte Carlo analysis examining the guidance errors of the MSSG algorithm under perturbed conditions using the optimized set of parameters.
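
The abstract describes the guidance command as a feedback acceleration computed directly from the current and target states, with performance governed by tunable gains. The paper's MSSG law itself is not reproduced here; as a rough illustration of that general structure only, the Python sketch below runs a generic sliding-surface-style feedback law in a 1-D closed-loop lunar descent. The function name, the gains k1 and k2, and all numerical values are illustrative assumptions, not the MSSG formulation or the paper's results.

```python
def guidance_accel(r, v, r_t, v_t, t_go, k1=1.5, k2=4.0, g=1.62):
    """Generic feedback acceleration command for a 1-D descent.

    Computed directly from the current state (r, v), the target state
    (r_t, v_t), and the time-to-go, with two tunable gains (k1, k2)
    standing in for the guidance parameters discussed in the abstract.
    NOTE: this is an illustrative sliding-surface-style structure,
    not the MSSG law from the paper.
    """
    tau = max(t_go, 0.1)          # guard the 1/t_go terms near touchdown
    e_r = r - r_t                 # position error
    e_v = v - v_t                 # velocity error
    s = e_v + k1 * e_r / tau      # combined, surface-like error term
    return -k2 * s / tau + g      # drive s -> 0 and compensate gravity


# Closed-loop simulation of a simple 1-D descent (illustrative numbers).
g_moon = 1.62                     # lunar gravity [m/s^2]
dt, t_go = 0.1, 60.0              # step [s], nominal time-to-go [s]
r, v = 1500.0, -30.0              # altitude [m], vertical velocity [m/s]
dv_used = 0.0                     # integral of |a_cmd|: crude fuel proxy

while t_go > dt:
    a_cmd = guidance_accel(r, v, r_t=0.0, v_t=0.0, t_go=t_go)
    v += (a_cmd - g_moon) * dt    # net acceleration = thrust - gravity
    r += v * dt
    dv_used += abs(a_cmd) * dt
    t_go -= dt

print(f"altitude {r:.2f} m, speed {v:.3f} m/s, delta-v proxy {dv_used:.1f} m/s")
```

In the paper the analogous gains are tuned with reinforcement learning against a fuel-use objective; a simple stand-in for that step would be to search over (k1, k2) so as to minimize the delta-v proxy above while keeping the terminal altitude and velocity errors within soft-landing tolerances.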

ORCID iDs

Furfaro, Roberto; Wibben, Daniel; Gaudet, Brian; Simo, Jules (ORCID: https://orcid.org/0000-0002-1489-5920)