Enabling intelligent onboard guidance, navigation, and control using near-term flight hardware

Wilson, Callum and Riccardi, Annalisa (2021) Enabling intelligent onboard guidance, navigation, and control using near-term flight hardware. In: 72nd International Astronautical Congress, 2021-10-25 - 2021-10-29, Dubai World Trade Centre.

Accepted Author Manuscript
License: Strathprints license 1.0


Abstract

Future space missions will require various technological advancements to meet more stringent and challenging requirements. Next-generation guidance, navigation, and control systems must operate safely and autonomously in hazardous, uncertain environments. While the focus of these developments is often on flight software, spacecraft hardware also imposes computational limits on the algorithms that run onboard. Here we examine the feasibility of implementing intelligent control onboard a spacecraft. Intelligent control methods combine theories from automatic control, artificial intelligence, and operations research to derive control systems capable of handling substantial uncertainties, with clear benefits for operating in unknown space environments. However, most modern intelligent systems require computational power that is not available onboard spacecraft. Recent advancements in single-board computers have produced processors that are far lighter, consume less power, and are purpose-built for machine learning, making them suitable for spaceflight. In this study we implement a reinforcement-learning-based controller on NVIDIA Jetson Nano hardware and apply it to a powered descent guidance problem in a simulated Mars landing environment. The initial offline approach used to derive the controller has two steps. First, optimal trajectories and guidance laws are calculated under nominal environment conditions. These are then used to initialise a reinforcement learning agent, which learns a control policy that copes with environmental uncertainties. The control policy is parameterised as a neural network whose weights can also be updated online from real-time observations. Online updates use a novel method, the Extreme Q-Learning Machine, to tune the output weights of the neural network in operation.
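The first offline step, initialising the policy network from precomputed optimal trajectories, can be sketched as a simple supervised fit of a small network to optimal state-action pairs. This is a minimal illustrative sketch, not the authors' implementation: the network sizes, learning rate, and the plain-NumPy gradient descent are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical dimensions, chosen for illustration only
n_state, n_hidden, n_ctrl = 8, 32, 3
lr = 1e-2  # assumed learning rate

# Small two-layer tanh policy network
W1 = 0.1 * rng.standard_normal((n_hidden, n_state))
W2 = 0.1 * rng.standard_normal((n_ctrl, n_hidden))

def policy(s):
    """Control command for a single state vector."""
    return W2 @ np.tanh(W1 @ s)

def pretrain_step(S, U):
    """One gradient step fitting the policy to optimal (state, control) pairs.

    S: (N, n_state) states sampled from optimal trajectories
    U: (N, n_ctrl) corresponding optimal guidance commands
    Returns the mean-squared imitation error before the step.
    """
    global W1, W2
    H = np.tanh(S @ W1.T)              # hidden activations, (N, n_hidden)
    pred = H @ W2.T                    # predicted controls, (N, n_ctrl)
    err = pred - U
    grad_W2 = err.T @ H / len(S)       # gradient w.r.t. output weights
    grad_H = (err @ W2) * (1 - H**2)   # backprop through tanh
    grad_W1 = grad_H.T @ S / len(S)
    W2 -= lr * grad_W2
    W1 -= lr * grad_W1
    return float(np.mean(err**2))
```

After this initialisation the network would be handed to the reinforcement learning stage, which refines the policy under environmental uncertainties.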
We show that this control system can be deployed on hardware that is sufficiently light, in both mass and power consumption, to be used onboard spacecraft. This demonstrates the potential for intelligent controllers to run on flight-suitable hardware.
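The online stage tunes only the network's output weights from real-time observations. In the extreme-learning-machine family the hidden layer is fixed and random, so the output weights can be refit by ridge least-squares against temporal-difference targets. The sketch below shows that generic pattern only; it is not the paper's Extreme Q-Learning Machine, and every dimension and hyperparameter is an assumption made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions and hyperparameters (illustrative assumptions)
n_state, n_hidden, n_actions = 8, 64, 5
gamma = 0.99   # discount factor (assumed)
lam = 1e-3     # ridge regularisation (assumed)

# Fixed random hidden layer: only W_out is tuned online
W_in = rng.standard_normal((n_hidden, n_state))
b_in = rng.standard_normal(n_hidden)
W_out = np.zeros((n_actions, n_hidden))

def hidden(s):
    """Fixed random feature map of a state vector."""
    return np.tanh(W_in @ s + b_in)

def q_values(s):
    """Q-value estimates for all discrete actions in state s."""
    return W_out @ hidden(s)

def eqlm_style_update(states, actions, rewards, next_states, dones):
    """Refit the output weights by ridge least-squares on TD targets.

    Builds targets T from the Bellman backup for the taken actions,
    then solves (H^T H + lam I) W^T = H^T T for the output weights.
    """
    global W_out
    H = np.array([hidden(s) for s in states])   # (N, n_hidden)
    T = H @ W_out.T                             # current Q estimates
    for i, (a, r, s2, d) in enumerate(zip(actions, rewards, next_states, dones)):
        T[i, a] = r if d else r + gamma * np.max(q_values(s2))
    A = H.T @ H + lam * np.eye(n_hidden)
    W_out = np.linalg.solve(A, H.T @ T).T
```

Because the update reduces to one regularised linear solve per batch rather than iterative backpropagation through the whole network, this style of online tuning is comparatively cheap, which is what makes it plausible on low-power flight hardware such as the Jetson Nano.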