Natural language processing for explainable satellite scheduling

Powell, Cheyenne and Berquand, Audrey and Riccardi, Annalisa (2023) Natural language processing for explainable satellite scheduling. In: SPACEOPS 2023, 2023-03-06 - 2023-03-10.

Final Published Version
License: Creative Commons Attribution 4.0


Abstract

Facilitating the interaction between humans and Artificial Intelligence (AI) in automated systems is becoming central as the technology advances and sees wider adoption in practical applications. Mathematical programming scheduling techniques are a driving factor in assisting ground station operators, both on board the satellite for autonomous decision making and on the ground for supporting mid-term operations scheduling. When communication to ground is limited, scheduling algorithms require a level of autonomy and robustness able to respond to issues arising on board the satellite in the absence of a ground operator. Moreover, explanations must be generated alongside schedules for the operator to build and gain trust in the autonomous system. Explainable Artificial Intelligence (XAI) is an emerging topic in AI; explanations are a necessary layer for effectively deploying trustworthy autonomous systems in practical applications. Operators may raise queries such as why, what, when and how the scheduled actions were selected autonomously on board for a specific time. Explanations are provided based on the definition of the problem and its respective constraints. Autonomous decision-making algorithms can be explained in several ways: Computational Argumentation (CA) and Natural Language Processing (NLP) are two techniques, belonging to the domains of formal logic and machine learning respectively, that can be used to generate explanations and communicate them back to the user as textual output. An Argumentation Framework (AF) was created to assist in answering questions raised by the end user. At its lower level, the AF encodes all the necessary information on when conflicts may occur between actions, as well as the environmental conditions inhibiting the occurrence of actions within a schedule.
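The kind of conflict bookkeeping described above can be sketched as a minimal data structure. All names below are hypothetical illustrations, not the authors' implementation: the store records which actions attack each other and which environmental conditions inhibit an action.

```python
from dataclasses import dataclass, field

@dataclass
class ConflictStore:
    # Hypothetical sketch of an AF-style lower level: action -> conflicting
    # actions, and action -> inhibiting environmental conditions.
    conflicts: dict = field(default_factory=dict)
    inhibitors: dict = field(default_factory=dict)

    def add_conflict(self, a: str, b: str) -> None:
        # Conflicts are symmetric: if a clashes with b, b clashes with a.
        self.conflicts.setdefault(a, set()).add(b)
        self.conflicts.setdefault(b, set()).add(a)

    def add_inhibitor(self, action: str, condition: str) -> None:
        self.inhibitors.setdefault(action, set()).add(condition)

    def arguments_against(self, action: str, active_conditions: set) -> list:
        # Collect every argument against this action: scheduled conflicts
        # first, then currently active environmental inhibitors.
        against = [f"conflicts with {b}" for b in self.conflicts.get(action, set())]
        against += [f"inhibited by {c}" for c in self.inhibitors.get(action, set())
                    if c in active_conditions]
        return against

store = ConflictStore()
store.add_conflict("downlink", "imaging")
store.add_inhibitor("imaging", "eclipse")
print(store.arguments_against("imaging", {"eclipse"}))
# → ['conflicts with downlink', 'inhibited by eclipse']
```

Such a store would supply the raw arguments that the NLP layer later turns into a textual explanation.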
This database of information is used to construct arguments in support or negation of user-submitted queries, or to provide an explanation of the complete derived schedule. NLP is then used as a bridge to communicate the relevant arguments to the user. The queries received revolved around three main areas: the subject, the time of interest and the intent. Following interpretation, the queries were mapped to the AF database, returning a list of conflicts, agreements and neutral outcomes. GPT-3, the NLP method chosen for this architecture, was then used to deduce the answer to the query and justify it with a textual explanation.
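The interpretation step can be sketched as follows. The names, patterns and toy database here are illustrative assumptions, not the paper's implementation, and the final GPT-3 verbalisation step is omitted: a query is split into subject, time of interest and intent, then mapped to an AF-style database of outcomes.

```python
import re

# Toy AF-derived outcome database (hypothetical entries for illustration).
AF_DATABASE = {
    ("imaging", "10:00"): "conflict",    # e.g. clashes with a downlink window
    ("downlink", "10:00"): "agreement",  # scheduled as requested
}

KNOWN_SUBJECTS = ("imaging", "downlink")

def interpret(query: str) -> dict:
    """Extract the three query areas: subject, time of interest, intent."""
    m = re.search(r"\b(\d{1,2}:\d{2})\b", query)
    return {
        "subject": next((s for s in KNOWN_SUBJECTS if s in query.lower()), None),
        "time": m.group(1) if m else None,
        "intent": query.split()[0].lower().strip("?,"),  # why / what / when / how
    }

def map_to_af(parsed: dict) -> str:
    """Look up the outcome for (subject, time); unknown pairs are neutral."""
    return AF_DATABASE.get((parsed["subject"], parsed["time"]), "neutral")

parsed = interpret("Why was imaging not performed at 10:00?")
print(parsed, map_to_af(parsed))  # subject=imaging, intent=why → "conflict"
```

The returned outcome and its supporting arguments would then be passed to the language model, which verbalises the answer and its justification for the operator.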