Towards explainability of on-board satellite scheduling for end user interactions

Powell, Cheyenne and Riccardi, Annalisa (2021) Towards explainability of on-board satellite scheduling for end user interactions. In: 72nd International Astronautical Congress, 2021-10-25 - 2021-10-29, Dubai World Trade Centre. (https://iafastro.directory/iac/paper/id/63954/abst...)

Abstract

Satellite scheduling is required to automate routines and tasks prior to their execution on a given satellite or set of satellites. Various techniques and tools are used, with the optional incorporation of AI, depending on the schedule constraints and the resources available for memory allocation. Regardless of the technique and approach taken, most autonomous scheduling systems struggle to support interaction between the user and the system. This undermines trust in the system and can lead to manual handling of data, which wastes time and resources. Therefore, to prevent these situations and save costs, the user needs explanations of the decisions made autonomously by the system. An optimal scheduling approach was taken using Constraint Programming (CP) to allocate on-board tasks for a single satellite's schedule. A schedule was derived for short-term planning in which tasks were evaluated on duration, cost and resource requirements. These results were analysed for their feasibility and optimality, and a Computational Argumentation (CA) layer was developed to provide explanations of whether the scheduled tasks supported or conflicted with the temporal and/or resource constraints. To depict the stages and relationships of these internal arguments, an entity relationship graph was created containing the proposed schedule solutions, which were evaluated on their corresponding conflicts and agreements. Given the nature of these arguments and their respective constraints, a second argumentation approach was used to derive basic causalities, providing information on the reason for failure and its impact on the schedule. For end-user interaction, the design of an explanation layer was investigated that allows the user to select different parts of the proposed schedule and displays a basic textual description to assist and enhance the user's understanding. This approach will also give the user the possibility to propose changes to the solution and evaluate their feasibility and optimality, as well as deriving conflicts with the current schedule. This lays the groundwork for building more advanced explainability techniques for sophisticated and complex schedules.
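
To illustrate the kind of CP scheduling and explanation step the abstract describes, the sketch below builds a toy single-satellite short-term schedule and reports, in plain text, whether each candidate task supports or conflicts with a temporal (ground-station pass) and a resource (power) constraint. This is a minimal sketch only: the task data, horizon, power budget, pass window, and the choice of Google OR-Tools CP-SAT as the solver are all assumptions for illustration; the paper does not publish its model, data, or solver.

from ortools.sat.python import cp_model

HORIZON = 100          # short-term planning horizon, in time units (assumed)
POWER_CAPACITY = 10    # instantaneous on-board power budget (assumed)

# name: (duration, power demand, value) -- illustrative numbers only
TASKS = {
    "imaging":      (20, 6, 8),
    "downlink":     (15, 5, 6),
    "calibration":  (30, 7, 4),
    "thermal_test": (10, 12, 3),   # demands more power than is available
}
# Temporal constraint: downlink only during an assumed ground-station pass window.
DOWNLINK_WINDOW = (40, 60)

model = cp_model.CpModel()
starts, presences, intervals, demands = {}, {}, [], []
for name, (dur, power, _value) in TASKS.items():
    starts[name] = model.NewIntVar(0, HORIZON - dur, f"start_{name}")
    end = model.NewIntVar(0, HORIZON, f"end_{name}")
    presences[name] = model.NewBoolVar(f"scheduled_{name}")
    interval = model.NewOptionalIntervalVar(starts[name], dur, end,
                                            presences[name], f"iv_{name}")
    intervals.append(interval)
    demands.append(power)

# Resource constraint: concurrent power demand must stay within the budget.
model.AddCumulative(intervals, demands, POWER_CAPACITY)
# Temporal constraint: if downlink is scheduled, it must fit inside the pass window.
model.Add(starts["downlink"] >= DOWNLINK_WINDOW[0]).OnlyEnforceIf(presences["downlink"])
model.Add(starts["downlink"] + TASKS["downlink"][0]
          <= DOWNLINK_WINDOW[1]).OnlyEnforceIf(presences["downlink"])
# Objective: maximise the total value of the tasks that make it into the schedule.
model.Maximize(sum(presences[n] * TASKS[n][2] for n in TASKS))

solver = cp_model.CpSolver()
status = solver.Solve(model)

def explain(name):
    # Very coarse, text-based stand-in for the argumentation/explanation layer.
    dur, power, _ = TASKS[name]
    if solver.BooleanValue(presences[name]):
        return (f"{name}: scheduled at t={solver.Value(starts[name])}; "
                "supports the temporal and resource constraints.")
    reasons = []
    if power > POWER_CAPACITY:
        reasons.append("its power demand exceeds the on-board budget")
    if name == "downlink" and dur > DOWNLINK_WINDOW[1] - DOWNLINK_WINDOW[0]:
        reasons.append("it does not fit inside the ground-station pass window")
    if not reasons:
        reasons.append("it conflicts with higher-value tasks for time or power")
    return f"{name}: rejected because " + " and ".join(reasons) + "."

if status in (cp_model.OPTIMAL, cp_model.FEASIBLE):
    for name in TASKS:
        print(explain(name))

In this toy instance the first three tasks can be placed without overlapping demand, while the hypothetical thermal_test task is rejected and the explanation names the violated power constraint, mirroring the conflict/agreement reporting the CA layer is meant to provide.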