A logic-based explanation generation framework for classical and hybrid planning problems
Vasileiou, Stylianos Loukas and Yeoh, William and Son, Tran Cao and Kumar, Ashwin and Cashmore, Michael and Magazzeni, Daniele (2022) A logic-based explanation generation framework for classical and hybrid planning problems. Journal of Artificial Intelligence Research, 73. pp. 1473-1534. ISSN 1076-9757 (https://doi.org/10.1613/jair.1.13431)
Abstract
In human-aware planning systems, a planning agent might need to explain its plan to a human user when that plan appears to be non-feasible or sub-optimal. A popular approach, called model reconciliation, has been proposed as a way to bring the model of the human user closer to the agent's model. To do so, the agent provides an explanation that can be used to update the model of the human user such that the agent's plan is feasible or optimal to the human user. Existing approaches to this problem have been based on automated planning methods and have been limited to classical planning problems only. In this paper, we approach the model reconciliation problem from a different perspective, that of knowledge representation and reasoning, and demonstrate that our approach can be applied not only to classical planning problems but also to hybrid systems planning problems with durative actions and events/processes. In particular, we propose a logic-based framework for explanation generation, where, given a knowledge base KBa (of an agent) and a knowledge base KBh (of a human user), each encoding their knowledge of a planning problem, and such that KBa entails a query q (e.g., that a proposed plan of the agent is valid), the goal is to identify an explanation ε ⊆ KBa such that when it is used to update KBh, the updated KBh also entails q. More specifically, we make the following contributions in this paper: (1) We formally define the notion of logic-based explanations in the context of model reconciliation problems; (2) We introduce a number of cost functions that can be used to reflect preferences between explanations; (3) We present algorithms to compute explanations for both classical planning and hybrid systems planning problems; and (4) We empirically evaluate their performance on such problems.
Our empirical results demonstrate that, on classical planning problems, our approach is faster than the state of the art when the explanations are long or when the size of the knowledge base is small (e.g., the plans to be explained are short). They also demonstrate that our approach is efficient for hybrid systems planning problems. Finally, we evaluate the real-world efficacy of explanations generated by our algorithms through a controlled human user study, where we develop a proof-of-concept visualization system and use it as a medium for explanation communication.
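The problem statement in the abstract can be illustrated with a small sketch. The code below is not one of the paper's algorithms; it is a minimal brute-force illustration in propositional logic, under two simplifying assumptions labeled here: the model update is treated as plain set union (the paper's update operation is more general), and entailment is checked by enumerating truth assignments. All function names are illustrative.

```python
from itertools import combinations, product

def entails(kb, query, atoms):
    """KB |= query iff every truth assignment satisfying all of KB
    also satisfies query (brute-force propositional check)."""
    for values in product([False, True], repeat=len(atoms)):
        model = dict(zip(atoms, values))
        if all(f(model) for f in kb) and not query(model):
            return False
    return True

def explanation(kb_a, kb_h, query, atoms):
    """Smallest subset e of kb_a such that kb_h updated with e
    (here: set union) entails the query, mirroring the abstract's
    definition of an explanation ε ⊆ KBa."""
    for k in range(len(kb_a) + 1):
        for subset in combinations(kb_a, k):
            if entails(kb_h | set(subset), query, atoms):
                return set(subset)
    return None  # no explanation exists (KBa itself does not entail q)

# Toy example: the agent knows p and p -> q; the human knows nothing.
# To make the human entail q, both formulas must be communicated.
f1 = lambda m: m['p']                   # p
f2 = lambda m: (not m['p']) or m['q']   # p -> q
q = lambda m: m['q']                    # query q
print(len(explanation({f1, f2}, set(), q, ['p', 'q'])))  # → 2
```

A cost function over explanations, as in contribution (2) of the abstract, would replace the cardinality-ordered search above with a search ordered by that cost.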
ORCID iDs
Vasileiou, Stylianos Loukas; Yeoh, William; Son, Tran Cao; Kumar, Ashwin; Cashmore, Michael (ORCID: https://orcid.org/0000-0002-8334-4348); Magazzeni, Daniele
Item type: Article
ID code: 81642
Dates: Published: 25 April 2022; Accepted: 1 April 2022; Submitted: 31 October 2021
Subjects: Science > Mathematics > Electronic computers. Computer science
Department: Faculty of Science > Computer and Information Sciences
Depositing user: Pure Administrator
Date deposited: 29 Jul 2022 15:44
Last modified: 19 Sep 2024 03:36
URI: https://strathprints.strath.ac.uk/id/eprint/81642