TY - JOUR
T1 - A Logic-Based Explanation Generation Framework for Classical and Hybrid Planning Problems
AU - Vasileiou, Stylianos Loukas
AU - Yeoh, William
AU - Son, Tran Cao
AU - Kumar, Ashwin
AU - Cashmore, Michael
AU - Magazzeni, Daniele
N1 - Funding Information:
We thank the anonymous reviewers, whose suggestions improved the quality of our paper. Stylianos Loukas Vasileiou, William Yeoh, and Ashwin Kumar are partially supported by the National Science Foundation (NSF) under award 1812619. Tran Cao Son is partially supported by NSF under awards 1757207, 1812628, and 1914635. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the sponsoring organizations, agencies, or the United States government.
Publisher Copyright:
© 2022 AI Access Foundation. All rights reserved.
PY - 2022
Y1 - 2022
N2 - In human-aware planning systems, a planning agent might need to explain its plan to a human user when that plan appears to be non-feasible or sub-optimal. A popular approach, called model reconciliation, has been proposed as a way to bring the model of the human user closer to the agent's model. To do so, the agent provides an explanation that can be used to update the human's model such that the agent's plan is feasible or optimal from the human user's perspective. Existing approaches to solve this problem have been based on automated planning methods and have been limited to classical planning problems only. In this paper, we approach the model reconciliation problem from a different perspective, that of knowledge representation and reasoning, and demonstrate that our approach can be applied not only to classical planning problems but also to hybrid systems planning problems with durative actions and events/processes. In particular, we propose a logic-based framework for explanation generation, where, given a knowledge base KBa (of an agent) and a knowledge base KBh (of a human user), each encoding their knowledge of a planning problem, and given that KBa entails a query q (e.g., that a proposed plan of the agent is valid), the goal is to identify an explanation ϵ ⊂ KBa such that, when it is used to update KBh, the updated KBh also entails q. More specifically, we make the following contributions in this paper: (1) We formally define the notion of logic-based explanations in the context of model reconciliation problems; (2) We introduce a number of cost functions that can be used to reflect preferences between explanations; (3) We present algorithms to compute explanations for both classical planning and hybrid systems planning problems; and (4) We empirically evaluate their performance on such problems. Our empirical results demonstrate that, on classical planning problems, our approach is faster than the state of the art when the explanations are long or when the size of the knowledge base is small (e.g., when the plans to be explained are short). They also demonstrate that our approach is efficient for hybrid systems planning problems. Finally, we evaluate the real-world efficacy of explanations generated by our algorithms through a controlled human user study, where we develop a proof-of-concept visualization system and use it as a medium for explanation communication.
AB - In human-aware planning systems, a planning agent might need to explain its plan to a human user when that plan appears to be non-feasible or sub-optimal. A popular approach, called model reconciliation, has been proposed as a way to bring the model of the human user closer to the agent's model. To do so, the agent provides an explanation that can be used to update the human's model such that the agent's plan is feasible or optimal from the human user's perspective. Existing approaches to solve this problem have been based on automated planning methods and have been limited to classical planning problems only. In this paper, we approach the model reconciliation problem from a different perspective, that of knowledge representation and reasoning, and demonstrate that our approach can be applied not only to classical planning problems but also to hybrid systems planning problems with durative actions and events/processes. In particular, we propose a logic-based framework for explanation generation, where, given a knowledge base KBa (of an agent) and a knowledge base KBh (of a human user), each encoding their knowledge of a planning problem, and given that KBa entails a query q (e.g., that a proposed plan of the agent is valid), the goal is to identify an explanation ϵ ⊂ KBa such that, when it is used to update KBh, the updated KBh also entails q. More specifically, we make the following contributions in this paper: (1) We formally define the notion of logic-based explanations in the context of model reconciliation problems; (2) We introduce a number of cost functions that can be used to reflect preferences between explanations; (3) We present algorithms to compute explanations for both classical planning and hybrid systems planning problems; and (4) We empirically evaluate their performance on such problems. Our empirical results demonstrate that, on classical planning problems, our approach is faster than the state of the art when the explanations are long or when the size of the knowledge base is small (e.g., when the plans to be explained are short). They also demonstrate that our approach is efficient for hybrid systems planning problems. Finally, we evaluate the real-world efficacy of explanations generated by our algorithms through a controlled human user study, where we develop a proof-of-concept visualization system and use it as a medium for explanation communication.
UR - http://www.scopus.com/inward/record.url?scp=85129591478&partnerID=8YFLogxK
U2 - 10.1613/jair.1.13431
DO - 10.1613/jair.1.13431
M3 - Article
AN - SCOPUS:85129591478
SN - 1076-9757
VL - 73
SP - 1473
EP - 1534
JO - Journal of Artificial Intelligence Research
JF - Journal of Artificial Intelligence Research
ER -