TY - JOUR
T1 - Contrastive explanations of plans through model restrictions
AU - Krarup, Benjamin
AU - Krivic, Senka
AU - Magazzeni, Daniele
AU - Long, Derek
AU - Cashmore, Michael
AU - Smith, David E.
N1 - Funding Information:
This work was partially supported by EPSRC grant EP/R033722/1 for the project Trust in Human-Machine Partnership (THuMP) and Air Force Office of Scientific Research award number FA9550-18-1-0245.
Publisher Copyright:
© 2021 AI Access Foundation. All rights reserved.
PY - 2021/10/27
Y1 - 2021/10/27
N2 - In automated planning, the need for explanations arises when there is a mismatch between a proposed plan and the user's expectation. We frame Explainable AI Planning as an iterative plan exploration process, in which the user asks a succession of contrastive questions that lead to the generation and solution of hypothetical planning problems that are restrictions of the original problem. The object of the exploration is for the user to understand the constraints that govern the original plan and, ultimately, to arrive at a satisfactory plan. We present the results of a user study that demonstrates that when users ask questions about plans, those questions are usually contrastive, i.e. “why A rather than B?”. We use the data from this study to construct a taxonomy of user questions that often arise during plan exploration. Our approach to iterative plan exploration is a process of successive model restriction. Each contrastive user question imposes a set of constraints on the planning problem, leading to the construction of a new hypothetical planning problem as a restriction of the original. Solving this restricted problem results in a plan that can be compared with the original plan, admitting a contrastive explanation. We formally define model-based compilations in PDDL2.1 for each type of constraint derived from a contrastive user question in the taxonomy, and empirically evaluate the compilations in terms of computational complexity. The compilations were implemented as part of an explanation framework supporting iterative model restriction. We demonstrate its benefits in a second user study.
UR - http://www.scopus.com/inward/record.url?scp=85119508483&partnerID=8YFLogxK
U2 - 10.1613/JAIR.1.12813
DO - 10.1613/JAIR.1.12813
M3 - Article
AN - SCOPUS:85119508483
SN - 1076-9757
VL - 72
SP - 533
EP - 612
JO - Journal of Artificial Intelligence Research
JF - Journal of Artificial Intelligence Research
ER -