Explaining Image Classifiers

Hana Chockler, Joseph Y. Halpern

Research output: Chapter in Book/Report/Conference proceeding › Conference paper › peer-review


Abstract

We focus on explaining image classifiers, taking the work of Mothilal et al. (2021) (MMTS) as our point of departure. We observe that, although MMTS claim to be using the definition of explanation proposed by Halpern (2016), they do not quite do so. Roughly speaking, Halpern’s definition has a necessity clause and a sufficiency clause. MMTS replace the necessity clause by a requirement that, as we show, implies it. Halpern’s definition also allows agents to restrict the set of options considered. While these differences may seem minor, as we show, they can have a nontrivial impact on explanations. We also show that, essentially without change, Halpern’s definition can handle two issues that have proved difficult for other approaches: explanations of absence (when, for example, an image classifier for tumors outputs “no tumor”) and explanations of rare events (such as tumors).
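
To make the two clauses concrete, here is a minimal Python sketch, not the paper's formalism: it uses a toy binary-pixel "image" and a hypothetical classifier f, checks sufficiency by brute force (fixing a candidate set of pixels forces f's output however the remaining pixels are filled in), and checks the MMTS-style requirement that some alternative assignment to those pixels can change the output, which, as the abstract notes, implies the necessity clause. All names here (f, is_sufficient, mmts_style_necessity) and the toy classifier are illustrative assumptions.

```python
from itertools import product

N = 4  # number of pixels in the toy image

def f(img):
    # Toy classifier (an assumption for illustration):
    # label 1 iff pixels 0 and 1 are both lit.
    return 1 if img[0] == 1 and img[1] == 1 else 0

def is_sufficient(fixed, label):
    """Sufficiency: fixing the pixels in `fixed` (index -> value) forces
    f to output `label`, however the remaining pixels are filled in."""
    free = [i for i in range(N) if i not in fixed]
    for vals in product([0, 1], repeat=len(free)):
        img = [0] * N
        for i, v in fixed.items():
            img[i] = v
        for i, v in zip(free, vals):
            img[i] = v
        if f(tuple(img)) != label:
            return False
    return True

def mmts_style_necessity(fixed, label):
    """MMTS-style requirement: some alternative assignment to the fixed
    pixels admits a completion on which f's output differs from `label`.
    This implies (but is stronger than) Halpern's necessity clause."""
    idxs = list(fixed)
    free = [i for i in range(N) if i not in fixed]
    for alt in product([0, 1], repeat=len(idxs)):
        if list(alt) == [fixed[i] for i in idxs]:
            continue  # skip the original assignment
        for vals in product([0, 1], repeat=len(free)):
            img = [0] * N
            for i, v in zip(idxs, alt):
                img[i] = v
            for i, v in zip(free, vals):
                img[i] = v
            if f(tuple(img)) != label:
                return True
    return False

candidate = {0: 1, 1: 1}                    # fix pixels 0 and 1 to "lit"
print(is_sufficient(candidate, 1))          # True: forces label 1
print(mmts_style_necessity(candidate, 1))   # True: flipping them can flip f
```

On this toy classifier, the pixel set {0, 1} passes both checks, while dropping either pixel breaks sufficiency: fixing pixel 0 alone leaves completions (pixel 1 unlit) on which f outputs 0.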
Original language: English
Title of host publication: 21st International Conference on Principles of Knowledge Representation and Reasoning (KR'2024)
Publication status: Accepted/In press - Jul 2024
