Ethics & Coffee with Sherri Conklin: "Blame-worthy AI: Explainability and the Responsibility Gap"
Sherri Conklin will present her paper "Blame-worthy AI: Explainability and the Responsibility Gap."
Sponsored by the Ethics, Technology, and Human Interaction Center (ETHICx).
Abstract
This paper is concerned with the epistemic and normative roles of explainable AI in the context of machine responsibility. Questions about explainable AI typically deal with the "black-box problem," which concerns the epistemic accessibility of the internal workings of machine intelligences.
Conklin argues that overcoming the black-box problem and creating explainable AI is essential to making sense of AI responsibility. Questions about machine responsibility typically take one of two forms: those having to do with the responsible use of machines like AI, and those having to do with the responsibility gap, i.e., the problem of attributing responsibility for machines that execute problematic and potentially harmful behavior. This paper is concerned with the role of explainable AI in the latter context. To advance her argument, Conklin applies Nomy Arpaly's account of moral worth to identify the conditions under which an AI with moral status can be held accountable, especially with regard to blame. This account specifies certain epistemic and normative requirements on the success conditions for AI explainability, at least from the standpoint of AI accountability. In particular, it specifies the kind of information humans need access to in order to know that an AI is accountable for its behavior and to act on that knowledge. To count as explainable, that information must be available to us.
Contact For More Information
Michael Hoffmann
m.hoffmann@gatech.edu