Explaining Reinforcement Learning Agents through Counterfactual Action Outcomes

Yotam Amitai, Yael Septon, Ofra Amir

Research output: Contribution to journal › Conference article › peer-review

Abstract

Explainable reinforcement learning (XRL) methods aim to elucidate agent policies and decision-making processes. The majority of XRL approaches focus on local explanations, which seek to shed light on the reasons an agent acts the way it does at a specific world state. While such explanations are both useful and necessary, they typically do not portray the outcomes of the agent's selected choice of action. In this work, we propose “COViz”, a new local explanation method that visually compares the outcome of an agent's chosen action to a counterfactual one. In contrast to most local explanations, which provide state-limited observations of the agent's motivation, our method depicts alternative trajectories the agent could have taken from the given state, along with their outcomes. We evaluated the usefulness of COViz in supporting people's understanding of agents' preferences and compared it with reward decomposition, a local explanation method that describes an agent's expected utility for different actions by decomposing it into meaningful reward types. Furthermore, we examined the complementary benefits of integrating both methods. Our results show that such integration significantly improved participants' performance.
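To make the baseline method concrete, the sketch below illustrates reward decomposition in Python: an action's total Q-value is the sum of per-component Q-values, Q(s,a) = Σ_c Q_c(s,a), so an explanation can show how each reward type contributes to the agent's chosen action versus a counterfactual one. This is a minimal illustrative sketch, not the authors' implementation; the component names and the stand-in Q-function are assumptions.

```python
# Minimal sketch of a reward-decomposition explanation, assuming a
# hypothetical agent whose Q-value splits into named reward components.
import numpy as np

REWARD_COMPONENTS = ["score", "safety", "progress"]  # hypothetical reward types

def component_q_values(state: np.ndarray, num_actions: int) -> np.ndarray:
    """Return a (num_components, num_actions) array of per-component
    Q-values. A real agent would learn one Q-head per reward type;
    here we stand in random values for demonstration."""
    rng = np.random.default_rng(abs(hash(state.tobytes())) % 2**32)
    return rng.normal(size=(len(REWARD_COMPONENTS), num_actions))

def explain_action(state: np.ndarray, num_actions: int = 4) -> None:
    """Print a reward-decomposition explanation contrasting the agent's
    greedy action with the second-best (counterfactual) action."""
    q_c = component_q_values(state, num_actions)
    q_total = q_c.sum(axis=0)                    # Q(s,a) = sum_c Q_c(s,a)
    chosen = int(q_total.argmax())               # the agent's greedy action
    counterfactual = int(q_total.argsort()[-2])  # second-best action
    print(f"chosen action {chosen}, counterfactual action {counterfactual}")
    for name, row in zip(REWARD_COMPONENTS, q_c):
        print(f"  {name:>8}: chosen={row[chosen]:+.2f} "
              f"counterfactual={row[counterfactual]:+.2f}")

explain_action(np.array([0.1, 0.5, -0.3]))
```

COViz, by contrast, would roll out both actions from the given state and visualize the two resulting trajectories side by side, rather than attributing a single state's utility to reward types.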

Original language: English
Pages (from-to): 10003-10011
Number of pages: 9
Journal: Proceedings of the AAAI Conference on Artificial Intelligence
Volume: 38
Issue number: 9
DOIs
State: Published - 25 Mar 2024
Event: 38th AAAI Conference on Artificial Intelligence, AAAI 2024 - Vancouver, Canada
Duration: 20 Feb 2024 – 27 Feb 2024

ASJC Scopus subject areas

  • Artificial Intelligence

