no code implementations • 28 Jul 2021 • Upol Ehsan, Samir Passi, Q. Vera Liao, Larry Chan, I-Hsiang Lee, Michael Muller, Mark O. Riedl
Explainability of AI systems is critical for users to take informed actions.
no code implementations • 11 Jan 2019 • Upol Ehsan, Pradyumna Tambwekar, Larry Chan, Brent Harrison, Mark Riedl
The second study further explores user preferences between the generated rationales with regard to confidence in the autonomous agent, communicating failure, and communicating unexpected behavior.
no code implementations • 25 Feb 2017 • Upol Ehsan, Brent Harrison, Larry Chan, Mark O. Riedl
Results of these evaluations show that neural machine translation can accurately generate rationalizations that describe agent behavior, and that these rationalizations are more satisfying to humans than alternative methods of explanation.