no code implementations • 25 Apr 2024 • Sangwon Seo, Vaibhav Unhelkar
When faced with accomplishing a task, human experts exhibit intentional behavior.
1 code implementation • 19 Dec 2023 • Yao Rong, Peizhu Qian, Vaibhav Unhelkar, Enkelejda Kasneci
Informed by existing work, I-CEE explains the decisions of image classification models by providing the user with an informative subset of training data (i.e., example images), corresponding local explanations, and model decisions.
1 code implementation • 17 Dec 2023 • Abhinav Jain, Vaibhav Unhelkar
Offline imitation learning (IL) refers to learning expert behavior solely from demonstrations, without any additional interaction with the environment.
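To make the setting concrete, the simplest offline IL baseline is behavioral cloning: fit a policy directly to the expert's recorded state–action pairs, with no environment interaction. The sketch below is a minimal tabular version (not the paper's method); the grid-world states, actions, and demonstrations are hypothetical.

```python
from collections import Counter, defaultdict

def behavioral_cloning(demos):
    """Tabular behavioral cloning: for each observed state, pick the
    action the expert chose most often in the demonstrations.
    demos: iterable of (state, action) pairs recorded offline."""
    counts = defaultdict(Counter)
    for state, action in demos:
        counts[state][action] += 1
    # The cloned policy maps each seen state to its most frequent expert action.
    return {s: c.most_common(1)[0][0] for s, c in counts.items()}

# Hypothetical demonstrations in a toy grid world (states/actions are labels).
demos = [("s0", "right"), ("s0", "right"), ("s0", "up"),
         ("s1", "up"), ("s1", "up")]
policy = behavioral_cloning(demos)
print(policy["s0"])  # "right"
```

Because learning uses only the logged pairs, the cloned policy is undefined on states the expert never visited — the distribution-shift problem that motivates more sophisticated offline IL methods.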
no code implementations • 1 Mar 2023 • Sangwon Seo, Bing Han, Vaibhav Unhelkar
To improve teamwork in these and other domains, we present TIC: an automated intervention approach for improving coordination between team members.
1 code implementation • 20 Oct 2022 • Yao Rong, Tobias Leemann, Thai-trang Nguyen, Lisa Fiedler, Peizhu Qian, Vaibhav Unhelkar, Tina Seidel, Gjergji Kasneci, Enkelejda Kasneci
A better understanding of the needs of XAI users and human-centered evaluations of explainable models are both a necessity and a challenge.
no code implementations • 28 Mar 2021 • Ramya Ramakrishnan, Vaibhav Unhelkar, Ece Kamar, Julie Shah
Trained AI systems and expert decision makers can make errors that are often difficult to identify and understand.