no code implementations • 12 Nov 2021 • Yanou Ramon, Sandra C. Matz, R. A. Farrokhnia, David Martens
In this paper, we show how Explainable AI (XAI) can help domain experts and data subjects validate, question, and improve models that classify psychological traits from digital footprints.
no code implementations • 10 Mar 2020 • Yanou Ramon, David Martens, Theodoros Evgeniou, Stiene Praet
Machine learning models built on behavioral and textual data can be highly accurate, but are often very difficult to interpret.
3 code implementations • 4 Dec 2019 • Yanou Ramon, David Martens, Foster Provost, Theodoros Evgeniou
This study aligns the recently proposed Local Interpretable Model-agnostic Explanations (LIME) and Shapley Additive Explanations (SHAP) with the notion of counterfactual explanations, and empirically benchmarks their effectiveness and efficiency against SEDC using a collection of 13 data sets.
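The counterfactual notion above can be illustrated with a minimal sketch: greedily remove (zero out) active features until the predicted class flips, and report the removed set as the explanation. This is only an illustration of the idea in SEDC-style counterfactual search; the toy linear classifier, function names, and removal heuristic here are assumptions for the example, not the paper's implementation.

```python
import numpy as np

def predict_positive(x, weights, bias=0.0):
    """Toy linear classifier (assumption for this sketch): positive iff score > 0."""
    return float(x @ weights) + bias > 0.0

def counterfactual_by_removal(x, weights, bias=0.0):
    """Greedily zero out active features until the predicted class flips.

    Returns the list of removed feature indices (the counterfactual
    explanation), or None if no counterfactual is found.
    """
    x = x.astype(float).copy()
    removed = []
    while predict_positive(x, weights, bias):
        active = [i for i in range(len(x)) if x[i] != 0.0]
        if not active:
            return None  # all features removed, class never flipped
        # Heuristic: drop the active feature contributing most to the score.
        i = max(active, key=lambda j: x[j] * weights[j])
        x[i] = 0.0
        removed.append(i)
    return removed

# Hypothetical example: removing features 0 and 1 flips the prediction.
weights = np.array([2.0, 1.0, -0.5])
x = np.array([1.0, 1.0, 1.0])
print(counterfactual_by_removal(x, weights))  # → [0, 1]
```

The removed indices form a minimal-style "evidence counterfactual": the presence of exactly those features was responsible for the positive classification.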