Search Results for author: Christophe Marsala

Found 9 papers, 4 papers with code

A Robust Autoencoder Ensemble-Based Approach for Anomaly Detection in Text

no code implementations16 May 2024 Jeremie Pantin, Christophe Marsala

In this work, a robust autoencoder ensemble-based approach is introduced to address anomaly detection in text corpora.

Anomaly Detection Sentiment Analysis +1
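Below is a minimal, generic sketch of the idea an autoencoder ensemble relies on for text anomaly detection (not the authors' exact method): vectorise documents, train several small autoencoders with different random seeds, and score each document by a robust aggregate of its reconstruction errors. The vectoriser, model sizes, and the use of the median are illustrative assumptions.

```python
# Sketch only: ensemble of autoencoders for text anomaly detection.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neural_network import MLPRegressor

docs = [
    "the film was wonderful and moving",
    "a touching story with great acting",
    "terrible plot but decent soundtrack",
    "SYSTEM ERROR 0x0042 stack trace follows",   # likely anomaly
]

X = TfidfVectorizer().fit_transform(docs).toarray()

# Each MLP is trained to reconstruct its own input through a small bottleneck.
ensemble = []
for seed in range(5):
    ae = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=seed)
    ae.fit(X, X)
    ensemble.append(ae)

# Anomaly score = median reconstruction error across the ensemble
# (the median makes the score robust to a few badly trained members).
errors = np.stack([((ae.predict(X) - X) ** 2).mean(axis=1) for ae in ensemble])
scores = np.median(errors, axis=0)
print(scores.argmax())  # index of the highest-scoring (most suspicious) document
```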

Dynamic Interpretability for Model Comparison via Decision Rules

1 code implementation29 Sep 2023 Adam Rida, Marie-Jeanne Lesot, Xavier Renard, Christophe Marsala

Explainable AI (XAI) methods have mostly been built to investigate and shed light on single machine learning models and are not designed to capture and explain differences between multiple models effectively.

Management Model Selection
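A minimal sketch of comparing two models through decision rules (a generic illustration, not necessarily the paper's algorithm): label each point by whether the two models disagree on it, then fit a shallow decision tree on that label so its rules describe the disagreement regions. The dataset, models, and tree depth are arbitrary choices for the example.

```python
# Sketch only: describe where two models disagree via decision rules.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=2000, n_features=5, random_state=0)
model_a = LogisticRegression().fit(X, y)
model_b = RandomForestClassifier(random_state=0).fit(X, y)

# Binary target: do the two models disagree on this point?
disagree = (model_a.predict(X) != model_b.predict(X)).astype(int)

# A shallow surrogate tree turns the disagreement region into readable rules.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, disagree)
print(export_text(surrogate, feature_names=[f"x{i}" for i in range(5)]))
```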

Achieving Diversity in Counterfactual Explanations: a Review and Discussion

no code implementations10 May 2023 Thibault Laugel, Adulam Jeyasothy, Marie-Jeanne Lesot, Christophe Marsala, Marcin Detyniecki

In the field of Explainable Artificial Intelligence (XAI), counterfactual examples explain to a user the predictions of a trained decision model by indicating the modifications to be made to the instance so as to change its associated prediction.

counterfactual Explainable artificial intelligence +1
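For readers unfamiliar with counterfactual examples, here is a minimal, generic counterfactual search (not one of the specific methods surveyed in the review): change one feature of the instance at a time, in growing steps, until the trained classifier's prediction flips, and report the modification. The dataset, classifier, and search budget are assumptions made for the illustration.

```python
# Sketch only: naive single-feature counterfactual search.
from sklearn.datasets import load_iris
from sklearn.ensemble import GradientBoostingClassifier

X, y = load_iris(return_X_y=True)
clf = GradientBoostingClassifier(random_state=0).fit(X, y)

def one_feature_counterfactual(x, model, step=0.1, max_steps=100):
    """Return a copy of x whose prediction differs, changing a single feature."""
    original = model.predict([x])[0]
    for feature in range(len(x)):
        for direction in (+1.0, -1.0):
            for k in range(1, max_steps + 1):
                candidate = x.copy()
                candidate[feature] += direction * k * step
                if model.predict([candidate])[0] != original:
                    return candidate
    return None  # no counterfactual found within the search budget

x = X[0].copy()
cf = one_feature_counterfactual(x, clf)
print("original prediction:", clf.predict([x])[0])
print("counterfactual prediction:", clf.predict([cf])[0])
print("modification to apply:", cf - x)
```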

Integrating Prior Knowledge in Post-hoc Explanations

no code implementations25 Apr 2022 Adulam Jeyasothy, Thibault Laugel, Marie-Jeanne Lesot, Christophe Marsala, Marcin Detyniecki

In the field of eXplainable Artificial Intelligence (XAI), post-hoc interpretability methods aim at explaining to a user the predictions of a trained decision model.

counterfactual Counterfactual Explanation +2

The Dangers of Post-hoc Interpretability: Unjustified Counterfactual Explanations

1 code implementation22 Jul 2019 Thibault Laugel, Marie-Jeanne Lesot, Christophe Marsala, Xavier Renard, Marcin Detyniecki

Post-hoc interpretability approaches have been proven to be powerful tools to generate explanations for the predictions made by a trained black-box model.

counterfactual

Issues with post-hoc counterfactual explanations: a discussion

no code implementations11 Jun 2019 Thibault Laugel, Marie-Jeanne Lesot, Christophe Marsala, Marcin Detyniecki

Counterfactual post-hoc interpretability approaches have been proven to be useful tools to generate explanations for the predictions of a trained black-box classifier.

counterfactual

Defining Locality for Surrogates in Post-hoc Interpretablity

1 code implementation19 Jun 2018 Thibault Laugel, Xavier Renard, Marie-Jeanne Lesot, Christophe Marsala, Marcin Detyniecki

Local surrogate models, which approximate the local decision boundary of a black-box classifier, constitute one approach to generating explanations for the rationale behind an individual prediction made by the black-box.
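A minimal local-surrogate sketch in the spirit of LIME (a generic illustration; the paper's contribution concerns how locality itself should be defined): sample points around the instance, query the black-box on them, and fit a linear model weighted by proximity to the instance. The sampling scale and kernel width below are arbitrary assumptions.

```python
# Sketch only: local linear surrogate around one instance.
import numpy as np
from sklearn.datasets import make_moons
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

X, y = make_moons(n_samples=500, noise=0.2, random_state=0)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

x = X[0]                                                  # instance to explain
rng = np.random.default_rng(0)
neighbours = x + rng.normal(scale=0.5, size=(1000, 2))    # local sampling
proba = black_box.predict_proba(neighbours)[:, 1]         # black-box queries

# Weight neighbours by an exponential kernel on their distance to x.
weights = np.exp(-np.linalg.norm(neighbours - x, axis=1) ** 2 / 0.25)

surrogate = Ridge(alpha=1.0).fit(neighbours, proba, sample_weight=weights)
print("local feature importances:", surrogate.coef_)
```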

Inverse Classification for Comparison-based Interpretability in Machine Learning

6 code implementations22 Dec 2017 Thibault Laugel, Marie-Jeanne Lesot, Christophe Marsala, Xavier Renard, Marcin Detyniecki

In the context of post-hoc interpretability, this paper addresses the task of explaining the prediction of a classifier in the case where no information is available on either the classifier itself or the processed data (neither the training nor the test data).

BIG-bench Machine Learning Classification +1
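A simplified sketch of a comparison-based explanation under the setting described above (query access to `predict` only, no access to the model internals or the data): sample candidate points in spheres of growing radius around the instance and return the closest one that the classifier labels differently. This is only a rough illustration of the idea, not the paper's exact Growing Spheres algorithm; the dataset, classifier, and search parameters are assumptions for the example.

```python
# Sketch only: sphere-growing search for the nearest differently-classified point.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
black_box = SVC().fit(X, y)              # stands in for an opaque classifier

def comparison_explanation(x, predict, n_samples=500, step=0.25, max_radius=10.0):
    """Nearest point with a different prediction, found by random sphere search."""
    rng = np.random.default_rng(0)
    target = predict(x[None, :])[0]
    radius = step
    while radius <= max_radius:
        directions = rng.normal(size=(n_samples, x.shape[0]))
        directions /= np.linalg.norm(directions, axis=1, keepdims=True)
        candidates = x + radius * directions
        labels = predict(candidates)
        hits = candidates[labels != target]
        if len(hits):
            return hits[np.linalg.norm(hits - x, axis=1).argmin()]
        radius += step
    return None  # nothing found within the search budget

x = X[0]
enemy = comparison_explanation(x, black_box.predict)
if enemy is not None:
    print("nearest differently-classified point:", enemy)
    print("difference used as explanation:", enemy - x)
```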
