1 code implementation • CVPR 2023 • Mehdi Zemni, Mickaël Chen, Éloi Zablocki, Hédi Ben-Younes, Patrick Pérez, Matthieu Cord
We conduct a set of experiments on counterfactual explanation benchmarks for driving scenes, and we show that our method can be adapted beyond classification, e.g., to explain semantic segmentation models.
1 code implementation • 17 Nov 2021 • Paul Jacob, Éloi Zablocki, Hédi Ben-Younes, Mickaël Chen, Patrick Pérez, Matthieu Cord
In this work, we address the problem of producing counterfactual explanations for high-quality images and complex scenes.
1 code implementation • 16 Sep 2021 • Hédi Ben-Younes, Éloi Zablocki, Mickaël Chen, Patrick Pérez, Matthieu Cord
Learning-based trajectory prediction models have achieved great success, with the promise of leveraging contextual information in addition to motion history.
no code implementations • 13 Jan 2021 • Éloi Zablocki, Hédi Ben-Younes, Patrick Pérez, Matthieu Cord
The concept of explainability has several facets, and the need for explainability is strong in driving, a safety-critical application.
1 code implementation • 9 Dec 2020 • Hédi Ben-Younes, Éloi Zablocki, Patrick Pérez, Matthieu Cord
In this era of active development of autonomous vehicles, it becomes crucial to provide driving systems with the capacity to explain their decisions.