no code implementations • 3 Feb 2023 • Marco Bertolini, Van-Khoa Le, Jake Pencharz, Andreas Poehlmann, Djork-Arné Clevert, Santiago Villalba, Floriane Montanari
We quantitatively validate our methods by measuring the agreement of our explanations' heatmaps with pathologists' annotations, as well as with predictions from a segmentation model trained on such annotations.
Explainable Artificial Intelligence (XAI) • whole slide images
no code implementations • 18 Feb 2022 • Marco Bertolini, Djork-Arné Clevert, Floriane Montanari
Finally, we show that adopting our proposed scores as constraints when training a representation learning model improves its downstream performance.
1 code implementation • 11 May 2021 • Ryan Henderson, Djork-Arné Clevert, Floriane Montanari
Rationalizing which parts of a molecule drive the predictions of a molecular graph convolutional neural network (GCNN) can be difficult.
no code implementations • 9 Oct 2020 • Ryan Henderson, Djork-Arné Clevert, Floriane Montanari
Due to the nature of deep learning approaches, it is inherently difficult to understand which aspects of a molecular graph drive the predictions of the network.
2 code implementations • journal 2018 • Robin Winter, Floriane Montanari, Frank Noé, and Djork-Arné Clevert
In this work, we propose to exploit the powerful ability of deep neural networks to learn a feature representation from low-level encodings of a huge corpus of chemical structures.