no code implementations • 29 Sep 2021 • Gabrielle Ras, Erdi Çallı, Marcel van Gerven
Perturbation methods are model-agnostic methods used to generate heatmaps to explain black-box algorithms such as deep neural networks.
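A minimal sketch of one common perturbation method, occlusion: slide a patch over the input, blank it out, and record how much the black-box score drops. The function names and patch-based scheme here are illustrative assumptions, not the paper's specific method.

```python
import numpy as np

def occlusion_heatmap(model_score, image, patch=4, baseline=0.0):
    """Perturbation heatmap: occlude each patch and record the score drop.

    model_score is any callable mapping an image to a scalar class score,
    treated as a black box (illustrative name, not from the paper).
    """
    h, w = image.shape
    base = model_score(image)
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = baseline
            # A larger score drop means the occluded region mattered more.
            heat[i // patch, j // patch] = base - model_score(occluded)
    return heat

# Toy black box: the score is the mean of a bright central region.
score = lambda x: x[4:12, 4:12].mean()
img = np.zeros((16, 16))
img[4:12, 4:12] = 1.0
hm = occlusion_heatmap(score, img)
```

Patches overlapping the bright center receive high heat values; patches outside it receive zero, since occluding them leaves the score unchanged.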
Explainable Artificial Intelligence (XAI) • Image Classification
no code implementations • 1 Jan 2021 • Gabrielle Ras, Luca Ambrogioni, Pim Haselager, Marcel van Gerven, Umut Güçlü
In a 3TConv, the 3D convolutional filter is obtained by learning a 2D filter together with a set of temporal transformation parameters, yielding a sparse filter that requires fewer parameters.
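To illustrate the idea, the sketch below builds a 3D filter from a single 2D filter plus per-frame transformation parameters. Here each temporal slice is the 2D filter scaled and spatially shifted; the specific form of the transformation (scale and shift) is an assumption for illustration, not necessarily the exact parameterization used in 3TConv.

```python
import numpy as np

def build_3t_filter(filter_2d, scales, shifts):
    """Construct a (T x K x K) filter from one 2D filter and per-frame
    transformation parameters (illustrative scale-and-shift form)."""
    frames = []
    for s, (dy, dx) in zip(scales, shifts):
        # Each temporal slice reuses the same 2D filter, transformed.
        frame = s * np.roll(filter_2d, shift=(dy, dx), axis=(0, 1))
        frames.append(frame)
    return np.stack(frames)

k2 = np.random.randn(3, 3)
w3 = build_3t_filter(k2,
                     scales=[1.0, 0.5, 0.25],
                     shifts=[(0, 0), (0, 1), (0, 2)])
# Parameter count: 9 (shared 2D filter) + 3 frames x 3 params each,
# versus 27 for a dense 3x3x3 filter; the savings grow with filter
# size and temporal extent.
```

Because all temporal slices derive from one shared 2D filter, the learned parameter count scales with the number of frames times a few transformation parameters, rather than with the full 3D filter volume.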
no code implementations • 1 Jan 2021 • Thirza Dado, Yağmur Güçlütürk, Luca Ambrogioni, Gabrielle Ras, Sander E. Bosch, Marcel van Gerven, Umut Güçlü
We introduce a new framework for hyperrealistic reconstruction of perceived naturalistic stimuli from brain recordings.
no code implementations • 30 Apr 2020 • Gabrielle Ras, Ning Xie, Marcel van Gerven, Derek Doran
The field guide: i) introduces three simple dimensions defining the space of foundational methods that contribute to explainable deep learning, ii) discusses how model explanations are evaluated, iii) places explainability in the context of other related deep learning research areas, and iv) elaborates on user-oriented explanation design and potential future directions for explainable deep learning.
no code implementations • 20 Mar 2018 • Gabrielle Ras, Marcel van Gerven, Pim Haselager
Different kinds of users are identified and their concerns revealed, relevant statements from the General Data Protection Regulation are analyzed in the context of Deep Neural Networks (DNNs), a taxonomy for classifying existing explanation methods is introduced, and finally, the various classes of explanation methods are analyzed to assess whether user concerns are justified.