no code implementations • RepL4NLP (ACL) 2022 • Romain Bielawski, Benjamin Devillers, Tim Van De Cruys, Rufin VanRullen
We compare CLIP’s visual stream against two visually trained networks and CLIP’s textual stream against two linguistically trained networks, as well as multimodal combinations of these networks.
no code implementations • 18 Mar 2024 • Mitja Nikolaus, Milad Mozafari, Nicholas Asher, Leila Reddy, Rufin VanRullen
Previous studies have shown that it is possible to map brain activation data of subjects viewing images onto the feature representation space of not only vision models (modality-specific decoding) but also language models (cross-modal decoding).
no code implementations • 7 Mar 2024 • Léopold Maytié, Benjamin Devillers, Alexandre Arnold, Rufin VanRullen
First, we train a 'Global Workspace' to exploit information collected about the environment via two input modalities (a visual input, or an attribute vector representing the state of the agent and/or its environment).
no code implementations • 13 Feb 2024 • Colin Decourt, Rufin VanRullen, Didier Salle, Thomas Oberlin
In recent years, driven by the need for safer and more autonomous transport systems, the automotive industry has shifted toward integrating a growing number of Advanced Driver Assistance Systems (ADAS).
no code implementations • 17 Aug 2023 • Patrick Butlin, Robert Long, Eric Elmoznino, Yoshua Bengio, Jonathan Birch, Axel Constant, George Deane, Stephen M. Fleming, Chris Frith, Xu Ji, Ryota Kanai, Colin Klein, Grace Lindsay, Matthias Michel, Liad Mudrik, Megan A. K. Peters, Eric Schwitzgebel, Jonathan Simon, Rufin VanRullen
From these theories we derive "indicator properties" of consciousness, elucidated in computational terms that allow us to assess AI systems for these properties.
no code implementations • 18 Jul 2023 • Sabine Muzellec, Thomas Fel, Victor Boutin, Léo Andéol, Rufin VanRullen, Thomas Serre
Attribution methods correspond to a class of explainability methods (XAI) that aim to assess how individual inputs contribute to a model's decision-making process.
1 code implementation • 27 Jun 2023 • Benjamin Devillers, Léopold Maytié, Rufin VanRullen
Recent deep learning models can efficiently combine inputs from different modalities (e.g., images and text) and learn to align their latent representations, or to translate signals from one domain to another (as in image captioning or text-to-image generation).
no code implementations • 19 May 2023 • Matteo Ferrante, Furkan Ozcelik, Tommaso Boccato, Rufin VanRullen, Nicola Toschi
Our brain captioning approach outperforms existing methods, while our image reconstruction pipeline generates plausible images with improved spatial relationships.
no code implementations • 12 Apr 2023 • Grégory Faye, Guilhem Fouilhé, Rufin VanRullen
Similarly, it is possible to determine in which direction, and at what speed neural activity propagates in the system.
1 code implementation • 9 Mar 2023 • Furkan Ozcelik, Rufin VanRullen
In the second stage, we use the image-to-image framework of a latent diffusion model (Versatile Diffusion) conditioned on predicted multimodal (text and visual) features, to generate final reconstructed images.
1 code implementation • 21 Dec 2022 • Colin Decourt, Rufin VanRullen, Didier Salle, Thomas Oberlin
Exploiting temporal information (e.g., multiple frames) has been shown to help better capture the dynamics of objects and, therefore, variations in their shape.
1 code implementation • 2022 IEEE Intelligent Vehicles Symposium (IV) 2022 • Colin Decourt, Rufin VanRullen, Didier Salle, Thomas Oberlin
Due to the small number of raw automotive radar datasets and the low resolution of such radar sensors, automotive radar object detection has been little explored with deep learning models compared to camera- and lidar-based approaches.
1 code implementation • 25 Feb 2022 • Furkan Ozcelik, Bhavin Choksi, Milad Mozafari, Leila Reddy, Rufin VanRullen
Reconstructing perceived natural images from fMRI signals is one of the most engaging topics of neural decoding research.
no code implementations • 4 Feb 2022 • Mathieu Chalvidal, Thomas Serre, Rufin VanRullen
Deep Reinforcement Learning has demonstrated the potential of neural networks tuned with gradient descent for solving complex tasks in well-delimited environments.
1 code implementation • NeurIPS Workshop SVRHM 2021 • Bhavin Choksi, Milad Mozafari, Rufin VanRullen, Leila Reddy
The human hippocampus possesses "concept cells", neurons that fire when presented with stimuli belonging to a specific concept, regardless of the modality.
no code implementations • 8 Aug 2021 • Mohit Vaishnav, Remi Cadene, Andrea Alamia, Drew Linsley, Rufin VanRullen, Thomas Serre
Our analysis reveals a novel taxonomy of visual reasoning tasks, which can be primarily explained by both the type of relations (same-different vs. spatial-relation judgments) and the number of relations used to compose the underlying rules.
1 code implementation • 8 Jun 2021 • Andrea Alamia, Milad Mozafari, Bhavin Choksi, Rufin VanRullen
That is, we let the optimization process determine whether top-down connections and predictive coding dynamics are functionally beneficial.
2 code implementations • NeurIPS 2021 • Bhavin Choksi, Milad Mozafari, Callum Biggs O'May, Benjamin Ador, Andrea Alamia, Rufin VanRullen
The reconstruction errors are used to iteratively update the network's representations across timesteps, and to optimize the network's feedback weights over the natural image dataset, a form of unsupervised training.
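The feedback scheme this entry describes can be illustrated with a toy example: a top-down reconstruction of the input is compared against the actual input, and the resulting error iteratively refines the representation. A minimal NumPy sketch, in which the architecture, weights, dimensions, and learning rate are all purely hypothetical (not the paper's actual model):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-layer network: feedforward weights W, feedback weights B
# (dimensions and names are illustrative, not the paper's architecture).
n_in, n_hid = 16, 8
W = rng.normal(scale=0.1, size=(n_hid, n_in))    # feedforward weights
B = rng.normal(scale=0.1, size=(n_in, n_hid))    # feedback (generative) weights

x = rng.normal(size=n_in)          # flattened input image
h = W @ x                          # initial feedforward representation
err0 = np.linalg.norm(x - B @ h)   # initial reconstruction error
lr = 0.1

for t in range(20):
    x_hat = B @ h                  # top-down reconstruction of the input
    err = x - x_hat                # reconstruction (prediction) error
    h += lr * (B.T @ err)          # gradient step on ||x - B h||^2 w.r.t. h
```

Each iteration plays the role of one "timestep" in the entry's description; training the feedback weights B on the same error signal would correspond to the unsupervised phase.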
1 code implementation • CoNLL (EMNLP) 2021 • Benjamin Devillers, Bhavin Choksi, Romain Bielawski, Rufin VanRullen
Vision models trained on multimodal datasets can benefit from the wide availability of large image-caption datasets.
no code implementations • 12 Apr 2021 • Rufin VanRullen, Andrea Alamia
We demonstrate the usefulness of this brain-inspired Global Attention Agreement network (GAttANet) for various convolutional backbones (from a simple 5-layer toy model to a standard ResNet50 architecture) and datasets (CIFAR10, CIFAR100, ImageNet-1k).
2 code implementations • NeurIPS Workshop SVRHM 2020 • Zhaoyang Pang, Callum Biggs O'May, Bhavin Choksi, Rufin VanRullen
Finally, we validated our conclusions in a deeper network (VGG): adding the same predictive coding feedback dynamics again leads to the perception of illusory contours.
no code implementations • 4 Dec 2020 • Rufin VanRullen, Ryota Kanai
Recent advances in deep learning have allowed Artificial Intelligence (AI) to reach near human-level performance in many sensory, perceptual, linguistic or cognitive tasks.
1 code implementation • NeurIPS Workshop SVRHM 2020 • Bhavin Choksi, Milad Mozafari, Callum Biggs O'May, B. Ador, Andrea Alamia, Rufin VanRullen
The reconstruction errors are used to iteratively update the network's representations across timesteps, and to optimize the network's feedback weights over the natural image dataset, a form of unsupervised training.
no code implementations • ICLR 2021 • Mathieu Chalvidal, Matthew Ricci, Rufin VanRullen, Thomas Serre
Despite their elegant formulation and lightweight memory cost, neural ordinary differential equations (NODEs) suffer from known representational limitations.
no code implementations • 31 Jan 2020 • Milad Mozafari, Leila Reddy, Rufin VanRullen
Then, we applied this mapping to the fMRI activity patterns obtained from 50 new test images from 50 unseen categories in order to retrieve their latent vectors, and reconstruct the corresponding images.
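The mapping step this entry describes amounts to learning a linear function from fMRI voxel space to a generative model's latent space, then applying it to held-out brain activity. A minimal sketch on synthetic data using closed-form ridge regression; all dimensions, variable names, and the regularization strength are hypothetical, not the study's actual setup:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative sizes (hypothetical): voxels -> latent features.
n_train, n_vox, n_lat = 200, 50, 10
W_true = rng.normal(size=(n_vox, n_lat))                    # synthetic ground truth
X = rng.normal(size=(n_train, n_vox))                       # fMRI patterns (training)
Z = X @ W_true + 0.01 * rng.normal(size=(n_train, n_lat))   # target latent vectors

# Closed-form ridge regression: W = (X^T X + lam * I)^(-1) X^T Z
lam = 1.0
W = np.linalg.solve(X.T @ X + lam * np.eye(n_vox), X.T @ Z)

# Apply the learned mapping to a held-out test pattern to retrieve a latent
# vector, which would then be fed to a generative model for reconstruction.
x_test = rng.normal(size=n_vox)
z_pred = x_test @ W
```

The predicted latent vector stands in for the retrieved vectors mentioned in the entry; the final image reconstruction step would require the generative model itself.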
no code implementations • 13 Feb 2019 • Andrea Alamia, Victor Gauducheau, Dimitri Paisios, Rufin VanRullen
Our results show that both architectures can 'learn' (via error back-propagation) the grammars after the same number of training sequences as humans do, but recurrent networks perform closer to humans than feedforward ones, irrespective of the grammar complexity level.
Neurons and Cognition • Human-Computer Interaction
1 code implementation • 9 Oct 2018 • Rufin VanRullen, Leila Reddy
While objects from different categories can be reliably decoded from fMRI brain response patterns, it has proved more difficult to distinguish visually similar inputs, such as different instances of the same category.
Human-Computer Interaction • Neurons and Cognition