1 code implementation • 25 Oct 2023 • Wei Fang, Yanqi Chen, Jianhao Ding, Zhaofei Yu, Timothée Masquelier, Ding Chen, Liwei Huang, Huihui Zhou, Guoqi Li, Yonghong Tian
Spiking neural networks (SNNs) aim to realize brain-inspired intelligence on neuromorphic chips with high energy efficiency by introducing neural dynamics and spike properties.
2 code implementations • 25 Sep 2023 • Ismail Khalfaoui-Hassani, Timothée Masquelier, Thomas Pellegrini
Dilated convolution with learnable spacings (DCLS) is a recent convolution method in which the positions of the kernel elements are learned throughout training by backpropagation.
1 code implementation • 30 Jun 2023 • Ilyass Hammouamri, Ismail Khalfaoui-Hassani, Timothée Masquelier
In SNNs, delays refer to the time needed for one spike to travel from one neuron to another.
Ranked #2 on Audio Classification on SSC
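In discrete time, a fixed synaptic delay is simply a shift of the presynaptic spike train. A minimal sketch (illustrative only; the paper actually learns these delays through DCLS rather than fixing them):

```python
def delayed_input(spikes, delay):
    # A synaptic delay d means the postsynaptic neuron at step t
    # sees the presynaptic spike emitted at step t - d.
    return [0] * delay + spikes[: len(spikes) - delay]

shifted = delayed_input([1, 0, 1, 0], 1)  # spikes arrive one step later
```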
1 code implementation • 1 Jun 2023 • Ismail Khalfaoui-Hassani, Thomas Pellegrini, Timothée Masquelier
Dilated Convolution with Learnable Spacings (DCLS) is a recently proposed variation of the dilated convolution in which the spacings between the non-zero elements in the kernel, or equivalently their positions, are learnable.
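The core trick can be sketched in 1-D (a simplification, not the authors' n-D PyTorch module): each weight is placed at a real-valued position inside the dilated kernel via linear interpolation, which spreads it over the two nearest integer taps and thereby makes the positions differentiable.

```python
def dcls_kernel_1d(weights, positions, kernel_size):
    # Build a dense kernel from a few weights at learnable fractional
    # positions. Linear interpolation splits each weight between its
    # two neighboring integer taps, so gradients flow to the positions.
    dense = [0.0] * kernel_size
    for w, p in zip(weights, positions):
        lo = int(p)       # integer tap just below p (p assumed >= 0)
        frac = p - lo     # fractional part decides the split
        if 0 <= lo < kernel_size:
            dense[lo] += w * (1.0 - frac)
        if 0 <= lo + 1 < kernel_size:
            dense[lo + 1] += w * frac
    return dense
```

A weight of 1.0 at position 1.5 in a size-4 kernel, for instance, is split equally between taps 1 and 2.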
1 code implementation • NeurIPS 2023 • Wei Fang, Zhaofei Yu, Zhaokun Zhou, Ding Chen, Yanqi Chen, Zhengyu Ma, Timothée Masquelier, Yonghong Tian
Vanilla spiking neurons in Spiking Neural Networks (SNNs) use charge-fire-reset neuronal dynamics, which can only be simulated serially and can hardly learn long-time dependencies.
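The serial bottleneck is visible in a minimal sketch of charge-fire-reset dynamics (parameter values are illustrative; this is the vanilla LIF loop the paper improves on, not its parallel spiking neuron): each step depends on the membrane potential left by the previous one, so the time dimension cannot be parallelized.

```python
def lif_forward(inputs, tau=2.0, v_threshold=1.0, v_reset=0.0):
    # Serial charge-fire-reset loop of a leaky integrate-and-fire neuron.
    v = v_reset
    spikes = []
    for x in inputs:
        v = v + (x - (v - v_reset)) / tau   # charge (leaky integration)
        s = 1 if v >= v_threshold else 0    # fire
        spikes.append(s)
        if s:
            v = v_reset                     # reset
    return spikes
```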
1 code implementation • 13 Feb 2023 • Javier Cuadrado, Ulysse Rançon, Benoît Cottereau, Francisco Barranco, Timothée Masquelier
Event-based sensors are an excellent fit for Spiking Neural Networks (SNNs), since the coupling of an asynchronous sensor with neuromorphic hardware can yield real-time systems with minimal power requirements.
1 code implementation • 23 Oct 2022 • Alireza Azadbakht, Saeed Reza Kheradpisheh, Ismail Khalfaoui-Hassani, Timothée Masquelier
However, most SOTA networks are too large for edge computing.
2 code implementations • 7 Dec 2021 • Ismail Khalfaoui-Hassani, Thomas Pellegrini, Timothée Masquelier
We call this method "Dilated Convolution with Learnable Spacings" (DCLS) and generalize it to the n-dimensional convolution case.
1 code implementation • 28 Sep 2021 • Ulysse Rançon, Javier Cuadrado-Anibarro, Benoit R. Cottereau, Timothée Masquelier
Here we solved it using an end-to-end neuromorphic approach, combining two event-based cameras and a Spiking Neural Network (SNN) with a slightly modified U-Net-like encoder-decoder architecture that we named StereoSpike.
1 code implementation • 27 Sep 2021 • Saeed Reza Kheradpisheh, Maryam Mirsadeghi, Timothée Masquelier
By assuming an IF neuron with rate coding as an approximation of ReLU, we backpropagate the error of the SNN through the proxy ANN to update the shared weights, simply by replacing the ANN's final output with that of the SNN.
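The rate-coding approximation behind the proxy can be sketched in a few lines (threshold and time horizon are assumed values, not the paper's settings): a non-leaky IF neuron with soft reset, driven by a constant input, fires at a rate close to ReLU of that input.

```python
def if_rate(x, T=100, v_threshold=1.0):
    # Firing rate of a non-leaky IF neuron over T steps of constant
    # input x. For 0 <= x <= 1 the rate approximates ReLU(x).
    v, spikes = 0.0, 0
    for _ in range(T):
        v += x                    # integrate (no leak)
        if v >= v_threshold:
            spikes += 1
            v -= v_threshold      # soft reset keeps the residual charge
    return spikes / T
```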
no code implementations • 31 Aug 2021 • Maryam Mirsadeghi, Majid Shalchian, Saeed Reza Kheradpisheh, Timothée Masquelier
To do so, we consider a convolutional SNN (CSNN) with two sets of weights: real-valued weights that are updated in the backward pass and their signs, binary weights, that are employed in the feedforward process.
Ranked #11 on Image Classification on Fashion-MNIST
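A minimal sketch of the two-weight scheme (the update shown is plain gradient descent for illustration, not the paper's exact learning rule): the forward pass uses only the signs, while the backward pass updates the underlying real-valued weights.

```python
def binary_weight_step(real_w, grad, lr=0.5):
    # Feedforward uses binary weights (the signs of the real weights);
    # the backward pass accumulates updates in the real-valued weights.
    binary_w = [1.0 if w >= 0 else -1.0 for w in real_w]
    new_real_w = [w - lr * g for w, g in zip(real_w, grad)]
    return binary_w, new_real_w
```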
1 code implementation • 1 Mar 2021 • Thomas Pellegrini, Timothée Masquelier
Multi-label audio tagging consists of assigning sets of tags to audio recordings.
1 code implementation • NeurIPS 2021 • Wei Fang, Zhaofei Yu, Yanqi Chen, Tiejun Huang, Timothée Masquelier, Yonghong Tian
Previous Spiking ResNet mimics the standard residual block in ANNs and simply replaces ReLU activation layers with spiking neurons, which suffers from the degradation problem and can hardly implement residual learning.
no code implementations • 24 Jan 2021 • Ali Rasteh, Florian Delpech, Carlos Aguilar-Melchor, Romain Zimmer, Saeed Bagheri Shouraki, Timothée Masquelier
Internet traffic recognition is an essential tool for access providers, since recognizing the traffic categories of the different data packets transmitted on a network helps them define adapted priorities.
1 code implementation • 13 Nov 2020 • Thomas Pellegrini, Romain Zimmer, Timothée Masquelier
Deep Neural Networks (DNNs) are the current state-of-the-art models in many speech-related tasks.
1 code implementation • 8 Jul 2020 • Saeed Reza Kheradpisheh, Maryam Mirsadeghi, Timothée Masquelier
We recently proposed the S4NN algorithm, essentially an adaptation of backpropagation to multilayer spiking neural networks that use simple non-leaky integrate-and-fire neurons and a form of temporal coding known as time-to-first-spike coding.
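Time-to-first-spike coding can be sketched as a simple intensity-to-latency mapping (the linear map and t_max value are assumptions for illustration, not S4NN's exact encoder): stronger inputs fire earlier.

```python
def ttfs_encode(intensities, t_max=100):
    # Time-to-first-spike coding: each input in [0, 1] is turned into a
    # single spike whose latency decreases with intensity.
    return [round(t_max * (1.0 - x)) for x in intensities]
```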
2 code implementations • 22 Nov 2019 • Romain Zimmer, Thomas Pellegrini, Srisht Fateh Singh, Timothée Masquelier
Indeed, the most commonly used spiking neuron model, the leaky integrate-and-fire neuron, obeys a differential equation which can be approximated using discrete time steps, leading to a recurrent relation for the potential.
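A sketch of that discretization (membrane-potential recurrence only, with spiking and reset omitted; the decay factor alpha = exp(-dt/tau) is one common choice):

```python
import math

def lif_potential(inputs, tau=10.0, dt=1.0):
    # Discretizing tau * dV/dt = -V + I(t) yields the recurrence
    # V[t] = alpha * V[t-1] + (1 - alpha) * I[t], i.e. an RNN-like
    # update over the membrane potential.
    alpha = math.exp(-dt / tau)
    v, trace = 0.0, []
    for i in inputs:
        v = alpha * v + (1.0 - alpha) * i
        trace.append(v)
    return trace
```

Under a constant unit input, the potential rises monotonically toward 1.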
1 code implementation • 21 Oct 2019 • Saeed Reza Kheradpisheh, Timothée Masquelier
In particular, in the readout layer, the first neuron to fire determines the class of the stimulus.
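The readout rule is simple to state in code (a sketch; here `None` marks a neuron that never fires):

```python
def first_spike_readout(spike_times):
    # The predicted class is the index of the output neuron with the
    # earliest spike time; neurons that never fire are ignored.
    best, best_t = None, float("inf")
    for idx, t in enumerate(spike_times):
        if t is not None and t < best_t:
            best, best_t = idx, t
    return best
```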
1 code implementation • 6 Mar 2019 • Milad Mozafari, Mohammad Ganjtabesh, Abbas Nowzari-Dalini, Timothée Masquelier
Application of deep convolutional spiking neural networks (SNNs) to artificial intelligence (AI) tasks has recently gained a lot of interest since SNNs are hardware-friendly and energy-efficient.
1 code implementation • 31 Mar 2018 • Milad Mozafari, Mohammad Ganjtabesh, Abbas Nowzari-Dalini, Simon J. Thorpe, Timothée Masquelier
We trained it using a combination of spike-timing-dependent plasticity (STDP) for the lower layers and reward-modulated STDP (R-STDP) for the higher ones.
no code implementations • 1 Mar 2018 • Timothée Masquelier, Saeed Reza Kheradpisheh
Here we investigated how a single spiking neuron can optimally respond to one given pattern (localist coding), or to either one of several patterns (distributed coding, i.e., the neuron's response is ambiguous but the identity of the pattern could be inferred from the response of multiple neurons), but not to random inputs.
no code implementations • 25 May 2017 • Milad Mozafari, Saeed Reza Kheradpisheh, Timothée Masquelier, Abbas Nowzari-Dalini, Mohammad Ganjtabesh
In the highest layers, each neuron was assigned to an object category, and it was assumed that the stimulus category was the category of the first neuron to fire.
no code implementations • 29 Mar 2017 • Matin N. Ashtiani, Saeed Reza Kheradpisheh, Timothée Masquelier, Mohammad Ganjtabesh
This means that low-frequency information is sufficient for the superordinate level, but not for the basic and subordinate levels.
1 code implementation • 4 Nov 2016 • Saeed Reza Kheradpisheh, Mohammad Ganjtabesh, Simon J. Thorpe, Timothée Masquelier
Coding was very sparse, with only a few thousand spikes per image, and in some cases the object category could be reasonably well inferred from the activity of a single higher-order neuron.
no code implementations • 24 Oct 2016 • Timothée Masquelier
Our results indicate that a relatively small $\tau$ (at most a few tens of ms) is usually optimal, even when the pattern is much longer.
no code implementations • 21 Apr 2016 • Saeed Reza Kheradpisheh, Masoud Ghodrati, Mohammad Ganjtabesh, Timothée Masquelier
This feed-forward architecture has inspired a new generation of bio-inspired computer vision systems called deep convolutional neural networks (DCNN), which are currently the best algorithms for object recognition in natural images.
no code implementations • 17 Aug 2015 • Saeed Reza Kheradpisheh, Masoud Ghodrati, Mohammad Ganjtabesh, Timothée Masquelier
Deep convolutional neural networks (DCNNs) have attracted much attention recently, and have been shown to recognize thousands of object categories in natural image databases.
no code implementations • 15 Apr 2015 • Saeed Reza Kheradpisheh, Mohammad Ganjtabesh, Timothée Masquelier
The retinal image of surrounding objects varies tremendously due to changes in position, size, pose, illumination conditions, background context, occlusion, noise, and nonrigid deformations.