1 code implementation • 1 Oct 2021 • Sayeed Shafayet Chowdhury, Nitin Rathi, Kaushik Roy
We achieve top-1 accuracies of 93.05%, 70.15% and 67.71% on CIFAR-10, CIFAR-100 and ImageNet, respectively, using VGG16 with just 1 timestep.
no code implementations • 1 Jan 2021 • Nitin Rathi, Kaushik Roy
The trained membrane leak controls the flow of input information and attenuates irrelevant inputs to increase the activation sparsity in the convolutional and linear layers of the network.
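The trained-leak mechanism above can be sketched as a leaky integrate-and-fire (LIF) update in which a leak factor below 1 decays stale membrane potential, so weak or irrelevant inputs never reach threshold and the layer's spike activity stays sparse. This is an illustrative sketch, not the authors' code; names such as `lif_step`, `leak`, and `v_th` are assumptions.

```python
def lif_step(v, x, leak, v_th):
    """One timestep of an LIF neuron.

    v    : membrane potential from the previous timestep
    x    : weighted input current at this timestep
    leak : trainable leak factor in (0, 1); values < 1 attenuate
           stale inputs and increase spike sparsity (leak = 1 is IF)
    v_th : firing threshold
    """
    v = leak * v + x                  # leaky integration
    spike = 1.0 if v >= v_th else 0.0
    v = v - spike * v_th              # soft reset after a spike
    return v, spike

# Usage: the same weak, constant input drives many spikes without a
# leak, but none once the leak attenuates sub-threshold accumulation.
x_seq = [0.4] * 20
counts = {}
for leak in (1.0, 0.5):
    v, total = 0.0, 0
    for x in x_seq:
        v, s = lif_step(v, x, leak, v_th=1.0)
        total += int(s)
    counts[leak] = total
    print(f"leak={leak}: {total} spikes")
```

Making `leak` a trained parameter (per layer or per neuron) is what lets the network itself decide how much input history to retain.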
no code implementations • 9 Aug 2020 • Nitin Rathi, Kaushik Roy
The trained membrane leak controls the flow of input information and attenuates irrelevant inputs to increase the activation sparsity in the convolutional and dense layers of the network.
1 code implementation • ICLR 2020 • Nitin Rathi, Gopalakrishnan Srinivasan, Priyadarshini Panda, Kaushik Roy
We propose a hybrid training methodology: 1) take a converted SNN and use its weights and thresholds as an initialization for spike-based backpropagation, and 2) perform incremental spike-timing-dependent backpropagation (STDB) on this carefully initialized network to obtain an SNN that converges within a few epochs and requires fewer time steps for input processing.
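Step (1) of the hybrid methodology can be sketched as a standard ANN-to-SNN conversion: copy the ANN weights unchanged and set each layer's firing threshold from the maximum pre-activation observed on a small calibration batch. This is a hedged sketch under that assumption; `init_thresholds` and the per-layer maximum rule are illustrative, not the paper's exact procedure.

```python
import numpy as np

def init_thresholds(weights, calib_batch):
    """Return per-layer firing thresholds for an ANN->SNN conversion.

    weights     : list of dense weight matrices from the trained ANN
    calib_batch : small batch of inputs used to probe activations
    """
    thresholds = []
    a = calib_batch
    for W in weights:
        z = a @ W                      # pre-activation of this layer
        thresholds.append(float(z.max()))  # threshold = max pre-activation
        a = np.maximum(z, 0.0)         # ReLU, matching the source ANN
    return thresholds

# Usage on random stand-in weights (illustrative only)
rng = np.random.default_rng(0)
weights = [rng.standard_normal((8, 16)), rng.standard_normal((16, 4))]
batch = rng.standard_normal((32, 8))
print(init_thresholds(weights, batch))
```

Step (2), the incremental surrogate-gradient fine-tuning (STDB), then starts from these weights and thresholds rather than from scratch, which is why only a few epochs are needed.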
1 code implementation • ECCV 2020 • Saima Sharmin, Nitin Rathi, Priyadarshini Panda, Kaushik Roy
Our results suggest that SNNs trained with LIF neurons and a smaller number of timesteps are more robust than those trained with IF (integrate-and-fire) neurons and a larger number of timesteps.
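One intuition behind this finding can be shown in a few lines: an input perturbation injected at timestep 0 persists undiminished in an IF neuron's membrane potential, while an LIF neuron attenuates it geometrically (by leak**T after T sub-threshold steps). A minimal sketch, assuming the sub-threshold update v_t = leak * v_{t-1} + x_t; the function name is illustrative.

```python
def residual_perturbation(leak, delta, T):
    """Residual effect of an input perturbation `delta` applied at t=0
    on the membrane potential after T sub-threshold timesteps, under
    v_t = leak * v_{t-1} + x_t (the perturbation just decays by `leak`
    each step)."""
    v = delta
    for _ in range(T):
        v = leak * v
    return v

print(residual_perturbation(1.0, 0.2, 10))  # IF: perturbation intact
print(residual_perturbation(0.7, 0.2, 10))  # LIF: strongly attenuated
```

Fewer timesteps compound the effect: an adversary has fewer opportunities to inject charge before the decision is made.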
no code implementations • 12 Oct 2017 • Nitin Rathi, Priyadarshini Panda, Kaushik Roy
We present a sparse SNN topology in which non-critical connections are pruned to reduce the network size and the remaining critical synapses are weight-quantized to accommodate limited conductance levels.
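The two compression steps described above can be sketched as magnitude pruning followed by uniform quantization onto a small set of conductance levels. The pruning fraction, level count, and function name below are illustrative assumptions, not the paper's criticality criterion.

```python
import numpy as np

def prune_and_quantize(W, prune_frac=0.5, levels=4):
    """Zero the smallest-magnitude fraction of synapses, then round the
    surviving weights onto `levels` uniform steps per sign (modeling a
    device with a limited number of conductance states)."""
    cutoff = np.quantile(np.abs(W), prune_frac)
    mask = np.abs(W) >= cutoff           # keep only "critical" synapses
    w_max = np.abs(W[mask]).max()
    step = w_max / levels
    Wq = np.round(W / step) * step       # uniform quantization
    return Wq * mask                     # pruned weights stay at zero

# Usage on a random stand-in weight matrix (illustrative only)
rng = np.random.default_rng(0)
W = rng.standard_normal((6, 6))
Wq = prune_and_quantize(W)
print("sparsity:", (Wq == 0).mean())
print("distinct values:", np.unique(Wq).size)
```

With `levels=4`, every surviving weight lands on one of at most eight nonzero values, matching hardware whose synapses offer only a handful of programmable conductance states.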