Search Results for author: Nitin Rathi

Found 6 papers, 3 papers with code

One Timestep is All You Need: Training Spiking Neural Networks with Ultra Low Latency

1 code implementation • 1 Oct 2021 • Sayeed Shafayet Chowdhury, Nitin Rathi, Kaushik Roy

We achieve top-1 accuracy of 93.05%, 70.15% and 67.71% on CIFAR-10, CIFAR-100 and ImageNet, respectively, using VGG16 with just 1 timestep.

DIET-SNN: A Low-Latency Spiking Neural Network with Direct Input Encoding & Leakage and Threshold Optimization

no code implementations • 1 Jan 2021 • Nitin Rathi, Kaushik Roy

The trained membrane leak controls the flow of input information and attenuates irrelevant inputs to increase the activation sparsity in the convolutional and linear layers of the network.

Computational Efficiency • Image Classification

DIET-SNN: Direct Input Encoding With Leakage and Threshold Optimization in Deep Spiking Neural Networks

no code implementations • 9 Aug 2020 • Nitin Rathi, Kaushik Roy

The trained membrane leak controls the flow of input information and attenuates irrelevant inputs to increase the activation sparsity in the convolutional and dense layers of the network.

Computational Efficiency • Image Classification
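
The two DIET-SNN entries above describe training the membrane leak and firing threshold together with the network weights. Below is a minimal PyTorch-style sketch of a LIF neuron with a learnable leak and threshold; the class name, initial values, and soft-reset choice are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class LearnableLIF(nn.Module):
    """Leaky integrate-and-fire neuron with a trainable leak and threshold.
    Illustrative sketch only; names and reset scheme are assumptions, not the
    authors' code."""
    def __init__(self, init_leak=0.9, init_threshold=1.0):
        super().__init__()
        self.leak = nn.Parameter(torch.tensor(init_leak))            # trained membrane leak
        self.threshold = nn.Parameter(torch.tensor(init_threshold))  # trained firing threshold

    def forward(self, input_current, mem):
        # Leaky integration: the learned leak attenuates the carried-over potential,
        # suppressing irrelevant inputs and increasing spike (activation) sparsity.
        mem = self.leak * mem + input_current
        spike = (mem >= self.threshold).float()   # hard threshold; training would
                                                  # substitute a surrogate gradient
        mem = mem - spike * self.threshold        # soft reset after a spike
        return spike, mem
```

With direct input encoding, the analog pixel values would be applied as input current to the first layer at every timestep rather than being converted to spike trains.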

Enabling Deep Spiking Neural Networks with Hybrid Conversion and Spike Timing Dependent Backpropagation

1 code implementation • ICLR 2020 • Nitin Rathi, Gopalakrishnan Srinivasan, Priyadarshini Panda, Kaushik Roy

We propose a hybrid training methodology: 1) take a converted SNN and use its weights and thresholds as an initialization step for spike-based backpropagation, and 2) perform incremental spike-timing dependent backpropagation (STDB) on this carefully initialized network to obtain an SNN that converges within a few epochs and requires fewer time steps for input processing.

Image Classification
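
As a rough sketch of the two-stage flow described above (conversion-based initialization followed by spike-based backpropagation), the following hedged PyTorch-style loop assumes `snn` is a spiking copy of `ann` with matching parameter names and a surrogate-gradient spike function; all helper names and hyperparameters are illustrative, not the authors' API.

```python
import torch
import torch.nn as nn

def hybrid_train(ann, snn, train_loader, epochs=20, lr=1e-4):
    """Two-stage hybrid training sketch (illustrative, not the authors' code)."""
    # Stage 1: initialize the SNN with the converted ANN's weights; the layer
    # thresholds would likewise be taken from the conversion step (omitted here).
    snn.load_state_dict(ann.state_dict(), strict=False)

    # Stage 2: spike-timing dependent backpropagation (STDB) fine-tuning, so the
    # carefully initialized network converges in a few epochs with fewer timesteps.
    optimizer = torch.optim.SGD(snn.parameters(), lr=lr)
    criterion = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for images, labels in train_loader:
            optimizer.zero_grad()
            loss = criterion(snn(images), labels)  # output accumulated over timesteps
            loss.backward()                        # gradients flow through a surrogate
            optimizer.step()                       # spike derivative inside `snn`
    return snn
```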

Inherent Adversarial Robustness of Deep Spiking Neural Networks: Effects of Discrete Input Encoding and Non-Linear Activations

1 code implementation • ECCV 2020 • Saima Sharmin, Nitin Rathi, Priyadarshini Panda, Kaushik Roy

Our results suggest that SNNs trained with LIF neurons and a smaller number of timesteps are more robust than those with IF (Integrate-Fire) neurons and a larger number of timesteps.

Adversarial Robustness • Attribute
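
For context, the LIF/IF distinction drawn above comes down to whether the membrane potential decays between timesteps; a one-line sketch of each discrete-time update (illustrative notation, not the authors' code):

```python
def if_update(mem, input_current):
    # Integrate-and-Fire: potential accumulates with no decay.
    return mem + input_current

def lif_update(mem, input_current, leak=0.9):
    # Leaky Integrate-and-Fire: leak < 1 attenuates older inputs each timestep.
    return leak * mem + input_current
```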

STDP Based Pruning of Connections and Weight Quantization in Spiking Neural Networks for Energy Efficient Recognition

no code implementations • 12 Oct 2017 • Nitin Rathi, Priyadarshini Panda, Kaushik Roy

We present a sparse SNN topology where non-critical connections are pruned to reduce the network size, and the remaining critical synapses are weight-quantized to accommodate the limited conductance levels.

General Classification • Quantization
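
A hedged NumPy sketch of the prune-then-quantize idea in the abstract above: the magnitude criterion here stands in for the paper's STDP-derived significance measure, and the pruning fraction and level count are placeholder values, not the authors' settings.

```python
import numpy as np

def prune_and_quantize(weights, prune_fraction=0.7, levels=8):
    """Illustrative sketch, not the authors' code: remove non-critical synapses,
    then quantize the surviving weights to a few conductance levels."""
    w = weights.astype(float)

    # Prune: zero out the fraction of connections with the smallest magnitude
    # (a stand-in for the significance learned via STDP in the paper).
    cutoff = np.quantile(np.abs(w), prune_fraction)
    w[np.abs(w) < cutoff] = 0.0

    # Quantize: snap each surviving weight to the nearest of `levels` evenly
    # spaced values, mimicking the limited conductance levels of synaptic devices.
    mask = w != 0.0
    if mask.any():
        nz = w[mask]
        grid = np.linspace(-np.abs(nz).max(), np.abs(nz).max(), levels)
        w[mask] = grid[np.abs(nz[:, None] - grid[None, :]).argmin(axis=1)]
    return w

# Example usage on a random weight matrix:
# sparse_quantized = prune_and_quantize(np.random.randn(64, 128))
```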
