no code implementations • 21 Feb 2024 • Zihao Chen, Johannes Leugering, Gert Cauwenberghs, Shantanu Chakrabartty
In this paper, we derive new theoretical lower bounds on energy dissipation when training AI systems using different learning-in-memory (LIM) approaches.
no code implementations • 27 Apr 2023 • Madhuvanthi Srivatsav R, Shantanu Chakrabartty, Chetan Singh Thakur
Address-Event-Representation (AER) is a spike-routing protocol that allows the scaling of neuromorphic and spiking neural network (SNN) architectures to a size that is comparable to that of digital neural network architectures.
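To make the AER idea concrete, here is a minimal, generic sketch (not the paper's implementation) of how per-neuron spike trains can be flattened into a single time-ordered stream of (address, timestamp) events, which is the essence of the protocol:

```python
from typing import NamedTuple

class AddressEvent(NamedTuple):
    """A single AER event: which neuron fired, and when."""
    address: int       # index of the spiking neuron
    timestamp_us: int  # spike time in microseconds

def encode_spikes(spike_times):
    """Flatten per-neuron spike-time lists into one time-ordered AER stream."""
    events = [AddressEvent(addr, t)
              for addr, times in enumerate(spike_times)
              for t in times]
    return sorted(events, key=lambda e: e.timestamp_us)

# Three neurons; neuron 1 fires twice.
stream = encode_spikes([[5], [2, 9], [7]])
```

Because only the addresses of neurons that actually spike are transmitted, the shared bus bandwidth scales with activity rather than with network size.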
no code implementations • 24 Apr 2023 • Abhishek Ramdas Nair, Pallab Kumar Nath, Shantanu Chakrabartty, Chetan Singh Thakur
Wildlife conservation using continuous monitoring of environmental factors, like biomedical classification, generates a vast amount of sensor data and is therefore challenging in remote settings where bandwidth is limited.
no code implementations • 18 Apr 2023 • Zhili Xiao, Shantanu Chakrabartty
Precise estimation of cross-correlation or similarity between two random variables lies at the heart of signal detection, hyperdimensional computing, associative memories, and neural networks.
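As a baseline for what such estimators compute, here is a generic normalized (Pearson-style) cross-correlation in plain Python; this is the textbook definition, not the hardware-friendly estimator the paper proposes:

```python
import math

def cross_correlation(x, y):
    """Normalized cross-correlation of two equal-length sequences (range [-1, 1])."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) *
                    sum((b - my) ** 2 for b in y))
    return num / den

r = cross_correlation([1, 2, 3, 4], [2, 4, 6, 8])  # linearly related inputs
```

The multiplications and square root in this definition are exactly the operations that energy-constrained hardware implementations seek to avoid or approximate.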
no code implementations • 27 Jun 2022 • Mustafizur Rahman, Subhankar Bose, Shantanu Chakrabartty
Synaptic memory consolidation has been heralded as one of the key mechanisms for supporting continual learning in neuromorphic Artificial Intelligence (AI) systems.
no code implementations • 11 May 2022 • Pratik Kumar, Ankita Nandi, Shantanu Chakrabartty, Chetan Singh Thakur
Analog computing is attractive compared to digital computing due to its potential for achieving higher computational density and higher energy efficiency.
no code implementations • 10 Feb 2022 • Pratik Kumar, Ankita Nandi, Shantanu Chakrabartty, Chetan Singh Thakur
In this paper, we demonstrate the implementation of bias-scalable approximate analog computing circuits using the generalization of the margin-propagation principle called shape-based analog computing (S-AC).
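The underlying margin-propagation (MP) principle can be sketched in software: given inputs x_i and a hyperparameter gamma, MP computes the margin z satisfying sum_i max(0, x_i - z) = gamma, using only additions, subtractions, and comparisons. The bisection solver below is an illustrative stand-in for what the analog circuits compute, not the S-AC circuit itself:

```python
def margin_propagation(inputs, gamma, iters=60):
    """Solve sum_i max(0, x_i - z) = gamma for z by bisection.

    The residual is monotonically decreasing in z, so bisection converges;
    MP acts as a piecewise-linear approximation of log-sum-exp.
    """
    lo, hi = min(inputs) - gamma, max(inputs)
    for _ in range(iters):
        z = 0.5 * (lo + hi)
        if sum(max(0.0, x - z) for x in inputs) > gamma:
            lo = z  # residual too large: z must increase
        else:
            hi = z
    return 0.5 * (lo + hi)

z = margin_propagation([1.0, 2.0, 3.0], 1.0)
```

For the example above, only the largest input exceeds the margin, so z = 2.0 makes the single active term (3.0 - z) equal gamma = 1.0.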
no code implementations • 11 Sep 2021 • Abhishek Ramdas Nair, Shantanu Chakrabartty, Chetan Singh Thakur
We present a novel in-filter computing framework that can be used for designing ultra-light acoustic classifiers for use in smart Internet-of-Things (IoT) devices.
no code implementations • 21 Aug 2021 • Oindrila Chatterjee, Shantanu Chakrabartty
Sonification, or encoding information in meaningful audio signatures, has several advantages in augmenting or replacing traditional visualization methods for human-in-the-loop decision-making.
no code implementations • 3 Jun 2021 • Abhishek Ramdas Nair, Pallab Kumar Nath, Shantanu Chakrabartty, Chetan Singh Thakur
We present a novel framework for designing multiplierless kernel machines that can be used on resource-constrained platforms like intelligent edge devices.
no code implementations • 13 Apr 2021 • Darshit Mehta, Kenji Aono, Shantanu Chakrabartty
In this paper, we present a synaptic array that uses dynamical states to implement an analog memory for energy-efficient training of machine learning (ML) systems.
no code implementations • 5 Oct 2019 • Nazreen P. M., Shantanu Chakrabartty, Chetan Singh Thakur
At the fundamental level, neural network and machine learning workloads rely extensively on matrix-vector multiplication (MVM), and hardware compilers exploit the inherent parallelism of MVM operations to achieve hardware acceleration on GPUs and FPGAs.
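The parallelism being exploited is easy to see in a bare-bones MVM: every output element is an independent dot product, so all rows can be computed concurrently. A minimal illustration:

```python
def matvec(A, x):
    """Matrix-vector multiply y = A @ x.

    Each output row is an independent dot product; accelerators
    compute these rows (and the products within them) in parallel.
    """
    return [sum(a * b for a, b in zip(row, x)) for row in A]

y = matvec([[1, 2],
            [3, 4]], [5, 6])
```

Here y = [1*5 + 2*6, 3*5 + 4*6] = [17, 39], with no data dependence between the two rows.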
no code implementations • 15 Aug 2019 • Oindrila Chatterjee, Shantanu Chakrabartty
Based on this approach, this paper introduces three novel concepts: (a) A learning framework where the network's active-power dissipation is used as a regularization for a learning objective function that is subjected to zero total reactive-power constraint; (b) A dynamical system based on complex-domain, continuous-time growth transforms which optimizes the learning objective function and drives the network towards electrical resonance under steady-state operation; and (c) An annealing procedure that controls the trade-off between active-power dissipation and the speed of convergence.
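The growth-transform machinery underlying (b) can be illustrated with the classical real-valued, discrete-time update (a Baum-Eagon-style multiplicative step on the probability simplex); the paper's complex-domain, continuous-time formulation generalizes this, so the sketch below is only a simplified stand-in:

```python
def growth_transform_step(p, grad):
    """One multiplicative growth-transform update on the simplex:
    p_i <- p_i * g_i / sum_j(p_j * g_j), with g_i > 0 the partial derivatives.
    The update preserves sum(p) = 1 and increases the objective."""
    weighted = [pi * gi for pi, gi in zip(p, grad)]
    s = sum(weighted)
    return [w / s for w in weighted]

# Maximize f(p) = 2*p0 + p1 over the simplex; the gradient is constant (2, 1).
p = [0.5, 0.5]
for _ in range(50):
    p = growth_transform_step(p, [2.0, 1.0])
```

Iterating drives all mass onto the coordinate with the largest gradient while the simplex constraint is maintained at every step, which is the normalization property the network-level formulation exploits.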
no code implementations • 5 Nov 2018 • Oindrila Chatterjee, Shantanu Chakrabartty
In this paper, we show that different types of evolutionary game dynamics are, in principle, special cases of a dynamical system model based on our previously reported framework of generalized growth transforms.
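One such special case is replicator dynamics, the canonical evolutionary game dynamic: strategy shares grow in proportion to how their fitness compares with the population average. A generic Euler-discretized sketch (illustrative only, not the paper's unified model):

```python
def replicator_step(x, payoff, dt=0.1):
    """One Euler step of replicator dynamics: dx_i/dt = x_i * (f_i - avg_f),
    with fitness f = payoff @ x. The step exactly preserves sum(x)."""
    f = [sum(a * xj for a, xj in zip(row, x)) for row in payoff]
    avg = sum(xi * fi for xi, fi in zip(x, f))
    return [xi + dt * xi * (fi - avg) for xi, fi in zip(x, f)]

# Coordination game: strategy 0 has the higher payoff against itself.
x = [0.6, 0.4]
for _ in range(200):
    x = replicator_step(x, [[2.0, 0.0],
                            [0.0, 1.0]])
```

Starting from a mixed population, the dynamics converge to the pure strategy with higher fitness; note the multiplicative x_i * (...) form, which is the structural link to growth transforms.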