no code implementations • 6 May 2024 • Qinyu Chen, Kwantae Kim, Chang Gao, Sheng Zhou, Taekwang Jang, Tobi Delbruck, Shih-Chii Liu
This paper introduces, to the best of the authors' knowledge, the first fine-grained temporal sparsity-aware keyword spotting (KWS) IC leveraging temporal similarities between neighboring feature vectors extracted from input frames and network hidden states, eliminating unnecessary operations and memory accesses.
no code implementations • 14 Dec 2023 • Xi Chen, Chang Gao, Zuowen Wang, Longbiao Cheng, Sheng Zhou, Shih-Chii Liu, Tobi Delbruck
Implementing online training of RNNs on the edge calls for optimized algorithms for an efficient deployment on hardware.
no code implementations • 10 Apr 2023 • Rui Graça, Brian Mcreynolds, Tobi Delbruck
The operation of the DVS event camera is controlled by the user through adjusting different bias parameters.
no code implementations • 8 Apr 2023 • Rui Graca, Brian Mcreynolds, Tobi Delbruck
Under dim lighting conditions, the output of Dynamic Vision Sensor (DVS) event cameras is strongly affected by noise.
no code implementations • 7 Apr 2023 • Brian Mcreynolds, Rui Graca, Tobi Delbruck
Dynamic Vision Sensors (DVS) record "events" corresponding to pixel-level brightness changes, resulting in data-efficient representation of a dynamic visual scene.
1 code implementation • CVPR 2023 • Haiyang Mei, Zuowen Wang, Xin Yang, Xiaopeng Wei, Tobi Delbruck
The polarization event camera PDAVIS is a novel bio-inspired neuromorphic vision sensor that reports both conventional polarization frames and asynchronous, continuous per-pixel polarization brightness changes (polarization events) with fast temporal resolution and large dynamic range.
no code implementations • 26 Feb 2022 • Tobi Delbruck, Chenghan Li, Rui Graca, Brian Mcreynolds
Standard dynamic vision sensor (DVS) event cameras output a stream of spatially-independent log-intensity brightness change events so they cannot suppress spatial redundancy.
no code implementations • 2 Dec 2021 • Germain Haessig, Damien Joubert, Justin Haque, Yingkai Chen, Moritz Milde, Tobi Delbruck, Viktor Gruev
The stomatopod (mantis shrimp) visual system has recently provided a blueprint for the design of paradigm-shifting polarization and multispectral imaging sensors, enabling solutions to challenging medical and remote sensing problems.
no code implementations • 17 Sep 2021 • Rui Graca, Tobi Delbruck
While measurements of the logarithmic photoreceptor predict that the photoreceptor is approximately a first-order system with RMS noise voltage independent of the photocurrent, DVS output shows higher noise event rates at low light intensity.
no code implementations • 4 Aug 2021 • Chang Gao, Tobi Delbruck, Shih-Chii Liu
The pruned networks running on Spartus hardware achieve weight sparsity levels of up to 96% and 94% with negligible accuracy loss on the TIMIT and the Librispeech datasets.
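The sparsity levels quoted above come from weight pruning. As a minimal illustrative sketch (generic unstructured magnitude pruning, not the hardware-friendly structured scheme Spartus actually uses), zeroing the smallest-magnitude fraction of a weight matrix looks like this:

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude `sparsity` fraction of weights.

    Generic unstructured magnitude pruning for illustration only;
    the Spartus accelerator relies on a structured, balanced scheme.
    """
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    # threshold at the k-th smallest absolute value
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask

rng = np.random.default_rng(0)
w = rng.standard_normal((128, 128))
pruned = magnitude_prune(w, 0.96)
# roughly 96% of the entries are now exactly zero
```

In practice the pruned network is retrained so the surviving weights compensate for the removed ones, which is why accuracy loss can stay negligible at such high sparsity.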
no code implementations • 2 May 2021 • Tobi Delbruck, Rui Graca, Marcin Paluch
Dynamic vision sensor event cameras produce a variable data rate stream of brightness change events.
3 code implementations • 13 Jun 2020 • Yuhuang Hu, Shih-Chii Liu, Tobi Delbruck
The first experiment is object recognition with the N-Caltech 101 dataset.
1 code implementation • 18 May 2020 • Yuhuang Hu, Jonathan Binas, Daniel Neil, Shih-Chii Liu, Tobi Delbruck
The dataset was captured with a DAVIS camera that concurrently streams both dynamic vision sensor (DVS) brightness change events and active pixel sensor (APS) intensity frames.
no code implementations • 29 Mar 2020 • Tobi Delbruck, Shih-Chii Liu
The energy consumed by running large deep neural networks (DNNs) on hardware accelerators is dominated by the need for lots of fast memory to store both states and weights.
no code implementations • ECCV 2020 • Yuhuang Hu, Tobi Delbruck, Shih-Chii Liu
This paper proposes a Network Grafting Algorithm (NGA), where a new front end network driven by unconventional visual inputs replaces the front end network of a pretrained deep network that processes intensity frames.
Ranked #4 on Event-based Object Segmentation on RGBE-SEG
no code implementations • 8 Feb 2020 • Chang Gao, Rachel Gehlhar, Aaron D. Ames, Shih-Chii Liu, Tobi Delbruck
Lower leg prostheses could improve the quality of life of amputees by increasing comfort and reducing the energy required to locomote, but current control methods are limited in modulating behavior based on the wearer's experience.
no code implementations • 22 Dec 2019 • Chang Gao, Antonio Rios-Navarro, Xi Chen, Tobi Delbruck, Shih-Chii Liu
This paper presents a Gated Recurrent Unit (GRU) based recurrent neural network (RNN) accelerator called EdgeDRNN designed for portable edge computing.
no code implementations • 17 May 2019 • Alejandro Linares-Barranco, Antonio Rios-Navarro, Ricardo Tapiador-Morales, Tobi Delbruck
Dynamic vision sensors (DVS), which emulate the behavior of a biological retina, are of growing importance for improving these applications: the visual information is represented as a continuous stream of spikes, and the frames to be processed by the CNN are constructed by collecting a fixed number of these spikes (called events).
no code implementations • 6 May 2019 • Bodo Rückauer, Nicolas Känzig, Shih-Chii Liu, Tobi Delbruck, Yulia Sandamirskaya
Mobile and embedded applications require neural-network-based pattern recognition systems to perform well under a tight computational budget.
1 code implementation • 17 Apr 2019 • Guillermo Gallego, Tobi Delbruck, Garrick Orchard, Chiara Bartolozzi, Brian Taba, Andrea Censi, Stefan Leutenegger, Andrew Davison, Joerg Conradt, Kostas Daniilidis, Davide Scaramuzza
Event cameras offer attractive properties compared to traditional cameras: high temporal resolution (on the order of microseconds), very high dynamic range (140 dB vs. 60 dB), low power consumption, and high pixel bandwidth (on the order of kHz), resulting in reduced motion blur.
no code implementations • 18 Mar 2019 • Anton Mitrokhin, Chengxi Ye, Cornelia Fermuller, Yiannis Aloimonos, Tobi Delbruck
In addition to camera egomotion and a dense depth map, the network estimates pixel-wise independently moving object segmentation and computes per-object 3D translational velocities for moving objects.
no code implementations • 2 Jul 2018 • Diederik Paul Moeys, Daniel Neil, Federico Corradi, Emmett Kerr, Philip Vance, Gautham Das, Sonya A. Coleman, Thomas M. McGinnity, Dermot Kerr, Tobi Delbruck
Conventional vision CNNs are driven by camera frames at constant sample rate, thus achieving a fixed latency and power consumption tradeoff.
no code implementations • 10 May 2018 • Min Liu, Tobi Delbruck
The precise event timing, sparse output, and wide dynamic range of the events are well suited for optical flow, but conventional optical flow (OF) algorithms are not well matched to the event stream data.
no code implementations • 13 Nov 2017 • Moritz B. Milde, Daniel Neil, Alessandro Aimar, Tobi Delbruck, Giacomo Indiveri
Using the ADaPTION tools, we quantized several CNNs including VGG16 down to 16-bit weights and activations with only 0.8% drop in Top-1 accuracy.
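Reducing weights and activations to 16 bits typically means mapping floats onto a signed fixed-point grid. A minimal sketch of such a quantizer (the 8-bit integer / 8-bit fractional split below is an assumption for illustration, not the split ADaPTION uses) is:

```python
import numpy as np

def quantize_fixed_point(x, total_bits=16, frac_bits=8):
    """Quantize floats to signed fixed-point values with `frac_bits`
    fractional bits, using round-to-nearest and saturation."""
    scale = 2 ** frac_bits
    qmin = -(2 ** (total_bits - 1))      # most negative representable code
    qmax = 2 ** (total_bits - 1) - 1     # most positive representable code
    codes = np.clip(np.round(x * scale), qmin, qmax)
    return codes / scale                 # back to float for simulation

w = np.array([0.1234, -1.5, 3.9999])
q = quantize_fixed_point(w)
# values become 0.125, -1.5, and 4.0 on the 1/256 grid
```

Quantization-aware tools like ADaPTION additionally fine-tune the network with quantization in the loop, which is how the accuracy drop stays below 1%.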
1 code implementation • 4 Nov 2017 • Jonathan Binas, Daniel Neil, Shih-Chii Liu, Tobi Delbruck
Event cameras, such as dynamic vision sensors (DVS), and dynamic and active-pixel vision sensors (DAVIS) can supplement other autonomous driving sensors by providing a concurrent stream of standard active pixel sensor (APS) images and DVS temporal contrast events.
no code implementations • CVPR 2017 • Arnon Amir, Brian Taba, David Berg, Timothy Melano, Jeffrey McKinstry, Carmelo Di Nolfo, Tapan Nayak, Alexander Andreopoulos, Guillaume Garreau, Marcela Mendoza, Jeff Kusnitz, Michael Debole, Steve Esser, Tobi Delbruck, Myron Flickner, Dharmendra Modha
We present the first gesture recognition system implemented end-to-end on event-based hardware, using a TrueNorth neurosynaptic processor to recognize hand gestures in real-time at low power from events streamed live by a Dynamic Vision Sensor (DVS).
no code implementations • 16 Jun 2017 • Min Liu, Tobi Delbruck
Rapid and low power computation of optical flow (OF) is potentially useful in robotics.
no code implementations • 5 Jun 2017 • Alessandro Aimar, Hesham Mostafa, Enrico Calabrese, Antonio Rios-Navarro, Ricardo Tapiador-Morales, Iulia-Alexandra Lungu, Moritz B. Milde, Federico Corradi, Alejandro Linares-Barranco, Shih-Chii Liu, Tobi Delbruck
By exploiting sparsity, NullHop achieves an efficiency of 368%, maintains over 98% utilization of the MAC units, and achieves a power efficiency of over 3 TOp/s/W in a core area of 6.3 mm$^2$.
no code implementations • ICML 2017 • Daniel Neil, Jun Haeng Lee, Tobi Delbruck, Shih-Chii Liu
Similarly, on the large Wall Street Journal speech recognition benchmark even existing networks can be greatly accelerated as delta networks, and a 5.7x improvement with negligible loss of accuracy can be obtained through training.
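The core idea of a delta network is to recompute only what changed: an input component triggers computation only when it differs from its last transmitted value by more than a threshold. A minimal matrix-vector sketch of this update rule (simplified; real delta-RNNs apply it to every gate) is:

```python
import numpy as np

def delta_matvec(W, x, state):
    """One delta-network step: only input components whose change since
    their last transmitted value exceeds `theta` trigger computation."""
    delta = x - state["x_ref"]
    active = np.abs(delta) > state["theta"]
    # accumulate contributions only from columns touched by large deltas
    state["y"] = state["y"] + W[:, active] @ delta[active]
    # remember the transmitted values for the next timestep
    state["x_ref"][active] = x[active]
    return state["y"]

rng = np.random.default_rng(1)
W = rng.standard_normal((4, 8))
state = {"x_ref": np.zeros(8), "y": np.zeros(4), "theta": 0.0}
x = rng.standard_normal(8)
y = delta_matvec(W, x, state)
# with theta = 0 the result equals the dense product W @ x
```

With theta > 0 the result becomes approximate, trading a small accuracy loss for skipping most multiply-accumulates on slowly-varying inputs such as speech features.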
2 code implementations • 26 Oct 2016 • Elias Mueggler, Henri Rebecq, Guillermo Gallego, Tobi Delbruck, Davide Scaramuzza
New vision sensors, such as the Dynamic and Active-pixel Vision sensor (DAVIS), incorporate a conventional global-shutter camera and an event-based sensor in the same pixel array.
no code implementations • 31 Aug 2016 • Jun Haeng Lee, Tobi Delbruck, Michael Pfeiffer
Deep spiking neural networks (SNNs) hold great potential for improving the latency and energy efficiency of deep neural networks through event-based computation.
1 code implementation • 12 Jul 2016 • Guillermo Gallego, Jon E. A. Lund, Elias Mueggler, Henri Rebecq, Tobi Delbruck, Davide Scaramuzza
Event cameras are bio-inspired vision sensors that output pixel-level brightness changes instead of standard intensity frames.
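The standard model of this behavior is that each pixel emits an ON or OFF event whenever its log intensity moves a fixed contrast step away from the value at its last event. A minimal single-pixel sketch of this event-generation model (an idealized simulation, ignoring noise and refractory effects in real sensors) is:

```python
import math

def log_intensity_events(samples, contrast=0.2):
    """Generate DVS-style events from a single pixel's intensity samples:
    emit (timestep, polarity) each time log intensity crosses a
    `contrast`-sized step away from the last reference level."""
    events = []
    log_ref = math.log(samples[0])
    for t, intensity in enumerate(samples[1:], start=1):
        diff = math.log(intensity) - log_ref
        # a large change can cross several contrast steps at once
        while abs(diff) >= contrast:
            polarity = 1 if diff > 0 else -1   # +1 = ON, -1 = OFF
            events.append((t, polarity))
            log_ref += polarity * contrast
            diff = math.log(intensity) - log_ref
    return events

# a brightening pixel emits ON events; no change emits nothing
evs = log_intensity_events([1.0, 1.0, math.exp(0.5)], contrast=0.2)
```

Because the threshold acts on log intensity, the same relative contrast change triggers an event regardless of absolute brightness, which is the source of the sensors' wide dynamic range.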
no code implementations • 30 Jun 2016 • Diederik Paul Moeys, Federico Corradi, Emmett Kerr, Philip Vance, Gautham Das, Daniel Neil, Dermot Kerr, Tobi Delbruck
The CNN is trained and run on data from a Dynamic and Active Pixel Sensor (DAVIS) mounted on a Summit XL robot (the predator), which follows another one (the prey).