no code implementations • 12 Jul 2021 • Mathieu Xhonneux, Jérôme Louveaux, David Bol
In this work, we explore a software-defined radio architecture by demonstrating a LoRa transceiver running on a custom ULP MCU, codenamed SleepRider, with an ARM Cortex-M4 CPU.
no code implementations • 2 Jun 2021 • Charlotte Frenkel, David Bol, Giacomo Indiveri
In this paper, we provide a comprehensive overview of the field, highlighting the different levels of granularity at which this paradigm shift is realized and comparing design approaches that focus on replicating natural intelligence (bottom-up) versus those that aim at solving practical artificial intelligence applications (top-down).
no code implementations • 13 May 2020 • Charlotte Frenkel, Jean-Didier Legat, David Bol
With an energy per classification of 313 nJ at 0.6 V and a 0.32-mm$^2$ area for accuracies of 95.3% (on-chip training) and 97.5% (off-chip training) on MNIST, we demonstrate that SPOON reaches the efficiency of conventional machine learning accelerators while embedding on-chip learning and being compatible with event-based sensors, a point that we further emphasize with N-MNIST benchmarking.
no code implementations • 24 Dec 2019 • Mathieu Xhonneux, Orion Afisiadis, David Bol, Jérôme Louveaux
Using this model, we propose a new estimator for the sampling time offset.
1 code implementation • 3 Sep 2019 • Charlotte Frenkel, Martin Lefebvre, David Bol
While the backpropagation of error algorithm enables deep neural network training, it implies (i) bidirectional synaptic weight transport and (ii) update locking until the forward and backward passes are completed.
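The weight-transport constraint can be illustrated by contrast with direct feedback alignment (DFA), where the error is projected to hidden layers through a fixed random matrix instead of the transposed forward weights. The following is a toy numpy sketch of that idea, not the training rule proposed in the paper; all names and hyperparameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data (illustrative)
X = rng.normal(size=(64, 10))
Y = np.sin(X.sum(axis=1, keepdims=True))

W1 = rng.normal(scale=0.1, size=(10, 32))  # forward weights, layer 1
W2 = rng.normal(scale=0.1, size=(32, 1))   # forward weights, layer 2
B1 = rng.normal(scale=0.1, size=(1, 32))   # fixed random feedback matrix

def mse():
    return float(np.mean((np.tanh(X @ W1) @ W2 - Y) ** 2))

mse_before = mse()
lr = 0.05
for _ in range(200):
    h = np.tanh(X @ W1)           # forward pass
    y = h @ W2
    e = y - Y                     # output error
    # DFA: the hidden update uses the fixed matrix B1, not W2.T,
    # so no bidirectional weight transport is required.
    dh = (e @ B1) * (1 - h ** 2)
    W2 -= lr * h.T @ e / len(X)
    W1 -= lr * X.T @ dh / len(X)

mse_after = mse()
```

Note that this sketch only removes the weight-transport requirement; the hidden update still waits for the output error, so it does not address update locking by itself.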
no code implementations • 17 Apr 2019 • Charlotte Frenkel, Jean-Didier Legat, David Bol
Recent trends in the field of neural network accelerators investigate weight quantization as a means to increase the resource- and power-efficiency of hardware devices.
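As a generic illustration of weight quantization (a simple symmetric uniform post-training scheme, not necessarily the one investigated in the paper), a float weight tensor can be mapped onto an n-bit integer grid and dequantized:

```python
import numpy as np

def quantize_weights(w, n_bits=4):
    """Symmetric uniform quantization of a weight tensor to n_bits.

    Returns the dequantized weights and the scale factor. Illustrative
    sketch only; function name and scheme are assumptions.
    """
    qmax = 2 ** (n_bits - 1) - 1
    scale = np.max(np.abs(w)) / qmax          # map max |w| to qmax
    q = np.clip(np.round(w / scale), -qmax - 1, qmax)  # integer codes
    return q * scale, scale

w = np.array([-0.9, -0.31, 0.02, 0.5, 0.88])
wq, s = quantize_weights(w, n_bits=4)
max_err = float(np.max(np.abs(w - wq)))       # bounded by scale / 2
```

Reducing `n_bits` coarsens the grid, trading accuracy for smaller, cheaper arithmetic and storage, which is the resource/power trade-off the abstract refers to.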