1 code implementation • NeurIPS 2023 • Benjamin Scellier, Maxence Ernoult, Jack Kendall, Suhas Kumar
Additionally, we establish new SOTA results with DCHNs on all five datasets, in both performance and speed.
1 code implementation • 31 Jan 2022 • Maxence Ernoult, Fabrice Normandin, Abhinav Moudgil, Sean Spinney, Eugene Belilovsky, Irina Rish, Blake Richards, Yoshua Bengio
As such, it is important to explore learning algorithms that come with strong theoretical guarantees and can match the performance of backpropagation (BP) on complex tasks.
1 code implementation • CVPR Workshop Binary Vision 2021 • Jérémie Laydevant, Maxence Ernoult, Damien Querlioz, Julie Grollier
We first train systems with binary weights and full-precision activations, achieving accuracy equivalent to that of full-precision models trained by standard EP on MNIST, and losing only 1.9% accuracy on CIFAR-10 with the same architecture.
2 code implementations • 19 Jan 2021 • Axel Laborieux, Maxence Ernoult, Tifenn Hirtzlin, Damien Querlioz
Unlike the brain, artificial neural networks, including state-of-the-art deep neural networks for computer vision, are subject to "catastrophic forgetting": they rapidly forget the previous task when trained on a new one.
no code implementations • 14 Jan 2021 • Axel Laborieux, Maxence Ernoult, Benjamin Scellier, Yoshua Bengio, Julie Grollier, Damien Querlioz
Equilibrium Propagation (EP) is a biologically-inspired counterpart of Backpropagation Through Time (BPTT) which, owing to its strong theoretical guarantees and the locality in space of its learning rule, fosters the design of energy-efficient hardware dedicated to learning.
no code implementations • 15 Oct 2020 • Erwann Martin, Maxence Ernoult, Jérémie Laydevant, Shuai Li, Damien Querlioz, Teodora Petrisor, Julie Grollier
Finding spike-based learning algorithms that can be implemented within the local constraints of neuromorphic systems, while achieving high accuracy, remains a formidable challenge.
1 code implementation • 6 Jun 2020 • Axel Laborieux, Maxence Ernoult, Benjamin Scellier, Yoshua Bengio, Julie Grollier, Damien Querlioz
In this work, we show that a bias in the gradient estimate of EP, inherent in the use of finite nudging, is responsible for this phenomenon and that cancelling it allows training deep ConvNets by EP.
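The bias described above can be illustrated with a toy finite-difference analogy (this is an illustrative sketch, not the paper's code): the EP gradient estimate with a finite nudging strength beta behaves like a finite difference of the loss, so a one-sided estimate carries an O(beta) bias, while averaging two nudges of opposite sign (+beta and -beta) cancels that bias to O(beta^2).

```python
# Toy sketch (hypothetical function and names): a one-sided finite-nudging
# estimate has O(beta) bias; a symmetric (two-sided) estimate cancels it.
def f(theta):
    return theta ** 3  # stand-in for the loss as a function of one parameter

theta, beta = 1.0, 0.1
exact = 3 * theta ** 2  # true derivative at theta = 1.0, i.e. 3.0

one_sided = (f(theta + beta) - f(theta)) / beta               # biased, O(beta) error
symmetric = (f(theta + beta) - f(theta - beta)) / (2 * beta)  # bias cancelled, O(beta^2)

print(abs(one_sided - exact))   # ~0.31
print(abs(symmetric - exact))   # ~0.01
```

The same cancellation, applied to EP's nudged phases rather than to a scalar function, is what makes the deep-ConvNet training described above feasible.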
no code implementations • 29 Apr 2020 • Maxence Ernoult, Julie Grollier, Damien Querlioz, Yoshua Bengio, Benjamin Scellier
However, in existing implementations of EP, the learning rule is not local in time: the weight update is performed after the dynamics of the second phase have converged and requires information of the first phase that is no longer available physically.
no code implementations • 29 Apr 2020 • Maxence Ernoult, Julie Grollier, Damien Querlioz, Yoshua Bengio, Benjamin Scellier
On the other hand, the biological plausibility of EP is limited by the fact that its learning rule is not local in time: the synapse update is performed after the dynamics of the second phase have converged and requires information of the first phase that is no longer available physically.
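The contrast between the standard update and a local-in-time one can be sketched as follows (illustrative names and dynamics, not the paper's implementation): in standard EP, the weight update compares a quantity evaluated at the converged states of the two phases, so the first-phase value must be stored; a continual variant instead moves the weight at every step of the second phase by the change in that quantity between consecutive steps, which telescopes to the same total update without storing the first phase.

```python
import numpy as np

# Hypothetical toy sketch: dphi_dw stands in for the local gradient of the
# network's energy with respect to one weight, and s_trajectory for the
# sequence of states visited during the second (nudged) phase.
rng = np.random.default_rng(0)
s_trajectory = rng.normal(size=10)   # second-phase states s_0 .. s_9
dphi_dw = lambda s: s ** 2           # stand-in local quantity

beta, lr = 0.5, 0.1

# Standard EP: one update after convergence; needs the first-phase value stored.
dw_standard = (lr / beta) * (dphi_dw(s_trajectory[-1]) - dphi_dw(s_trajectory[0]))

# Continual (local-in-time) variant: per-step increments during the second phase.
dw_continual = sum(
    (lr / beta) * (dphi_dw(s_next) - dphi_dw(s_prev))
    for s_prev, s_next in zip(s_trajectory[:-1], s_trajectory[1:])
)
# The per-step increments telescope to the standard update.
```

The telescoping sum is the key design point: locality in time is obtained without changing what the total update computes.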
1 code implementation • 7 Mar 2020 • Axel Laborieux, Maxence Ernoult, Tifenn Hirtzlin, Damien Querlioz
In this work, we interpret the hidden weights used by binarized neural networks, a low-precision version of deep neural networks, as metaplastic variables, and modify their training technique to alleviate forgetting.
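A minimal sketch of this interpretation (illustrative, not the authors' implementation; the function name and the modulation factor are assumptions): the binarized network uses sign(w_hidden) in the forward pass, and updates that would push a hidden weight back toward zero, i.e. toward flipping its sign, are attenuated by a factor that shrinks as |w_hidden| grows, so strongly consolidated weights resist being overwritten by a new task.

```python
import numpy as np

def metaplastic_update(w_hidden, grad, lr=0.1, m=1.3):
    """Toy metaplastic update rule for a binarized layer (hypothetical sketch)."""
    binary = np.sign(w_hidden)            # weight actually used in the forward pass
    step = -lr * grad
    toward_zero = step * w_hidden < 0     # update opposes the hidden weight's sign
    # Attenuate sign-flipping updates more strongly for large |w_hidden|.
    attenuation = np.where(
        toward_zero, 1.0 - np.tanh(m * np.abs(w_hidden)) ** 2, 1.0
    )
    return w_hidden + attenuation * step, binary

w = np.array([2.0, -2.0, 0.1])
grad = np.array([1.0, -1.0, 1.0])         # all three updates point toward zero
w_new, b = metaplastic_update(w, grad)
# The large-magnitude hidden weights barely move; the small one moves freely.
```

The design choice this illustrates: the magnitude of the hidden weight, which is invisible at inference time, encodes how consolidated each synapse is.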
2 code implementations • NeurIPS 2019 • Maxence Ernoult, Julie Grollier, Damien Querlioz, Yoshua Bengio, Benjamin Scellier
Equilibrium Propagation (EP) is a biologically inspired learning algorithm for convergent recurrent neural networks, i.e., RNNs that are fed a static input x and settle to a steady state.
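The "convergent RNN" setting can be sketched in a few lines (illustrative names and dynamics; the symmetric-weight tanh network below is an assumption in the spirit of Hopfield-style EP models): the input x is held fixed while the hidden state is iterated until it stops changing.

```python
import numpy as np

# Minimal sketch of a convergent RNN relaxing to a steady state s*.
rng = np.random.default_rng(0)
n_in, n_hid = 4, 8
W = rng.normal(scale=0.1, size=(n_hid, n_hid))
W = 0.5 * (W + W.T)                       # symmetric recurrent weights
U = rng.normal(scale=0.3, size=(n_hid, n_in))
x = rng.normal(size=n_in)                 # static input, held fixed throughout

s = np.zeros(n_hid)
for _ in range(500):                      # free phase: iterate the dynamics
    s_next = np.tanh(W @ s + U @ x)
    if np.max(np.abs(s_next - s)) < 1e-8:
        break                             # state has settled
    s = s_next
# s now approximates the steady state around which EP computes its updates
```

EP's two-phase learning rule then compares this free steady state with a second steady state obtained while weakly nudging the output toward the target.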