Search Results for author: Stéphane Rivaud

Found 2 papers, 1 paper with code

PETRA: Parallel End-to-end Training with Reversible Architectures

no code implementations • 4 Jun 2024 • Stéphane Rivaud, Louis Fournier, Thomas Pumir, Eugene Belilovsky, Michael Eickenberg, Edouard Oyallon

Reversible architectures have been shown to perform on par with their non-reversible counterparts, and have been applied in deep learning for memory savings and generative modeling.
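For intuition, here is a minimal sketch of the memory-saving idea behind reversible architectures, using a RevNet-style additive coupling; the residual functions `F` and `G` are hypothetical toy choices, not the PETRA architecture itself:

```python
# Minimal sketch of a RevNet-style additive coupling block (assumption:
# toy residual functions F and G; this is not the PETRA architecture).
import jax.numpy as jnp

def rev_forward(x1, x2, F, G):
    """Forward pass: (x1, x2) -> (y1, y2)."""
    y1 = x1 + F(x2)
    y2 = x2 + G(y1)
    return y1, y2

def rev_inverse(y1, y2, F, G):
    """Exact inverse: inputs are recomputed from outputs, so
    activations need not be stored for the backward pass."""
    x2 = y2 - G(y1)
    x1 = y1 - F(x2)
    return x1, x2

# Quick round-trip check with toy residuals (hypothetical choices).
F = lambda v: jnp.tanh(v)
G = lambda v: 0.5 * v
x1, x2 = jnp.ones(4), jnp.arange(4.0)
y1, y2 = rev_forward(x1, x2, F, G)
r1, r2 = rev_inverse(y1, y2, F, G)
assert jnp.allclose(r1, x1) and jnp.allclose(r2, x2)
```

Because the inverse is exact, intermediate activations can be reconstructed on the fly during the backward pass instead of being cached, which is the memory saving the abstract refers to.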

Can Forward Gradient Match Backpropagation?

1 code implementation • 12 Jun 2023 • Louis Fournier, Stéphane Rivaud, Eugene Belilovsky, Michael Eickenberg, Edouard Oyallon

Forward Gradients - the idea of using directional derivatives computed in forward differentiation mode - have recently been shown to be usable for neural network training while avoiding problems generally associated with backpropagation gradient computation, such as locking and memorization requirements.

Task: Memorization
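For intuition, here is a minimal sketch of a forward gradient in JAX, assuming the standard estimator that projects the gradient onto a random direction with a single forward-mode Jacobian-vector product; the quadratic `loss` is a hypothetical stand-in for a network's training loss:

```python
# Minimal sketch of a forward gradient estimate (assumption: the
# random-direction estimator; the toy loss below is hypothetical).
import jax
import jax.numpy as jnp

def loss(w):
    # Toy quadratic loss standing in for a network's training loss.
    return jnp.sum((w - 1.0) ** 2)

key = jax.random.PRNGKey(0)
w = jnp.zeros(3)
v = jax.random.normal(key, w.shape)              # random tangent direction
# One forward-mode pass gives f(w) and the directional derivative grad(f)(w) . v,
# with no backward pass and no stored activations.
loss_val, dir_deriv = jax.jvp(loss, (w,), (v,))
forward_grad = dir_deriv * v                     # unbiased estimate of grad(f)(w)
print(loss_val, forward_grad)
```

Since `v` is drawn from a standard normal, `(grad(f)(w) . v) * v` has expectation `grad(f)(w)`, so the estimate is unbiased while requiring only forward-mode differentiation.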
