1 code implementation • 19 Jul 2022 • Andrea Bragagnolo, Enzo Tartaglione, Marco Grangetto
Recent advances in deep learning optimization have shown that, given some a-posteriori information about a fully-trained model, it is possible to match its performance by training only a subset of its parameters.
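The abstract does not spell out the mechanism, but the general idea of training only a parameter subset can be illustrated with a gradient mask on a toy least-squares problem. Everything below (the problem, the mask, the update rule) is an illustrative assumption, not the paper's method; in particular, the "known" subset stands in for whatever a-posteriori information the fully-trained model would provide.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy regression: y depends on only 3 of 10 features.
X = rng.normal(size=(100, 10))
true_w = np.zeros(10)
true_w[[0, 3, 7]] = [1.0, -2.0, 0.5]
y = X @ true_w

w = np.zeros(10)
# Binary mask selecting the subset of trainable parameters
# (assumed known here; in practice it would come from a-posteriori analysis).
mask = np.zeros(10)
mask[[0, 3, 7]] = 1.0

for _ in range(500):
    grad = X.T @ (X @ w - y) / len(y)  # mean-squared-error gradient
    w -= 0.1 * mask * grad             # update only the masked subset

loss = np.mean((X @ w - y) ** 2)
print(f"final loss: {loss:.4f}")
```

Because the target lies in the span of the masked coordinates, gradient descent restricted to that subset still drives the loss to (near) zero, matching what full training would achieve on this toy problem.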
1 code implementation • 7 Feb 2021 • Enzo Tartaglione, Andrea Bragagnolo, Francesco Odierna, Attilio Fiandrotti, Marco Grangetto
Deep neural networks include millions of learnable parameters, making their deployment over resource-constrained devices problematic.
no code implementations • 16 Nov 2020 • Enzo Tartaglione, Andrea Bragagnolo, Attilio Fiandrotti, Marco Grangetto
LOBSTER (LOss-Based SensiTivity rEgulaRization) is a method for training neural networks with a sparse topology.
1 code implementation • 30 Apr 2020 • Enzo Tartaglione, Andrea Bragagnolo, Marco Grangetto
Recently, a race towards simplifying deep networks has begun, showing that the size of these models can effectively be reduced with minimal or no performance loss.