1 code implementation • 27 Aug 2018 • Dominik Marek Loroch, Franz-Josef Pfreundt, Norbert Wehn, Janis Keuper
Various approaches have been investigated to reduce the necessary resources, one of which is to leverage the sparsity occurring in deep neural networks due to the high levels of redundancy in the network parameters.
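One common way to exploit that parameter redundancy is magnitude-based pruning. The sketch below is an illustrative assumption, not the paper's method: it zeroes the smallest-magnitude weights of a random matrix to reach a target sparsity level (the 90% figure is chosen arbitrarily for the example).

```python
import numpy as np

# Hypothetical sketch: magnitude-based pruning exploits redundancy in the
# network parameters by zeroing weights whose magnitude falls below a
# threshold, producing a sparse weight matrix. The 90% target sparsity is
# an illustrative assumption.
rng = np.random.default_rng(0)
weights = rng.normal(size=(256, 256)).astype(np.float32)

sparsity = 0.9  # fraction of weights to zero out
threshold = np.quantile(np.abs(weights), sparsity)
mask = np.abs(weights) >= threshold
pruned = weights * mask

print(f"sparsity achieved: {1.0 - mask.mean():.2f}")
```

The resulting sparse matrix can then be stored and processed with sparse formats (e.g. CSR), which is where the resource savings come from.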
2 code implementations • 13 Oct 2017 • Dominik Marek Loroch, Norbert Wehn, Franz-Josef Pfreundt, Janis Keuper
While most related publications validate the proposed approach on a single DNN topology, it appears evident that the optimal choice of quantization method and number of coding bits is topology-dependent.
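To make the bit-width trade-off concrete, here is a minimal sketch of uniform quantization, assuming a simple min-max mapping to 2^bits levels (the function name and the random test tensor are illustrative, not from the paper). More coding bits yield a finer grid and lower reconstruction error, but the error at a given bit width depends on the distribution of the values being quantized, which varies across topologies.

```python
import numpy as np

def quantize_uniform(x, bits):
    # Hypothetical sketch: map values onto 2**bits uniformly spaced levels
    # spanning the tensor's own min-max range, then map back to floats.
    levels = 2 ** bits - 1
    lo, hi = float(x.min()), float(x.max())
    scale = (hi - lo) / levels
    q = np.round((x - lo) / scale)
    return (q * scale + lo).astype(x.dtype)

rng = np.random.default_rng(1)
w = rng.normal(size=1000).astype(np.float32)
for bits in (2, 4, 8):
    mse = np.mean((w - quantize_uniform(w, bits)) ** 2)
    print(f"{bits} coding bits: MSE {mse:.6f}")
```

Sweeping the bit width like this per layer or per topology is one way to observe the topology dependence the abstract refers to.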