no code implementations • 3 Apr 2021 • Xizi Chen, Jingyang Zhu, Jingbo Jiang, Chi-Ying Tsui
Through permutation, an optimal arrangement of the weight matrix is obtained, and the sparse matrix is then compressed into a small, dense format that makes full use of the hardware resources.
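A minimal sketch of the permute-then-compress idea: here the "optimal" permutation is replaced by a simple heuristic (sorting columns by nonzero count, which is an assumption, not the paper's search procedure), after which all-zero columns are dropped to leave a small dense block.

```python
import numpy as np

def permute_and_compress(W):
    # Heuristic stand-in for the paper's optimal permutation search:
    # order columns by descending nonzero count, all-zero columns last.
    nnz = np.count_nonzero(W, axis=0)
    perm = np.argsort(-nnz, kind="stable")
    Wp = W[:, perm]
    keep = np.count_nonzero(Wp, axis=0) > 0
    dense = Wp[:, keep]            # small, dense block for the hardware
    return dense, perm, keep

def decompress(dense, perm, keep, shape):
    # Re-insert the dropped zero columns, then undo the permutation.
    Wp = np.zeros(shape, dtype=dense.dtype)
    Wp[:, keep] = dense
    W = np.empty(shape, dtype=dense.dtype)
    W[:, perm] = Wp
    return W
```

The round trip is lossless: `decompress` recovers the original sparse matrix exactly, while the dense block is what actually occupies compute resources.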
no code implementations • 26 Feb 2021 • Jingbo Jiang, Xizi Chen, Chi-Ying Tsui
This work proposes a nested Winograd algorithm that iteratively decomposes a large-kernel convolution into small-kernel convolutions, and proves it more effective than the linear-decomposition Winograd transformation algorithm.
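A toy 1-D illustration of the decomposition principle (not the paper's exact nested algorithm, which operates on 2-D kernels with Winograd transforms): a length-4 kernel convolution is rewritten as two length-2 sub-kernel convolutions whose shifted outputs are accumulated, and each small convolution is where a Winograd transform would then be applied.

```python
import numpy as np

def conv1d_valid(x, g):
    # Direct 1-D correlation over the 'valid' region.
    n = len(x) - len(g) + 1
    return np.array([np.dot(x[i:i + len(g)], g) for i in range(n)])

def decomposed_conv(x, g):
    # Split the length-4 kernel into two length-2 sub-kernels and
    # accumulate their shifted 'valid' outputs.
    g0, g1 = g[:2], g[2:]
    y0 = conv1d_valid(x, g0)        # covers taps x[i], x[i+1]
    y1 = conv1d_valid(x, g1)        # covers taps x[i+2], x[i+3] after shift
    n = len(x) - len(g) + 1
    return y0[:n] + y1[2:2 + n]
```

Both routes compute the same output; the decomposed form only ever invokes small-kernel convolutions, for which efficient Winograd variants exist.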
no code implementations • 25 Aug 2018 • Jingbo Jiang, Diego Legrand, Robert Severn, Risto Miikkulainen
Its performance is compared with that of the Taguchi method under several simulated conditions, including an orthogonal one designed to favor the Taguchi method and two realistic conditions with dependencies between variables.
no code implementations • 3 Nov 2017 • Jingyang Zhu, Jingbo Jiang, Xizi Chen, Chi-Ying Tsui
Furthermore, an energy-efficient hardware architecture, SparseNN, is proposed to exploit both input and output sparsity.
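A toy software sketch of the sparsity-exploitation idea (the names and structure are illustrative assumptions, not SparseNN's actual datapath): zero input activations are skipped entirely, so only nonzero inputs trigger multiply-accumulates, and the ReLU at the end is what produces the output sparsity the next layer can exploit.

```python
import numpy as np

def sparse_layer(x, W, b):
    # Input sparsity: iterate only over nonzero activations, so each
    # skipped zero saves a full column of multiply-accumulates.
    acc = b.astype(float).copy()
    for j in np.flatnonzero(x):
        acc += x[j] * W[:, j]
    # Output sparsity: ReLU zeroes negative pre-activations, which a
    # downstream layer can skip in the same way.
    return np.maximum(acc, 0.0)
```

With highly sparse `x`, the loop touches only a few columns of `W`, while the result matches the dense computation `relu(W @ x + b)` exactly.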