1 code implementation • ICLR 2022 • Shaochen Zhong, Guanqun Zhang, Ningjia Huang, Shuai Xu
In this paper, we revisit the idea of kernel pruning (pruning only one or several $k \times k$ kernels out of a 3D filter), an approach heavily overlooked in the context of structured pruning because it naturally introduces sparsity among filters within the same convolutional layer—thus making the remaining network no longer dense.
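To illustrate the idea, here is a minimal sketch of kernel pruning on a convolutional weight tensor. The `(out_channels, in_channels, k, k)` layout follows the PyTorch convention, and the L1-magnitude selection criterion is an illustrative assumption, not necessarily the criterion used in the paper:

```python
import numpy as np

# Hypothetical conv layer: 4 filters, 3 input channels, 3x3 kernels.
# Shape convention: (out_channels, in_channels, k, k), as in PyTorch.
rng = np.random.default_rng(0)
weights = rng.standard_normal((4, 3, 3, 3))

def prune_kernels(w, n_prune):
    """Zero out the n_prune individual k x k kernels with smallest L1 norm.

    Each (out, in) index pair selects one k x k kernel. Because kernels
    are removed independently of their parent filter, the surviving
    filters become sparse rather than dense -- the key property noted
    in the abstract.
    """
    w = w.copy()
    norms = np.abs(w).sum(axis=(2, 3))             # L1 norm of each kernel
    flat_idx = np.argsort(norms, axis=None)[:n_prune]
    out_idx, in_idx = np.unravel_index(flat_idx, norms.shape)
    w[out_idx, in_idx] = 0.0
    return w

pruned = prune_kernels(weights, n_prune=5)
print((np.abs(pruned).sum(axis=(2, 3)) == 0).sum())  # → 5
```

Note that pruning a whole filter (a full `(in_channels, k, k)` slice) would keep the layer dense after reshaping; pruning at the finer kernel granularity is what introduces the intra-layer sparsity the paper addresses.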