
A Frobenius norm regularization method for convolutional kernels to avoid the unstable gradient problem

Convolutional neural networks are a central model class in deep learning. If the singular values of each layer's Jacobian stay bounded around $1$ during training, the exploding/vanishing gradient problem can be avoided and the generalizability of the network improves. We propose a new penalty function for a convolutional kernel that keeps the singular values of the corresponding transformation matrix bounded around $1$, and we show how to carry out gradient-type methods for the penalized objective. Because the penalty acts on the structured transformation matrix associated with a convolutional kernel, it provides a new regularization method for the weights of convolutional layers.
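
The abstract does not spell out the exact form of the penalty, so the sketch below is only illustrative. It builds the (doubly block circulant) transformation matrix $M$ of a circular 2D convolution explicitly and evaluates an orthogonality-style penalty $\|M^\top M - I\|_F^2$, which vanishes exactly when every singular value of $M$ equals $1$. The function names (`circ_conv2d`, `transform_matrix`, `frobenius_penalty`), the circular boundary condition, and this particular penalty are assumptions for illustration, not the paper's own formulation.

```python
import numpy as np

def circ_conv2d(x, k):
    # Circular 2D convolution: y[p, q] = sum_{i, j} k[i, j] * x[(p - i) % n, (q - j) % n].
    # Assumed boundary condition; the paper may use a different one.
    y = np.zeros_like(x)
    for i in range(k.shape[0]):
        for j in range(k.shape[1]):
            y += k[i, j] * np.roll(x, shift=(i, j), axis=(0, 1))
    return y

def transform_matrix(k, n):
    # Doubly block circulant matrix M with vec(conv(x)) = M @ vec(x) for n-by-n inputs,
    # built column by column by convolving the standard basis images.
    m = np.zeros((n * n, n * n))
    for col in range(n * n):
        e = np.zeros(n * n)
        e[col] = 1.0
        m[:, col] = circ_conv2d(e.reshape(n, n), k).ravel()
    return m

def frobenius_penalty(k, n):
    # ||M^T M - I||_F^2 = sum_i (sigma_i^2 - 1)^2, so the penalty is zero
    # exactly when all singular values of M equal 1. This is an illustrative
    # choice consistent with the abstract, not necessarily the paper's penalty.
    m = transform_matrix(k, n)
    d = m.T @ m - np.eye(n * n)
    return np.sum(d ** 2)

rng = np.random.default_rng(0)
kernel = rng.standard_normal((3, 3)) / 3.0
n = 8
print("penalty:", frobenius_penalty(kernel, n))
print("largest singular values:",
      np.linalg.svd(transform_matrix(kernel, n), compute_uv=False)[:5])
```

In practice one would not form $M$ explicitly for realistic input sizes; the gradient of such a penalty with respect to the kernel entries can instead be obtained by automatic differentiation, which is one way to realize the gradient-type methods the abstract mentions.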
