$R_{1}$ Regularization is a gradient penalty for training generative adversarial networks. It discourages the discriminator from deviating from the Nash equilibrium by penalizing its gradient on real data alone: when the generator distribution matches the true data distribution and the discriminator is equal to 0 on the data manifold, the penalty ensures that the discriminator cannot create a non-zero gradient orthogonal to the data manifold without suffering a loss in the GAN game.
This leads to the following regularization term:
$$ R_{1}\left(\psi\right) = \frac{\gamma}{2}\,\mathbb{E}_{p_{D}\left(x\right)}\left[\lVert\nabla D_{\psi}\left(x\right)\rVert^{2}\right] $$
Source: *Which Training Methods for GANs do actually Converge?*
| Task | Papers | Share |
|---|---|---|
| Image Generation | 114 | 16.94% |
| Disentanglement | 44 | 6.54% |
| Image Manipulation | 32 | 4.75% |
| Face Generation | 29 | 4.31% |
| Face Recognition | 23 | 3.42% |
| Image-to-Image Translation | 18 | 2.67% |
| Face Swapping | 17 | 2.53% |
| Super-Resolution | 15 | 2.23% |
| Translation | 14 | 2.08% |