Kaiming Initialization

Kaiming Initialization, or He Initialization, is an initialization method for neural networks that takes into account the non-linearity of activation functions such as ReLU.

A proper initialization method should avoid exponentially reducing or magnifying the magnitudes of input signals. He et al. derive that, for a layer $l$ with $n_l$ input connections, the condition that prevents this in a ReLU network is:

$$\frac{1}{2}n_{l}\text{Var}\left[w_{l}\right] = 1 $$

This implies an initialization scheme of:

$$ w_{l} \sim \mathcal{N}\left(0, 2/n_{l}\right)$$

That is, a zero-centered Gaussian with standard deviation $\sqrt{2/n_{l}}$ (the variance $2/n_{l}$ follows directly from the condition above; the factor of $2$ compensates for ReLU zeroing out half of its inputs). Biases are initialized at $0$.
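The scheme above can be sketched in a few lines of NumPy; `kaiming_normal` is a hypothetical helper name, and the fan-in sizes are illustrative:

```python
import numpy as np

def kaiming_normal(fan_in, fan_out, rng=None):
    """Sample a (fan_in, fan_out) weight matrix from N(0, 2 / fan_in),
    i.e. a zero-centered Gaussian with std sqrt(2 / fan_in)."""
    rng = np.random.default_rng() if rng is None else rng
    std = np.sqrt(2.0 / fan_in)
    return rng.normal(loc=0.0, scale=std, size=(fan_in, fan_out))

# Weights drawn per layer; biases start at zero, as in the paper.
W1 = kaiming_normal(512, 256)
b1 = np.zeros(256)
```

Libraries expose the same rule directly, e.g. `torch.nn.init.kaiming_normal_` in PyTorch.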

Source: Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification
