A Focal Loss function addresses class imbalance during training in tasks like object detection. Focal loss applies a modulating term to the cross entropy loss in order to focus learning on hard misclassified examples. It is a dynamically scaled cross entropy loss, where the scaling factor decays to zero as confidence in the correct class increases. Intuitively, this scaling factor can automatically down-weight the contribution of easy examples during training and rapidly focus the model on hard examples.
Formally, the Focal Loss adds a factor $(1 - p_{t})^\gamma$ to the standard cross entropy criterion. Setting $\gamma>0$ reduces the relative loss for well-classified examples ($p_{t}>0.5$), putting more focus on hard, misclassified examples. Here $\gamma \ge 0$ is a tunable focusing parameter.
$$ {\text{FL}(p_{t}) = - (1 - p_{t})^\gamma \log\left(p_{t}\right)} $$
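The formula above can be sketched directly in NumPy for the binary case (the function name and defaults here are illustrative; the paper reports that $\gamma = 2$ works well in practice):

```python
import numpy as np

def focal_loss(p, y, gamma=2.0, eps=1e-7):
    """Binary focal loss FL(p_t) = -(1 - p_t)^gamma * log(p_t).

    p: predicted probability of the positive class.
    y: ground-truth label in {0, 1}.
    gamma: focusing parameter; gamma=0 recovers cross entropy.
    """
    p = np.clip(p, eps, 1 - eps)            # avoid log(0)
    p_t = np.where(y == 1, p, 1 - p)        # probability of the true class
    return -((1 - p_t) ** gamma) * np.log(p_t)

# An easy example (p_t = 0.9) is down-weighted by (1 - 0.9)^2 = 0.01,
# while a hard example (p_t = 0.1) keeps most of its loss.
```

With $\gamma = 0$ the modulating factor is 1 and the loss reduces to standard cross entropy; increasing $\gamma$ shrinks the contribution of well-classified examples relative to hard ones.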
Source: Focal Loss for Dense Object Detection
| Task | Papers | Share |
|---|---|---|
| Object Detection | 194 | 24.59% |
| Semantic Segmentation | 45 | 5.70% |
| Image Classification | 31 | 3.93% |
| General Classification | 26 | 3.30% |
| Instance Segmentation | 21 | 2.66% |
| Classification | 19 | 2.41% |
| Autonomous Driving | 14 | 1.77% |
| Pedestrian Detection | 12 | 1.52% |
| Image Segmentation | 12 | 1.52% |