Rethinking the Value of Labels for Improving Class-Imbalanced Learning

NeurIPS 2020  ·  Yuzhe Yang, Zhi Xu

Real-world data often exhibits long-tailed distributions with heavy class imbalance, posing great challenges for deep recognition models. We identify a persistent dilemma on the value of labels in the context of imbalanced learning: on the one hand, supervision from labels typically leads to better results than its unsupervised counterpart; on the other hand, heavily imbalanced data naturally incurs "label bias" in the classifier, where the decision boundary can be drastically altered by the majority classes. In this work, we systematically investigate these two facets of labels. We demonstrate, theoretically and empirically, that class-imbalanced learning can significantly benefit in both semi-supervised and self-supervised manners. Specifically, we confirm that (1) positively, imbalanced labels are valuable: given extra unlabeled data, the original labels can be leveraged together with this data to reduce label bias in a semi-supervised manner, which greatly improves the final classifier; (2) negatively, however, we argue that imbalanced labels are not always useful: classifiers that are first pre-trained in a self-supervised manner consistently outperform their corresponding baselines. Extensive experiments on large-scale imbalanced datasets verify our theoretically grounded strategies, showing superior performance over previous state-of-the-art methods. Our intriguing findings highlight the need to rethink the usage of imbalanced labels in realistic long-tailed tasks. Code is available at https://github.com/YyzHarry/imbalanced-semi-self.

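The two findings map onto two concrete training pipelines. For the semi-supervised claim (1), a classifier trained on the imbalanced labeled set pseudo-labels extra unlabeled data, and the model is then retrained on the union. Below is a minimal PyTorch-style sketch of that loop, not the authors' released code: the linear model, random tensors, and confidence threshold are illustrative placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def pseudo_label(model, unlabeled_x, threshold=0.95):
    """Hard-label unlabeled inputs, keeping only confident predictions
    (the 0.95 threshold is an illustrative choice, not the paper's)."""
    model.eval()
    with torch.no_grad():
        probs = F.softmax(model(unlabeled_x), dim=1)
    conf, labels = probs.max(dim=1)
    keep = conf >= threshold
    return unlabeled_x[keep], labels[keep]

# Toy setup: a 10-way linear classifier over 32-d features.
model = nn.Linear(32, 10)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
labeled_x, labeled_y = torch.randn(64, 32), torch.randint(0, 10, (64,))
unlabeled_x = torch.randn(256, 32)

# Step 1: fit an intermediate classifier on the imbalanced labeled data.
model.train()
loss = F.cross_entropy(model(labeled_x), labeled_y)
opt.zero_grad()
loss.backward()
opt.step()

# Step 2: pseudo-label the unlabeled pool, then retrain on the union;
# the extra data helps counteract the label bias of the original set.
px, py = pseudo_label(model, unlabeled_x)
x, y = torch.cat([labeled_x, px]), torch.cat([labeled_y, py])
model.train()
loss = F.cross_entropy(model(x), y)
opt.zero_grad()
loss.backward()
opt.step()
```

For the self-supervised claim (2), self-supervised pre-training (SSP) first learns features without using any labels, and the resulting encoder is then fine-tuned on the imbalanced labeled set with a standard imbalanced-learning method (e.g., LDAM-DRW or cRT, as in the results below). The sketch uses rotation prediction, one of the pretext tasks considered in the paper; the tiny encoder and shapes are again placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Placeholder encoder: a real pipeline would use a backbone such as a ResNet.
encoder = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                        nn.AdaptiveAvgPool2d(1), nn.Flatten())
rot_head = nn.Linear(16, 4)   # predicts rotation class in {0, 90, 180, 270}
cls_head = nn.Linear(16, 10)  # downstream 10-way classifier

x = torch.randn(8, 3, 32, 32)  # a batch of (unlabeled) images

# Pretext task: rotate each image by k * 90 degrees and predict k.
ks = torch.randint(0, 4, (x.size(0),))
rotated = torch.stack([torch.rot90(img, int(k), dims=(1, 2))
                       for img, k in zip(x, ks)])
opt = torch.optim.SGD(list(encoder.parameters()) + list(rot_head.parameters()),
                      lr=0.1)
pretext_loss = F.cross_entropy(rot_head(encoder(rotated)), ks)
opt.zero_grad()
pretext_loss.backward()
opt.step()

# Fine-tuning: keep the pre-trained encoder and train the classifier on the
# imbalanced labeled data (plain cross-entropy here stands in for the
# imbalanced-learning loss of choice).
y = torch.randint(0, 10, (x.size(0),))
cls_loss = F.cross_entropy(cls_head(encoder(x)), y)
```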
| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Long-tail Learning | CIFAR-10-LT (ρ=10) | LDAM-DRW + SSP | Error Rate (%) | 11.47 | #35 |
| Long-tail Learning | CIFAR-10-LT (ρ=100) | LDAM-DRW + SSP | Error Rate (%) | 22.17 | #24 |
| Long-tail Learning | CIFAR-100-LT (ρ=10) | LDAM-DRW + SSP | Error Rate (%) | 41.09 | #27 |
| Long-tail Learning | CIFAR-100-LT (ρ=50) | LDAM-DRW + SSP | Error Rate (%) | 52.89 | #24 |
| Long-tail Learning | CIFAR-100-LT (ρ=100) | LDAM-DRW + SSP | Error Rate (%) | 56.57 | #55 |
| Long-tail Learning | ImageNet-LT | cRT + SSP | Top-1 Accuracy (%) | 51.3 | #49 |
| Long-tail Learning | iNaturalist 2018 | cRT + SSP | Top-1 Accuracy (%) | 68.1 | #35 |
