Peer Loss Functions: Learning from Noisy Labels without Knowing Noise Rates

ICML 2020 · Yang Liu, Hongyi Guo

Learning with noisy labels is a common challenge in supervised learning. Existing approaches often require practitioners to specify noise rates, i.e., a set of parameters controlling the severity of label noise in the problem, and these specifications are either assumed to be given or estimated in additional steps. In this work, we introduce a new family of loss functions, which we call peer loss functions, that enables learning from noisy labels without requiring a priori specification of the noise rates. Peer loss functions work within the standard empirical risk minimization (ERM) framework. We show that, under mild conditions, performing ERM with peer loss functions on the noisy dataset leads to the optimal or a near-optimal classifier, as if ERM had been performed on the clean training data to which we do not have access. We pair our results with an extensive set of experiments. Peer loss simplifies model development when training labels are potentially noisy, and can serve as a robust candidate loss function in such situations.
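Concretely, the peer loss of the paper evaluates the base loss on each training example and subtracts the base loss evaluated on a randomly paired "peer" sample, whose input and label are drawn independently from the noisy dataset. The sketch below is a minimal PyTorch illustration of this idea, assuming cross-entropy as the base loss; the function name peer_loss and the alpha weighting knob are illustrative assumptions, not the authors' reference implementation.

```python
# Minimal sketch of the peer loss idea, assuming PyTorch and cross-entropy
# as the base loss. The peer samples are formed by independently permuting
# predictions and labels within the mini-batch (an assumed approximation of
# sampling peer examples from the full noisy dataset).
import torch
import torch.nn.functional as F

def peer_loss(logits: torch.Tensor, noisy_labels: torch.Tensor, alpha: float = 1.0) -> torch.Tensor:
    """Base loss on each (x_i, y_i) minus the loss on a randomly paired
    peer sample (x_j, y_k), where j and k are drawn independently."""
    base = F.cross_entropy(logits, noisy_labels, reduction="none")

    # Two independent permutations, so the paired prediction and label
    # generally come from different examples.
    idx_pred = torch.randperm(logits.size(0), device=logits.device)
    idx_label = torch.randperm(logits.size(0), device=logits.device)
    peer = F.cross_entropy(logits[idx_pred], noisy_labels[idx_label], reduction="none")

    # alpha is a tuning knob assumed here; alpha = 1.0 recovers the plain
    # "base minus peer" form.
    return (base - alpha * peer).mean()

# Usage: drop into a standard ERM training loop in place of cross-entropy.
# logits = model(x_batch)
# loss = peer_loss(logits, y_batch_noisy)
# loss.backward()
```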

Task                        Dataset              Model      Metric Name      Metric Value  Global Rank
Learning with noisy labels  CIFAR-100N           Peer Loss  Accuracy (mean)  57.59         #15
Learning with noisy labels  CIFAR-10N-Aggregate  Peer Loss  Accuracy (mean)  90.75         #18
Learning with noisy labels  CIFAR-10N-Random1    Peer Loss  Accuracy (mean)  89.06         #18
Learning with noisy labels  CIFAR-10N-Random2    Peer Loss  Accuracy (mean)  88.76         #16
Learning with noisy labels  CIFAR-10N-Random3    Peer Loss  Accuracy (mean)  88.57         #16
Learning with noisy labels  CIFAR-10N-Worst      Peer Loss  Accuracy (mean)  82.53         #17
