Symmetric Loss and Its Applications to Learning under Label Noise
A surrogate loss function plays a crucial role when the target loss function is difficult to optimize. For example, in classification, it is common to minimize a surrogate loss such as the cross-entropy loss, because the classification error is discontinuous and non-differentiable. A surrogate loss that satisfies a symmetric condition, namely that its values on a margin z and its negation -z sum to a constant, has proven useful in learning from noisy labels and in weakly supervised learning. In this talk, I will introduce symmetric losses and their advantages. First, I will explain why using a symmetric loss is advantageous for balanced error rate (BER) minimization and for maximization of the area under the receiver operating characteristic curve (AUC) when learning from corrupted labels. Then, I will briefly describe general theoretical properties of symmetric losses, e.g., classification calibration and AUC consistency. Finally, I will discuss the advantage of symmetric losses for learning from relevant keywords and unlabeled documents.
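
For concreteness, the following is a minimal sketch (not taken from the talk or the papers) that numerically checks the symmetric condition ell(z) + ell(-z) = K. It uses the standard sigmoid loss, which satisfies the condition with K = 1, and contrasts it with the logistic loss underlying cross-entropy, which does not satisfy it.

import numpy as np

# The symmetric condition: a margin loss ell satisfies
# ell(z) + ell(-z) = K for some constant K and all margins z = y * f(x).

def sigmoid_loss(z):
    # Sigmoid loss: ell(z) = 1 / (1 + exp(z)); symmetric with K = 1.
    return 1.0 / (1.0 + np.exp(z))

def logistic_loss(z):
    # Logistic loss (cross-entropy for binary labels): not symmetric.
    return np.log1p(np.exp(-z))

z = np.linspace(-5.0, 5.0, 101)
for name, loss in [("sigmoid", sigmoid_loss), ("logistic", logistic_loss)]:
    s = loss(z) + loss(-z)  # constant if and only if the loss is symmetric
    print(f"{name:8s} ell(z) + ell(-z) ranges over [{s.min():.4f}, {s.max():.4f}]")

# The sigmoid sums are all exactly 1, while the logistic sums vary with z,
# so only the sigmoid loss satisfies the symmetric condition.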
Related publications:
On Symmetric Losses for Learning from Corrupted Labels, ICML, 2019
Nontawat Charoenphakdee, Jongyeong Lee, and Masashi Sugiyama
Link: http://proceedings.mlr.press/v97/charoenphakdee19a/charoenphakdee19a.pdf
Learning from Relevant Keywords and Unlabeled Documents, EMNLP-IJCNLP, 2019
Nontawat Charoenphakdee, Jongyeong Lee, Yiping Jin, Dittaya Wanvarie, and Masashi Sugiyama
Link: https://www.aclweb.org/anthology/D19-1411.pdf
Speaker's website: https://nolfwin.github.io/