no code implementations • 30 Apr 2024 • Jeonghoon Park, Chaeyeon Chung, Juyoung Lee, Jaegul Choo
In the image classification task, deep neural networks frequently rely on bias attributes that are spuriously correlated with a target class in the presence of dataset bias, resulting in degraded performance when applied to data without bias attributes.
1 code implementation • 29 May 2022 • Jungsoo Lee, Jeonghoon Park, Daeyoung Kim, Juyoung Lee, Edward Choi, Jaegul Choo
$f_B$ is trained to focus on bias-aligned samples (i.e., to overfit to the bias), while $f_D$ is trained mainly on bias-conflicting samples by concentrating on the samples that $f_B$ fails to learn, making $f_D$ less susceptible to the dataset bias.
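The two-network scheme described above is commonly realized by reweighting each sample by a "relative difficulty" score (as in Learning from Failure, Nam et al., 2020, on which this line of work builds): samples the biased model $f_B$ fits easily get small weight, while samples it fails on are up-weighted when training $f_D$. A minimal sketch of that weight, with illustrative function and variable names (the per-sample cross-entropy losses `ce_b`, `ce_d` are assumed to come from the two networks):

```python
import numpy as np

def relative_difficulty(ce_b, ce_d, eps=1e-8):
    """W(x) = CE(f_B(x), y) / (CE(f_B(x), y) + CE(f_D(x), y)).

    High loss under the biased model f_B (a bias-conflicting
    sample) pushes the weight toward 1, so f_D concentrates on
    exactly the samples f_B fails to learn.
    """
    ce_b = np.asarray(ce_b, dtype=float)
    ce_d = np.asarray(ce_d, dtype=float)
    return ce_b / (ce_b + ce_d + eps)

# First sample: bias-aligned, f_B fits it easily (low loss) -> small weight.
# Second sample: bias-conflicting, f_B fails (high loss) -> weight near 1.
w = relative_difficulty([0.05, 2.5], [1.0, 1.2])
```

In training, `w` multiplies the per-sample loss of $f_D$, while $f_B$ is typically trained with a loss (e.g., generalized cross-entropy) that amplifies its reliance on the bias.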
no code implementations • 18 Oct 2021 • Jeonghoon Park, Jimin Hong, Radhika Dua, Daehoon Gwak, Yixuan Li, Jaegul Choo, Edward Choi
Despite the impressive performance of deep networks in vision, language, and healthcare, unpredictable behavior on samples drawn from a distribution different from the training distribution causes severe problems in deployment.
1 code implementation • ICCV 2021 • Eungyeup Kim, Sanghyeon Lee, Jeonghoon Park, Somi Choi, Choonghyun Seo, Jaegul Choo
Deep neural networks for automatic image colorization often suffer from the color-bleeding artifact, a problematic color spreading near the boundaries between adjacent objects.
no code implementations • 26 Nov 2020 • Jeonghoon Park, Kyungmin Jo, Daehoon Gwak, Jimin Hong, Jaegul Choo, Edward Choi
We evaluate the out-of-distribution (OOD) detection performance of self-supervised learning (SSL) techniques with a new evaluation framework.
Out-of-Distribution Detection +1