no code implementations • ICCV 2023 • Seunghee Koh, Hyounguk Shon, Janghyeon Lee, Hyeong Gwon Hong, Junmo Kim
Whether the model successfully unlearns the source task is measured by piggyback learning accuracy (PL accuracy).
no code implementations • 3 May 2023 • Yooshin Cho, Hanbyel Cho, Hyeong Gwon Hong, Jaesung Ahn, Dongmin Cho, JungWoo Chang, Junmo Kim
In our method, standard spatial attention guides the networks to focus on unmasked regions and extract mask-invariant features, while minimizing the loss in conventional Face Recognition (FR) performance.
no code implementations • 29 Nov 2022 • Gyojin Han, Jaehyun Choi, Hyeong Gwon Hong, Junmo Kim
Training data generated by the proposed attack causes performance degradation on a specific task targeted by the attacker.
no code implementations • 27 Jul 2022 • Yooshin Cho, Youngsoo Kim, Hanbyel Cho, Jaesung Ahn, Hyeong Gwon Hong, Junmo Kim
Attention maps normalized with the softmax operation rely heavily on the magnitude of the key vectors, and performance degrades if this magnitude information is removed.
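The dependence of softmax-normalized attention on key magnitude can be seen directly in plain dot-product attention. The sketch below (a minimal illustration, not the paper's formulation) shows that rescaling a key vector, without changing its direction, shifts the attention weights toward it:

```python
import math

def softmax(scores):
    # Numerically stable softmax over a list of scores.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention_weights(query, keys):
    # Dot-product attention scores, normalized with softmax.
    scores = [sum(q * k for q, k in zip(query, key)) for key in keys]
    return softmax(scores)

query = [1.0, 0.0]
keys = [[1.0, 0.0], [0.0, 1.0]]
base = attention_weights(query, keys)

# Doubling the magnitude of the first key (same direction) increases
# its attention weight, even though no directional information changed.
scaled_keys = [[2.0, 0.0], [0.0, 1.0]]
scaled = attention_weights(query, scaled_keys)
```

Here `base[0] ≈ 0.73` grows to `scaled[0] ≈ 0.88`, which is the magnitude dependence the snippet refers to.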
no code implementations • CVPR 2020 • Janghyeon Lee, Hyeong Gwon Hong, Donggyu Joo, Junmo Kim
We propose a quadratic penalty method for continual learning of neural networks that contain batch normalization (BN) layers.
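A quadratic penalty for continual learning generally adds a term to the new task's loss that discourages parameters from drifting away from their values after the previous task. The sketch below shows only this generic regularizer (in the spirit of EWC-style penalties); the paper's specific handling of batch normalization layers is not reproduced, and the `importances` weighting is an assumed per-parameter estimate:

```python
def quadratic_penalty(params, old_params, importances, lam):
    # Penalize squared deviation from the previous task's parameters,
    # weighted per-parameter by an importance estimate, scaled by lam.
    return lam * sum(w * (p - q) ** 2
                     for p, q, w in zip(params, old_params, importances))

# total_loss = task_loss + quadratic_penalty(params, old_params, imps, lam)
```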
1 code implementation • 17 Feb 2020 • Janghyeon Lee, Donggyu Joo, Hyeong Gwon Hong, Junmo Kim
We propose a novel continual learning method called Residual Continual Learning (ResCL).
no code implementations • 3 Dec 2019 • Hyeong Gwon Hong, Pyunghwan Ahn, Junmo Kim
Transferable neural architecture search can be viewed as a binary optimization problem in which a single optimal path must be selected among the candidate paths on each edge within the repeated cell block of the directed acyclic graph.