Search Results for author: Yeongmin Kim

Found 7 papers, 5 papers with code

Reward-based Input Construction for Cross-document Relation Extraction

1 code implementation • 31 May 2024 • Byeonghu Na, Suhyeon Jo, Yeongmin Kim, Il-Chul Moon

Relation extraction (RE) is a fundamental task in natural language processing, aiming to identify relations between target entities in text.

Diffusion Rejection Sampling

1 code implementation • 28 May 2024 • Byeonghu Na, Yeongmin Kim, Minsang Park, DongHyeok Shin, Wanmo Kang, Il-Chul Moon

Recent advances in powerful pre-trained diffusion models have encouraged the development of methods that improve sampling performance given a well-trained model.
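
The core idea is classic rejection sampling applied to each diffusion transition. Below is a minimal sketch of that general pattern, assuming hypothetical hooks propose and accept_prob (e.g., an acceptance probability derived from an estimated density ratio); this illustrates the idea only, not the paper's exact algorithm.

    import torch

    def rejection_step(x_t, propose, accept_prob):
        # Draw candidates for x_{t-1} from the pre-trained sampler and
        # accept each with a probability meant to correct its error.
        while True:
            candidate = propose(x_t)                      # model transition
            if torch.rand(()) < accept_prob(candidate, x_t):
                return candidate                          # accepted sample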

Diffusion Bridge AutoEncoders for Unsupervised Representation Learning

no code implementations • 27 May 2024 • Yeongmin Kim, Kwanghyeon Lee, Minsang Park, Byeonghu Na, Il-Chul Moon

Recent studies have employed an auxiliary encoder to infer a corresponding representation from a sample and to adjust the dimensionality of the latent variable z (sketched below).

Disentanglement
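
For context, the auxiliary-encoder pattern mentioned above can be sketched as follows; the architecture and names here are illustrative, not DBAE's actual design.

    import torch
    from torch import nn

    class AuxEncoder(nn.Module):
        # Maps a sample x to a representation z whose dimensionality
        # is a free design choice (the bottleneck width).
        def __init__(self, x_dim=784, z_dim=16):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(x_dim, 256), nn.ReLU(),
                nn.Linear(256, z_dim),
            )

        def forward(self, x):
            return self.net(x)

    z = AuxEncoder()(torch.randn(8, 784))  # z.shape == (8, 16)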

Training Unbiased Diffusion Models From Biased Dataset

1 code implementation • 2 Mar 2024 • Yeongmin Kim, Byeonghu Na, Minsang Park, JoonHo Jang, Dongjun Kim, Wanmo Kang, Il-Chul Moon

While directly applying importance reweighting to score matching is intractable, we discover that using the time-dependent density ratio both for reweighting and score correction leads to a tractable form of the objective function that regenerates the unbiased data density.
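
Two standard identities make the density-ratio idea concrete. Writing q_t for the biased data density at diffusion time t, p_t for the unbiased target, and w_t = p_t / q_t for the time-dependent density ratio, the sketch below shows the reweighting and score-correction identities the abstract alludes to; the paper's exact tractable objective differs in its details.

    % importance reweighting: expectations under p_t from samples of q_t
    \mathbb{E}_{p_t}[f(x)] = \mathbb{E}_{q_t}\!\left[\, w_t(x)\, f(x) \,\right],
      \qquad w_t(x) = \frac{p_t(x)}{q_t(x)}

    % score correction: the unbiased score from the biased one
    \nabla_x \log p_t(x) = \nabla_x \log q_t(x) + \nabla_x \log w_t(x)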

Label-Noise Robust Diffusion Models

1 code implementation • 27 Feb 2024 • Byeonghu Na, Yeongmin Kim, HeeSun Bae, Jung Hyun Lee, Se Jung Kwon, Wanmo Kang, Il-Chul Moon

This paper proposes Transition-aware weighted Denoising Score Matching (TDSM) for training conditional diffusion models with noisy labels, which is the first study of label noise for diffusion models (see the identity sketched below).

Denoising
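
The transition-aware weighting rests on a simple decomposition: if a noisy label ỹ depends on the clean label y but not on the image, the noisy-label conditional score is a transition-weighted mixture of clean-label scores. A hedged sketch of that identity (notation ours, not necessarily the paper's):

    % mixture over clean labels (label noise independent of x)
    p_t(x \mid \tilde{y}) = \sum_{y} p(y \mid \tilde{y})\, p_t(x \mid y)

    % hence the noisy-label score is a weighted sum of clean-label scores
    \nabla_x \log p_t(x \mid \tilde{y})
      = \sum_{y} p(y \mid x, \tilde{y})\, \nabla_x \log p_t(x \mid y)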

SAAL: Sharpness-Aware Active Learning

1 code implementation • Proceedings of the 40th International Conference on Machine Learning 2023 • Yoon-Yeong Kim, Youngjae Cho, JoonHo Jang, Byeonghu Na, Yeongmin Kim, Kyungwoo Song, Wanmo Kang, Il-Chul Moon

Specifically, our proposed method, Sharpness-Aware Active Learning (SAAL), constructs its acquisition function by selecting unlabeled instances whose perturbed loss is maximal (a sketch follows below).

Active Learning • Image Classification • +3
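
A minimal PyTorch sketch of a SAM-style perturbed-loss acquisition score, assuming pseudo-labels come from the model's own predictions; rho and the single ascent step are the usual sharpness-aware approximations, and all names here are illustrative rather than the paper's reference implementation.

    import torch
    import torch.nn.functional as F

    def perturbed_loss_scores(model, unlabeled_x, rho=0.05):
        # Pseudo-label the pool with the model's own predictions.
        with torch.no_grad():
            pseudo_y = model(unlabeled_x).argmax(dim=1)

        # Gradient of the loss at the current weights.
        loss = F.cross_entropy(model(unlabeled_x), pseudo_y)
        grads = torch.autograd.grad(loss, list(model.parameters()))
        norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = float(rho / (norm + 1e-12))

        with torch.no_grad():
            # One SAM-style ascent step to the (approximate) worst case.
            for p, g in zip(model.parameters(), grads):
                p.add_(g, alpha=scale)
            scores = F.cross_entropy(model(unlabeled_x), pseudo_y,
                                     reduction="none")
            # Undo the perturbation to restore the original weights.
            for p, g in zip(model.parameters(), grads):
                p.sub_(g, alpha=scale)
        return scores

Given a classifier and a pool tensor, selecting the top-k indices of the returned scores would give the batch to send for labeling.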

AltUB: Alternating Training Method to Update Base Distribution of Normalizing Flow for Anomaly Detection

no code implementations • 26 Oct 2022 • Yeongmin Kim, Huiwon Jang, DongKeon Lee, Ho-Jin Choi

Motivated by these observations, we propose AltUB, a simple method that introduces alternating training to update the base distribution of a normalizing flow for anomaly detection (sketched after the entry below).

Ranked #2 on Anomaly Detection on BTAD (using extra training data)

Unsupervised Anomaly Detection
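
A toy sketch of the alternating-update pattern, assuming a learnable Gaussian base N(mu, sigma^2): one optimizer updates the flow, another updates the base parameters, and steps alternate between them. The "flow" below is a stand-in without the log-determinant term, so this illustrates only the training schedule, not AltUB itself.

    import torch

    mu = torch.zeros(2, requires_grad=True)          # learnable base mean
    log_sigma = torch.zeros(2, requires_grad=True)   # learnable base scale
    flow = torch.nn.Linear(2, 2)                     # stand-in for a real flow

    opt_flow = torch.optim.Adam(flow.parameters(), lr=1e-3)
    opt_base = torch.optim.Adam([mu, log_sigma], lr=1e-2)

    def loss_fn(x):
        z = flow(x)  # a real flow would also add log|det J| to the likelihood
        base = torch.distributions.Normal(mu, log_sigma.exp())
        return -base.log_prob(z).sum(dim=1).mean()

    for step in range(100):
        x = torch.randn(64, 2)                       # stand-in for normal data
        opt_flow.zero_grad()
        opt_base.zero_grad()
        loss = loss_fn(x)
        loss.backward()
        (opt_base if step % 2 else opt_flow).step()  # alternate the updates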
