no code implementations • 1 Apr 2024 • Hyeongmin Lee, Kyoungkook Kang, Jungseul Ok, Sunghyun Cho
Recent image tone adjustment (or enhancement) approaches have predominantly adopted supervised learning to model human-centric perceptual assessment.
no code implementations • 31 Dec 2023 • Hwayoon Lee, Kyoungkook Kang, Hyeongmin Lee, Seung-Hwan Baek, Sunghyun Cho
UGPNet first restores the image structure of a degraded input using a regression model, then synthesizes a perceptually realistic image with a generative model on top of the regressed output.
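The two-stage pipeline described above can be sketched as follows. This is a minimal illustrative sketch only: the function names and the simple averaging/noise stand-ins are assumptions, not UGPNet's actual models.

```python
import numpy as np

def regression_stage(degraded: np.ndarray) -> np.ndarray:
    """Stand-in for the regression model: restores coarse structure
    (here, simple 3x3 local averaging as a toy denoiser)."""
    h, w = degraded.shape
    padded = np.pad(degraded, 1, mode="edge")
    out = np.empty_like(degraded)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + 3, j:j + 3].mean()
    return out

def generative_stage(structure: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Stand-in for the generative model: synthesizes texture detail
    on top of the regressed structure (toy noise injection here)."""
    texture = 0.05 * rng.standard_normal(structure.shape)
    return np.clip(structure + texture, 0.0, 1.0)

def ugpnet_like(degraded: np.ndarray, seed: int = 0) -> np.ndarray:
    """Regression first, then generation on top of the regressed output."""
    rng = np.random.default_rng(seed)
    return generative_stage(regression_stage(degraded), rng)
```

The point of the sketch is the composition order: the generative stage never sees the raw degraded input, only the structure recovered by the regression stage.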
1 code implementation • 5 Aug 2022 • Chajin Shin, Hyeongmin Lee, Hanbin Son, Sangjin Lee, Dogyoon Lee, Sangyoun Lee
Then, we increase the receptive field so that the adaptive rescaling module can consider spatial correlation.
no code implementations • 3 Aug 2022 • MyeongAh Cho, Tae-young Chung, Hyeongmin Lee, Sangyoun Lee
The region proposal task is to generate a set of candidate regions that contain an object.
1 code implementation • CVPR 2023 • Sangjin Lee, Hyeongmin Lee, Chajin Shin, Hanbin Son, Sangyoun Lee
Lastly, we propose loss functions that provide supervision for the discontinuous motion areas and can be applied along with FTM and the D-map.
1 code implementation • CVPR 2021 • Dogyoon Lee, Jaeha Lee, Junhyeop Lee, Hyeongmin Lee, Minhyeok Lee, Sungmin Woo, Sangyoun Lee
Data augmentation is an effective regularization strategy to alleviate overfitting, an inherent drawback of deep neural networks.
Ranked #3 on 3D Point Cloud Classification on ModelNet40-C
no code implementations • 5 Oct 2020 • Hyeongmin Lee, Taeoh Kim, Hanbin Son, Sangwook Baek, Minsu Cheon, Sangyoun Lee
Extensive results for various image processing tasks indicate that FTN achieves comparable performance across multiple continuous levels while being significantly smoother and lighter than other frameworks.
no code implementations • 30 Sep 2020 • Hanbin Son, Taeoh Kim, Hyeongmin Lee, Sangyoun Lee
The postprocessing network increases the quality of decoded images using example-based learning.
1 code implementation • 13 Aug 2020 • Taeoh Kim, Hyeongmin Lee, MyeongAh Cho, Ho Seong Lee, Dong Heon Cho, Sangyoun Lee
Based on our novel temporal data augmentation algorithms, video recognition performance improves over spatial-only data augmentation algorithms when only a limited amount of training data is available, including in the 1st Visual Inductive Priors (VIPriors) data-efficient action recognition challenge.
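One natural member of this family of temporal augmentations extends spatial CutMix along the time axis: a contiguous run of frames from one clip replaces the same run in another, with the label mixed accordingly. The sketch below illustrates the idea; the function name and interface are illustrative assumptions, not the paper's actual API.

```python
import numpy as np

def temporal_cutmix(clip_a: np.ndarray, clip_b: np.ndarray,
                    rng: np.random.Generator):
    """Replace a random contiguous temporal segment of clip_a (shape
    [T, H, W]) with the corresponding segment of clip_b.

    Returns the mixed clip and lam, the label weight for clip_a's class
    (the fraction of frames still coming from clip_a)."""
    t = clip_a.shape[0]
    length = int(rng.integers(1, t))              # segment length in [1, t-1]
    start = int(rng.integers(0, t - length + 1))  # valid start index
    mixed = clip_a.copy()
    mixed[start:start + length] = clip_b[start:start + length]
    lam = 1.0 - length / t
    return mixed, lam
```

Because the cut is purely temporal, every individual frame stays spatially intact, which is what distinguishes this from spatial-only augmentation.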
no code implementations • 27 May 2020 • Sangjin Lee, Hyeongmin Lee, Taeoh Kim, Sangyoun Lee
Unlike previous studies, which have usually focused on the design of modules or the construction of networks, we propose a novel Extrapolative-Interpolative Cycle (EIC) loss that uses a pre-trained frame interpolation module to improve extrapolation performance.
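The cycle idea can be sketched in a few lines: the extrapolated frame, together with an earlier frame, is fed to a frozen pre-trained interpolator, and the reconstruction of a known middle frame supervises the extrapolator. The linear stand-in models below are illustrative assumptions, not the actual networks.

```python
import numpy as np

def interpolate(f0: np.ndarray, f2: np.ndarray) -> np.ndarray:
    """Frozen stand-in for the pre-trained interpolation module:
    predicts the middle frame as the average of its two inputs."""
    return 0.5 * (f0 + f2)

def extrapolate(f0: np.ndarray, f1: np.ndarray) -> np.ndarray:
    """Stand-in extrapolator: linear motion extrapolation to the next frame."""
    return 2.0 * f1 - f0

def eic_loss(f0: np.ndarray, f1: np.ndarray) -> float:
    """Cycle loss: extrapolate f2_hat from (f0, f1), interpolate (f0, f2_hat)
    back to a middle-frame estimate, and compare against the known f1."""
    f2_hat = extrapolate(f0, f1)
    f1_rec = interpolate(f0, f2_hat)
    return float(np.mean((f1_rec - f1) ** 2))
```

With these linear stand-ins the cycle is exact (interpolating f0 and 2·f1 − f0 recovers f1), so the loss is zero; with a learned extrapolator, any deviation from a consistent cycle produces a supervision signal without needing the ground-truth future frame at that step.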
no code implementations • 11 Mar 2020 • Hyeongmin Lee, Taeoh Kim, Hanbin Son, Sangwook Baek, Minsu Cheon, Sangyoun Lee
In this paper, we propose a novel continuous-level learning framework using a Filter Transition Network (FTN), a non-linear module that easily adapts to new levels and is regularized to prevent undesirable side effects.
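A minimal sketch of the continuous-level idea, under the assumption that intermediate levels are obtained by blending the base filters (level 0) with their FTN-transformed version (level 1) via a level parameter alpha; the function names and the linear blend are illustrative, not necessarily the paper's exact formulation.

```python
import numpy as np

def ftn_filters(base_filters: np.ndarray, transition, alpha: float) -> np.ndarray:
    """Produce filters for a continuous level alpha in [0, 1]:
    alpha = 0 keeps the base filters, alpha = 1 uses the fully
    transformed filters, intermediate alphas blend smoothly."""
    transformed = transition(base_filters)
    return (1.0 - alpha) * base_filters + alpha * transformed
```

Because only the lightweight transition module is level-dependent, one main network can serve a continuum of adjustment strengths instead of training one network per level.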
1 code implementation • CVPR 2020 • Hyeongmin Lee, Taeoh Kim, Tae-young Chung, Daehyun Pak, Yuseok Ban, Sangyoun Lee
Video frame interpolation is one of the most challenging tasks in video processing research.
Ranked #15 on Video Frame Interpolation on X4K1000FPS