no code implementations • EMNLP 2021 • Jeonghwan Kim, Giwon Hong, Kyung-Min Kim, Junmo Kang, Sung-Hyon Myaeng
Our work rigorously tests state-of-the-art models on DROP, a numerical MRC dataset, to see if they can handle passages that contain out-of-range numbers.
no code implementations • EACL (HCINLP) 2021 • Jeonghwan Kim, Junmo Kang, Suwon Shin, Sung-Hyon Myaeng
Customer reviews are useful in providing an indirect, secondhand experience of a product.
no code implementations • Findings (NAACL) 2022 • Jeonghwan Kim, Junmo Kang, Kyung-Min Kim, Giwon Hong, Sung-Hyon Myaeng
Numerical reasoning over text is a challenging subtask in question answering (QA) that requires understanding both text and numbers.
no code implementations • 26 Feb 2024 • Jeonghwan Kim, Heng Ji
Recent advances in instruction-tuned Large Vision-Language Models (LVLMs) have imbued the models with the ability to generate high-level, image-grounded explanations with ease.
no code implementations • 18 Jan 2024 • Jeonghwan Kim, Jisoo Kim, Jeonghyeon Na, Hanbyul Joo
To address this challenge, we introduce the ParaHome system, designed to capture and parameterize dynamic 3D movements of humans and objects within a common home environment.
1 code implementation • 2 May 2023 • Giwon Hong, Jeonghwan Kim, Junmo Kang, Sung-Hyon Myaeng, Joyce Jiyoung Whang
Most existing retrieval-augmented language models (LMs) assume a naive dichotomy within a retrieved document set: query relevance and irrelevance.
1 code implementation • CVPR 2023 • Jeonghwan Kim, Mi-Gyeong Gwon, Hyunwoo Park, Hyukmin Kwon, Gi-Mun Um, Wonjun Kim
Even though those approaches have shown remarkable progress in 3D human mesh reconstruction, it is still difficult to directly infer the relationship between the features encoded from the 2D input image and the 3D coordinates of each vertex.
Ranked #19 on Monocular 3D Human Pose Estimation on Human3.6M (using extra training data)
no code implementations • 13 Oct 2021 • Junmo Kang, Suwon Shin, Jeonghwan Kim, Jaeyoung Jo, Sung-Hyon Myaeng
Moreover, by thoroughly investigating the necessity of the generator module of ELECTRA, we evaluate an initial approach to the problem that shows promising compute efficiency but has not succeeded in maintaining the accuracy of the model.
no code implementations • ICLR 2021 • Dongsu Zhang, Changwoon Choi, Jeonghwan Kim, Young Min Kim
We formulate the shape generation process as sampling from the transition kernel of a Markov chain, where the sampling chain eventually evolves to the full shape of the learned distribution.
no code implementations • EMNLP 2021 • Junmo Kang, Jeonghwan Kim, Suwon Shin, Sung-Hyon Myaeng
Tag recommendation relies on either a ranking function for top-$k$ tags or an autoregressive generation method.