no code implementations • DeeLIO (ACL) 2022 • Sukmin Cho, Soyeong Jeong, Wonsuk Yang, Jong Park
The dense retriever is found to achieve a clear performance improvement when given queries that require implicit information.
no code implementations • LREC 2022 • Jung-Ho Kim, Eui Jun Hwang, Sukmin Cho, Du Hui Lee, Jong Park
To address these problems, we introduce an avatar-based SLP system composed of a sign language translation (SLT) model and an avatar animation generation module.
no code implementations • 22 Apr 2024 • Sukmin Cho, Soyeong Jeong, Jeongyeon Seo, Taeho Hwang, Jong C. Park
The robustness of recent Large Language Models (LLMs) has become increasingly crucial as their applicability expands across various domains and real-world applications.
1 code implementation • 21 Mar 2024 • Soyeong Jeong, Jinheon Baek, Sukmin Cho, Sung Ju Hwang, Jong C. Park
Retrieval-Augmented Large Language Models (LLMs), which incorporate the non-parametric knowledge from external knowledge bases into LLMs, have emerged as a promising approach to enhancing response accuracy in several tasks, such as Question-Answering (QA).
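The retrieval-augmented setup described above can be sketched minimally: fetch the best-matching passage from an external knowledge store and prepend it to the prompt handed to the LLM. This is an illustrative toy (the names `knowledge_base`, `retrieve`, and `build_prompt`, and the lexical-overlap scorer, are all assumptions for the sketch, not the paper's method):

```python
# Minimal sketch of retrieval augmentation: pick the best-matching passage
# from a toy knowledge base and prepend it to the QA prompt.

def score(query, passage):
    """Crude lexical overlap between query and passage (stand-in for a real retriever)."""
    q_terms = set(query.lower().split())
    p_terms = set(passage.lower().split())
    return len(q_terms & p_terms)

def retrieve(query, knowledge_base, top_k=1):
    """Return the top-k passages ranked by the overlap score."""
    ranked = sorted(knowledge_base, key=lambda p: score(query, p), reverse=True)
    return ranked[:top_k]

def build_prompt(query, passages):
    """Prepend the retrieved non-parametric knowledge to the question."""
    context = "\n".join(passages)
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

knowledge_base = [
    "The capital of France is Paris.",
    "Mount Everest is the highest mountain on Earth.",
]
query = "What is the capital of France?"
prompt = build_prompt(query, retrieve(query, knowledge_base))
print(prompt)
```

In a real system the overlap scorer would be replaced by a learned retriever and the prompt fed to an LLM; the point here is only the flow of external knowledge into the model's input.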
no code implementations • 26 Oct 2023 • Sukmin Cho, Jeongyeon Seo, Soyeong Jeong, Jong C. Park
Large language models (LLMs) enable zero-shot approaches in open-domain question answering (ODQA), yet the reader has seen limited advancements compared to the retriever.
1 code implementation • 20 Oct 2023 • Soyeong Jeong, Jinheon Baek, Sukmin Cho, Sung Ju Hwang, Jong C. Park
Moreover, further fine-tuning LMs on labeled datasets is often infeasible when such datasets are unavailable, and it is also questionable whether smaller LMs with limited knowledge can be adapted using only unlabeled test data.
1 code implementation • 23 May 2023 • Sukmin Cho, Soyeong Jeong, Jeongyeon Seo, Jong C. Park
Along with highlighting the impact of prompt optimization on the zero-shot re-ranker, we propose a novel discrete prompt optimization method, Constrained Prompt generation (Co-Prompt), with a metric that estimates the optimal prompt for re-ranking.
1 code implementation • ACL 2022 • Soyeong Jeong, Jinheon Baek, Sukmin Cho, Sung Ju Hwang, Jong C. Park
Dense retrieval models, which aim at retrieving the most relevant document for an input query on a dense representation space, have gained considerable attention for their remarkable success.
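The dense retrieval idea above can be illustrated with a small sketch: encode the query and each document as vectors and rank documents by similarity in that representation space. The encoder below is a deliberately crude bag-of-words stand-in over a hypothetical fixed vocabulary (real dense retrievers use learned neural bi-encoders); `VOCAB`, `encode`, and `dense_retrieve` are illustrative names, not the paper's implementation:

```python
import math

# Toy stand-in for a learned encoder: map text to a dense vector over a
# fixed vocabulary. Real dense retrievers use neural bi-encoders.
VOCAB = ["capital", "france", "paris", "mountain", "highest", "earth"]

def encode(text):
    """Embed text as word counts over VOCAB."""
    words = text.lower().split()
    return [float(words.count(w)) for w in VOCAB]

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def dense_retrieve(query, docs):
    """Return the document most similar to the query in the vector space."""
    q_vec = encode(query)
    return max(docs, key=lambda d: cosine(q_vec, encode(d)))

docs = [
    "paris is the capital of france",
    "everest is the highest mountain on earth",
]
print(dense_retrieve("what is the capital of france", docs))
# → "paris is the capital of france"
```

The design point is that both queries and documents live in the same vector space, so retrieval reduces to a nearest-neighbor search.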