no code implementations • 29 May 2024 • Gyuseok Lee, SeongKu Kang, Wonbin Kweon, Hwanjo Yu
We expect this research direction to contribute to narrowing the gap between existing KD studies and practical applications, thereby enhancing the applicability of KD in real-world systems.
1 code implementation • 14 Mar 2024 • Joonwon Jang, Sanghwan Jang, Wonbin Kweon, Minjin Jeon, Hwanjo Yu
However, LLMs often rely on the pre-trained semantic priors of the demonstrations rather than on the input-label relationships when making ICL predictions.
1 code implementation • 26 Feb 2024 • Wonbin Kweon, SeongKu Kang, Junyoung Hwang, Hwanjo Yu
Recent recommender systems have started to use rating elicitation, which asks new users to rate a small seed itemset so that their preferences can be inferred, to improve the quality of initial recommendations.
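A minimal sketch of the general idea, assuming a simple user-user similarity scheme; the seed-item selection and inference model in the paper differ, and all names below are illustrative:

```python
import numpy as np

def elicit_and_infer(rating_matrix, seed_items, new_user_ratings, k=10):
    """Toy cold-start inference: score all items for a new user from
    their ratings on a small seed itemset, via user-user similarity."""
    # Similarity between the new user and every existing user,
    # computed only on the seed items the new user was asked to rate.
    seed_profiles = rating_matrix[:, seed_items]              # (n_users, n_seed)
    sims = seed_profiles @ new_user_ratings                   # (n_users,)
    sims = sims / (np.linalg.norm(seed_profiles, axis=1) *
                   np.linalg.norm(new_user_ratings) + 1e-8)
    # Predicted scores: similarity-weighted average of existing users' ratings.
    scores = sims @ rating_matrix / (np.abs(sims).sum() + 1e-8)
    return np.argsort(-scores)[:k]                            # initial top-k list

# Usage: 50 users x 200 items, a 5-item seed set, one new user's seed ratings.
R = np.random.rand(50, 200)
seed = np.array([3, 17, 42, 88, 120])
new_ratings = np.array([5.0, 1.0, 4.0, 2.0, 5.0])
print(elicit_and_infer(R, seed, new_ratings))
```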
no code implementations • 26 Feb 2024 • Wonbin Kweon
Despite its importance, a measure of confidence in recommendation results has been surprisingly overlooked in the literature compared to recommendation accuracy.
1 code implementation • 26 Feb 2024 • Wonbin Kweon, SeongKu Kang, Sanghwan Jang, Hwanjo Yu
To address this issue, we introduce Top-Personalized-K Recommendation, a new recommendation task aimed at generating a personalized-sized ranking list to maximize individual user satisfaction.
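A minimal sketch of the task, assuming a toy per-user utility (relevance gained minus a browsing cost per item shown); the utility measure and optimization in the paper differ:

```python
import numpy as np

def personalized_k(scores, candidate_ks=(1, 3, 5, 10, 20)):
    """Pick a per-user list size K that maximizes a toy utility:
    expected relevance of shown items minus a small cost per shown item."""
    ranked = np.argsort(-scores)                  # items sorted by predicted score
    probs = 1 / (1 + np.exp(-scores[ranked]))     # crude relevance estimates
    best_k, best_util = candidate_ks[0], -np.inf
    for k in candidate_ks:
        util = probs[:k].sum() - 0.3 * k          # benefit minus browsing cost
        if util > best_util:
            best_k, best_util = k, util
    return ranked[:best_k]                        # personalized-sized ranking list

# Usage: one user's predicted scores over 30 candidate items.
print(personalized_k(np.random.randn(30)))
```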
1 code implementation • 26 Feb 2024 • Wonbin Kweon, Hwanjo Yu
On this basis, we propose a Doubly Calibrated Estimator that involves the calibration of both the imputation and propensity models.
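For context, a minimal sketch of the standard doubly robust risk estimate that such an estimator builds on; calibrating the imputation and propensity models (the focus of the paper) is assumed to happen before these inputs are passed in:

```python
import numpy as np

def doubly_robust_risk(observed, true_err, imputed_err, propensity):
    """Standard doubly robust risk estimate over a user-item matrix:
    imputed error everywhere, corrected by the observed error divided by
    the propensity on observed entries. The imputation and propensity
    estimates would be calibrated (e.g., on a held-out set) beforehand."""
    correction = observed * (true_err - imputed_err) / np.clip(propensity, 1e-3, 1.0)
    return np.mean(imputed_err + correction)

# Usage with toy 100x50 interaction data (all values are synthetic).
rng = np.random.default_rng(0)
O = rng.binomial(1, 0.1, size=(100, 50))            # which entries are observed
e = rng.random((100, 50))                           # prediction error where observed
e_hat = rng.random((100, 50))                       # imputation model's error estimate
p_hat = np.clip(rng.random((100, 50)), 0.05, 1.0)   # propensity estimates
print(doubly_robust_risk(O, e, e_hat, p_hat))
```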
1 code implementation • 2 Mar 2023 • SeongKu Kang, Wonbin Kweon, Dongha Lee, Jianxun Lian, Xing Xie, Hwanjo Yu
Our work aims to transfer the ensemble knowledge of heterogeneous teachers to a lightweight student model using knowledge distillation (KD), to reduce the huge inference costs while retaining high accuracy.
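A hedged sketch of distillation from multiple teachers, assuming uniform averaging of the teachers' softened distributions; how the heterogeneous teachers are actually consolidated is the paper's focus and differs from this placeholder:

```python
import numpy as np

def softmax(x, t=1.0):
    z = np.exp((x - x.max(axis=-1, keepdims=True)) / t)
    return z / z.sum(axis=-1, keepdims=True)

def ensemble_kd_loss(student_logits, teacher_logits_list, temperature=2.0):
    """Toy distillation target: average the teachers' softened item
    distributions, then measure KL(teacher_ensemble || student)."""
    teacher_probs = np.mean(
        [softmax(t, temperature) for t in teacher_logits_list], axis=0)
    student_probs = softmax(student_logits, temperature)
    return np.sum(teacher_probs * np.log(teacher_probs / (student_probs + 1e-12) + 1e-12))

# Usage: three heterogeneous teachers scoring 1000 items for one user.
teachers = [np.random.randn(1000) * s for s in (1.0, 0.5, 2.0)]
student = np.random.randn(1000)
print(ensemble_kd_loss(student, teachers))
```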
1 code implementation • 26 Feb 2022 • SeongKu Kang, Dongha Lee, Wonbin Kweon, Junyoung Hwang, Hwanjo Yu
ConCF constructs a multi-branch variant of a given target model by adding auxiliary heads, each of which is trained with heterogeneous objectives.
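A hedged sketch of a multi-branch model with auxiliary heads on a shared encoder; the head design and heterogeneous training objectives in ConCF differ, and all names here are illustrative:

```python
import torch
import torch.nn as nn

class MultiBranchCF(nn.Module):
    """A shared user/item encoder with auxiliary heads, each intended to be
    trained under a different objective (e.g., pointwise vs. pairwise loss)."""
    def __init__(self, n_users, n_items, dim=64, n_heads=3):
        super().__init__()
        self.user_emb = nn.Embedding(n_users, dim)
        self.item_emb = nn.Embedding(n_items, dim)
        # Each auxiliary head re-projects the shared interaction representation.
        self.heads = nn.ModuleList([nn.Linear(dim, 1) for _ in range(n_heads)])

    def forward(self, users, items):
        h = self.user_emb(users) * self.item_emb(items)       # shared representation
        return [head(h).squeeze(-1) for head in self.heads]   # one score per head

# Usage: score a small batch of user-item pairs under every head.
model = MultiBranchCF(n_users=100, n_items=500)
scores = model(torch.tensor([0, 1, 2]), torch.tensor([10, 20, 30]))
print([s.shape for s in scores])
```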
1 code implementation • 9 Dec 2021 • Wonbin Kweon, SeongKu Kang, Hwanjo Yu
Extensive evaluations with various personalized ranking models on real-world datasets show that both the proposed calibration methods and the unbiased empirical risk minimization significantly improve the calibration performance.
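For context, a minimal sketch of Platt scaling, a standard calibration baseline that maps raw ranking scores to probabilities; the calibration functions and unbiased empirical risk proposed in the paper differ:

```python
import numpy as np

def fit_platt(scores, labels, lr=0.1, epochs=500):
    """Platt scaling: fit P(positive) = sigmoid(a*s + b) to held-out
    feedback by gradient descent on the log loss."""
    a, b = 1.0, 0.0
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-(a * scores + b)))
        grad = p - labels                      # d(logloss)/d(logit)
        a -= lr * np.mean(grad * scores)
        b -= lr * np.mean(grad)
    return a, b

# Usage: calibrate toy ranking scores against binary feedback.
s = np.random.randn(1000) * 2
y = (np.random.rand(1000) < 1 / (1 + np.exp(-s))).astype(float)
print(fit_platt(s, y))
```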
no code implementations • 16 Jun 2021 • SeongKu Kang, Junyoung Hwang, Wonbin Kweon, Hwanjo Yu
To address this issue, we propose a novel method named Hierarchical Topology Distillation (HTD) which distills the topology hierarchically to cope with the large capacity gap.
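A hedged sketch of two-level topology matching between teacher and student embedding spaces, assuming an arbitrary grouping of entities; HTD's hierarchy construction and relational losses differ:

```python
import numpy as np

def topology_loss(teacher_emb, student_emb, groups):
    """Toy two-level topology matching: align pairwise cosine similarities
    within each group, plus similarities between group centroids. This only
    illustrates 'distill relations among embeddings rather than embeddings'."""
    def cos_sim(X):
        Xn = X / (np.linalg.norm(X, axis=1, keepdims=True) + 1e-8)
        return Xn @ Xn.T

    loss = 0.0
    for g in np.unique(groups):
        idx = np.where(groups == g)[0]
        loss += np.mean((cos_sim(teacher_emb[idx]) - cos_sim(student_emb[idx])) ** 2)
    # Group-level topology: similarities between group centroids.
    t_cent = np.stack([teacher_emb[groups == g].mean(0) for g in np.unique(groups)])
    s_cent = np.stack([student_emb[groups == g].mean(0) for g in np.unique(groups)])
    loss += np.mean((cos_sim(t_cent) - cos_sim(s_cent)) ** 2)
    return loss

# Usage: 100 entities, a large teacher (dim 128) vs. a small student (dim 32).
t, s = np.random.randn(100, 128), np.random.randn(100, 32)
g = np.random.randint(0, 5, size=100)
print(topology_loss(t, s, g))
```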
1 code implementation • 5 Jun 2021 • Wonbin Kweon, SeongKu Kang, Hwanjo Yu
Recommender systems (RS) have started to employ knowledge distillation, a model compression technique that trains a compact model (student) with knowledge transferred from a cumbersome model (teacher).
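For context, a minimal sketch of soft-target distillation in a recommender setting, blending the usual loss on observed feedback with a term pulling the student toward the teacher; the specific distillation targets used across the RS literature (and in this line of work) vary:

```python
import numpy as np

def kd_loss(student_scores, teacher_scores, labels, alpha=0.5, temperature=2.0):
    """Blend binary cross-entropy on observed feedback with a soft-target
    term that pulls student probabilities toward the softened teacher's."""
    sig = lambda x: 1 / (1 + np.exp(-x))
    p_s, p_t = sig(student_scores), sig(teacher_scores / temperature)
    ce = -np.mean(labels * np.log(p_s + 1e-12) + (1 - labels) * np.log(1 - p_s + 1e-12))
    kd = -np.mean(p_t * np.log(p_s + 1e-12) + (1 - p_t) * np.log(1 - p_s + 1e-12))
    return (1 - alpha) * ce + alpha * kd

# Usage: toy scores for 2000 sampled user-item pairs.
s, t = np.random.randn(2000), np.random.randn(2000) * 3
y = (np.random.rand(2000) < 0.1).astype(float)
print(kd_loss(s, t, y))
```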
2 code implementations • 8 Dec 2020 • SeongKu Kang, Junyoung Hwang, Wonbin Kweon, Hwanjo Yu
Recent recommender systems have started to employ knowledge distillation, a model compression technique that distills knowledge from a cumbersome model (teacher) into a compact model (student), to reduce inference latency while maintaining performance.