no code implementations • 3 Apr 2024 • Simiao Li, Yun Zhang, Wei Li, Hanting Chen, Wenjia Wang, BingYi Jing, Shaohui Lin, Jie Hu
Knowledge distillation (KD) is a promising yet challenging model compression technique that transfers rich learning representations from a well-performing but cumbersome teacher model to a compact student model.
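For context, a minimal sketch of the classic soft-target distillation objective (temperature-scaled KL plus hard-label cross-entropy); this is generic KD, not the specific method proposed in the paper above, and the temperature and loss weight are illustrative assumptions.

```python
# Generic soft-target knowledge distillation sketch (not this paper's method).
# The temperature T and weight alpha are illustrative assumptions.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Hard-label cross-entropy combined with a softened KL term."""
    soft_targets = F.softmax(teacher_logits / T, dim=-1)      # teacher distribution
    log_student = F.log_softmax(student_logits / T, dim=-1)   # student log-probs
    kd = F.kl_div(log_student, soft_targets, reduction="batchmean") * (T * T)
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1.0 - alpha) * ce

# Toy usage with random logits for a 10-class problem.
student_logits = torch.randn(8, 10, requires_grad=True)
teacher_logits = torch.randn(8, 10)
labels = torch.randint(0, 10, (8,))
loss = distillation_loss(student_logits, teacher_logits, labels)
loss.backward()
print(loss.item())
```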
no code implementations • 28 Mar 2024 • Kexin Shi, Jing Zhang, Linjiajie Fang, Wenjia Wang, BingYi Jing
In implicit collaborative filtering, hard negative mining techniques are developed to accelerate and enhance the learning of recommendation models.
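As a rough illustration of the general technique, the sketch below mines hard negatives by scoring a random candidate pool with the current model and keeping the highest-scoring item, then trains with a BPR loss; the embedding sizes, pool size, and sampler are assumptions, not necessarily the paper's own design.

```python
# Generic hard-negative-mining sketch for implicit CF with a BPR objective.
# Dimensions and the candidate-pool size are toy assumptions.
import torch
import torch.nn.functional as F

n_users, n_items, dim, pool = 100, 500, 32, 20
user_emb = torch.nn.Embedding(n_users, dim)
item_emb = torch.nn.Embedding(n_items, dim)
opt = torch.optim.Adam(list(user_emb.parameters()) + list(item_emb.parameters()), lr=1e-2)

def bpr_step(users, pos_items):
    # Draw a pool of random candidate negatives and keep the hardest one
    # (highest model score) for each (user, positive item) pair.
    cand = torch.randint(0, n_items, (users.size(0), pool))
    with torch.no_grad():
        u = user_emb(users).unsqueeze(1)            # [B, 1, d]
        scores = (u * item_emb(cand)).sum(-1)       # [B, pool]
        hard_neg = cand.gather(1, scores.argmax(1, keepdim=True)).squeeze(1)
    # BPR: push the positive item above the mined hard negative.
    u = user_emb(users)
    pos_s = (u * item_emb(pos_items)).sum(-1)
    neg_s = (u * item_emb(hard_neg)).sum(-1)
    loss = -F.logsigmoid(pos_s - neg_s).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

users = torch.randint(0, n_users, (64,))
pos_items = torch.randint(0, n_items, (64,))
print(bpr_step(users, pos_items))
```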
no code implementations • 8 Feb 2024 • Wenyu Jiang, Zhenlong Liu, Zejian Xie, Songxin Zhang, BingYi Jing, Hongxin Wei
In this paper, we propose to treat the learning complexity (LC) as the scoring function for classification and regression tasks.
no code implementations • 8 Dec 2023 • Junyu Lu, Dixiang Zhang, Songxin Zhang, Zejian Xie, Zhuoyang Song, Cong Lin, Jiaxing Zhang, BingYi Jing, Pingjian Zhang
During the instruction fine-tuning stage, we introduce semantic-aware visual feature extraction, a crucial method that enables the model to extract informative features from concrete visual objects.
Ranked #1 on Image Captioning on nocaps entire
1 code implementation • 25 Sep 2023 • Yun Zhang, Wei Li, Simiao Li, Hanting Chen, Zhijun Tu, Wenjia Wang, BingYi Jing, Shaohui Lin, Jie Hu
Knowledge distillation (KD) compresses deep neural networks by transferring task-related knowledge from cumbersome pre-trained teacher models to compact student models.
Ranked #22 on Image Super-Resolution on Urban100 - 4x upscaling
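To make the super-resolution distillation setting concrete, here is a toy sketch in which the student matches both the ground-truth HR image and a frozen teacher's prediction with L1 losses; the tiny conv upscalers and the loss weight are assumptions, not the architecture or objective of the paper above.

```python
# Toy output-level distillation sketch for x4 image super-resolution
# (illustrative only; models and loss weight are assumptions).
import torch
import torch.nn as nn
import torch.nn.functional as F

def make_sr_net(channels):
    # Placeholder x4 upscaler: conv features followed by pixel shuffle.
    return nn.Sequential(
        nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(),
        nn.Conv2d(channels, 3 * 16, 3, padding=1), nn.PixelShuffle(4),
    )

teacher, student = make_sr_net(64).eval(), make_sr_net(16)

lr = torch.rand(2, 3, 24, 24)   # low-resolution input
hr = torch.rand(2, 3, 96, 96)   # ground-truth high-resolution target

with torch.no_grad():
    teacher_sr = teacher(lr)    # frozen teacher prediction
student_sr = student(lr)

beta = 0.5  # weight of the distillation term (assumed)
loss = F.l1_loss(student_sr, hr) + beta * F.l1_loss(student_sr, teacher_sr)
loss.backward()
print(loss.item())
```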
no code implementations • 25 Nov 2022 • Kexin Shi, Yun Zhang, BingYi Jing, Wenjia Wang
In the implicit collaborative filtering (CF) task of recommender systems, recent works mainly focus on model structure design with promising techniques such as graph neural networks (GNNs).
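For reference, a minimal LightGCN-style propagation sketch illustrating the GNN-based CF models referred to above (not this paper's own method); the interaction matrix, embedding size, and layer count are toy assumptions.

```python
# LightGCN-style embedding propagation sketch for implicit CF (toy data).
import torch

n_users, n_items, dim, n_layers = 4, 6, 8, 2
R = (torch.rand(n_users, n_items) > 0.6).float()   # binary interaction matrix

# Symmetrically normalized bipartite adjacency A_hat = D^{-1/2} A D^{-1/2}.
A = torch.zeros(n_users + n_items, n_users + n_items)
A[:n_users, n_users:] = R
A[n_users:, :n_users] = R.t()
deg = A.sum(1).clamp(min=1.0)
A_hat = A / torch.sqrt(deg).unsqueeze(1) / torch.sqrt(deg).unsqueeze(0)

# Layer-wise propagation of user/item embeddings, averaged across layers.
emb0 = torch.randn(n_users + n_items, dim)
embs, e = [emb0], emb0
for _ in range(n_layers):
    e = A_hat @ e
    embs.append(e)
final = torch.stack(embs).mean(0)

user_emb, item_emb = final[:n_users], final[n_users:]
scores = user_emb @ item_emb.t()   # predicted user-item preference scores
print(scores.shape)
```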