no code implementations • ICML 2020 • Ning Xu, Yun-Peng Liu, Jun Shu, Xin Geng
Label distribution covers a certain number of labels, representing the degree to which each label describes the instance.
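As a rough illustration of the label-distribution idea (not code from the paper, and the label names and values are made up), a label distribution can be represented as a nonnegative vector of description degrees normalized to sum to one:

```python
import numpy as np

# Hypothetical instance described by three labels ("happy", "neutral",
# "sad") with nonnegative description degrees.
degrees = np.array([3.0, 1.5, 0.5])

# A label distribution normalizes the degrees so they sum to 1; each
# entry then gives the degree to which that label describes the instance.
label_distribution = degrees / degrees.sum()

assert np.isclose(label_distribution.sum(), 1.0)
print(label_distribution)  # → [0.6 0.3 0.1]
```

Unlike a one-hot or multi-hot logical label, every label receives a graded, nonzero share of the description.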
no code implementations • 25 Apr 2024 • Shi-Yu Xia, Wenxuan Zhu, Xu Yang, Xin Geng
When initializing variable-sized models to adapt to different resource constraints, SWS achieves better results while reducing the parameters stored to initialize these models by around 20x and the pre-training costs by around 10x, in contrast to the pre-training and fine-tuning approach.
no code implementations • 25 Mar 2024 • Yunlong Tang, Yuxuan Wan, Lei Qi, Xin Geng
The Style Generation module refreshes all styles at every training epoch, while the Style Removal module eliminates variations in the encoder's output features caused by input styles.
1 code implementation • 29 Feb 2024 • Chenghao Li, Lei Qi, Xin Geng
In this paper, considering these two critical factors, we propose a SAM-guided Two-stream Lightweight Model for unsupervised anomaly detection (STLM) that not only aligns with the two practical application requirements but also harnesses the robust generalization capabilities of SAM.
1 code implementation • 23 Jan 2024 • Tiankai Hang, Shuyang Gu, Dong Chen, Xin Geng, Baining Guo
This paper presents a novel generative model, Collaborative Competitive Agents (CCA), which leverages the capabilities of multiple Large Language Models (LLMs) based agents to execute complex tasks.
no code implementations • 16 Jan 2024 • Fu Feng, Jing Wang, Xin Geng
GTL trains a population of networks, selects superior learngenes by tournaments, performs learngene mutations, and passes the learngenes to the next generation.
no code implementations • 15 Dec 2023 • Shunxin Guo, Hongsong Wang, Xin Geng
Existing heterogeneous federated learning mainly focuses on the skewed label distribution across clients.
no code implementations • 11 Dec 2023 • Kouzhiqiang Yucheng Xie, Jing Wang, Yuheng Jia, Boyu Shi, Xin Geng
This paper introduces RankMatch, an innovative approach for Semi-Supervised Label Distribution Learning (SSLDL).
no code implementations • 10 Dec 2023 • Boyu Shi, Shiyu Xia, Xu Yang, Haokun Chen, Zhiqiang Kou, Xin Geng
To overcome these challenges, motivated by the recently proposed Learngene framework, we propose a novel method called Learngene Pool.
1 code implementation • 9 Dec 2023 • Shiyu Xia, Miaosen Zhang, Xu Yang, Ruiming Chen, Haokun Chen, Xin Geng
In situations where we need to produce models of varying depths to adapt to different resource constraints, TLEG achieves comparable results while reducing the parameters stored to initialize these models by around 19x and the pre-training costs by around 5x, in contrast to the pre-training and fine-tuning approach.
no code implementations • 1 Dec 2023 • Haokun Chen, Xu Yang, Yuhang Huang, Zihan Wu, Jing Wang, Xin Geng
Specifically, using our approach on ImageNet, we increase accuracy from 74.70% in a 4-shot setting to 76.21% with just 2 shots.
1 code implementation • 28 Nov 2023 • Xingyu Zhao, Yuexuan An, Lei Qi, Xin Geng
Most existing MLC methods are based on the assumption that the correlation of two labels in each label pair is symmetric, which is violated in many real-world scenarios.
no code implementations • 22 Nov 2023 • Lei Qi, Peng Dong, Tan Xiong, Hui Xue, Xin Geng
In this paper, we aim to solve the single-domain generalizable object detection task in urban scenarios, meaning that a model trained on images from one weather condition should be able to perform well on images from any other weather conditions.
no code implementations • 25 Sep 2023 • Biao Liu, Jie Wang, Ning Xu, Xin Geng
Single-positive multi-label learning (SPMLL) is a typical weakly supervised multi-label learning problem, where each training example is annotated with only one positive label.
no code implementations • 31 Aug 2023 • Yu Shi, Dong-Dong Wu, Xin Geng, Min-Ling Zhang
This is known as Unreliable Partial Label Learning (UPLL), which introduces additional complexity due to the inherent unreliability and ambiguity of partial labels, often resulting in sub-optimal performance with existing methods.
no code implementations • 2 Aug 2023 • Dongjia Zhao, Lei Qi, Xiao Shi, Yinghuan Shi, Xin Geng
Horizontally, it applies image-level and feature-level perturbations to enhance the diversity of the training data, mitigating the issue of limited diversity in single-source domains.
no code implementations • 1 Aug 2023 • Biao Liu, Congyu Qiao, Ning Xu, Xin Geng, Ziran Zhu, Jun Yang
In order to fully exploit the inherent spatial label-correlation between neighboring grids, we propose a novel approach, i.e., VAriational Label-Correlation Enhancement for Congestion Prediction, which considers the local label-correlation in the congestion map, associating the estimated congestion value of each grid with a local label-correlation weight influenced by its surrounding grids.
no code implementations • 25 Jul 2023 • Lei Qi, Hongpeng Yang, Yinghuan Shi, Xin Geng
Our method includes two paths: the main path and the auxiliary (augmented) path.
no code implementations • 21 Jun 2023 • Lei Qi, Ziang Liu, Yinghuan Shi, Xin Geng
Additionally, we introduce the Dropout-based Perturbation (DP) module to enhance the generalization capability of the metric network by enriching the sample-pair diversity.
1 code implementation • 17 Jun 2023 • Fu Feng, Jing Wang, Xu Yang, Xin Geng
Inspired by biological intelligence, artificial intelligence (AI) research has been devoted to building machine intelligence.
no code implementations • 30 May 2023 • Jin Yuan, Yang Zhang, Yangzhou Du, Zhongchao Shi, Xin Geng, Jianping Fan, Yong Rui
In this paper, a novel Epistemic Graph Layer (EGLayer) is introduced to enable hybrid learning, enhancing the exchange of information between deep features and a structured knowledge graph.
1 code implementation • NeurIPS 2023 • Xu Yang, Yongliang Wu, Mingzhuo Yang, Haokun Chen, Xin Geng
After discovering that Language Models (LMs) can be good in-context few-shot learners, numerous strategies have been proposed to optimize in-context sequence configurations.
1 code implementation • CVPR 2023 • Shiyu Xia, Jiaqi Lv, Ning Xu, Gang Niu, Xin Geng
Under partial-label learning (PLL) where, for each training instance, only a set of ambiguous candidate labels containing the unknown true label is accessible, contrastive learning has recently boosted the performance of PLL on vision tasks, attributed to representations learned by contrasting the same/different classes of entities.
no code implementations • 3 May 2023 • Qiufeng Wang, Xu Yang, Shuxia Lin, Jing Wang, Xin Geng
(i) Accumulating: the knowledge is accumulated during the continuous learning of an ancestry model.
no code implementations • 6 Apr 2023 • Lei Qi, Dongjia Zhao, Yinghuan Shi, Xin Geng
By exploiting the differences between local patches of an image, our proposed PBN can effectively enhance the robustness of the model's parameters.
no code implementations • 21 Mar 2023 • Zhiqiang Kou, Yuheng Jia, Jing Wang, Boyu Shi, Xin Geng
Existing LE approaches have the following problems: (i) they use logical labels to train mappings to LDs, but the supervision information is too loose, which can lead to inaccurate model predictions; (ii) they ignore feature redundancy and use the collected features directly.
2 code implementations • ICCV 2023 • Tiankai Hang, Shuyang Gu, Chen Li, Jianmin Bao, Dong Chen, Han Hu, Xin Geng, Baining Guo
Denoising diffusion models have been a mainstream approach for image generation; however, training these models often suffers from slow convergence.
Ranked #1 on Image Generation on ImageNet 256x256
no code implementations • 25 Feb 2023 • Zhiqiang Kou, Yuheng Jia, Jing Wang, Xin Geng
Previous LDL methods all assume that the LDs of the training instances are accurate.
no code implementations • 20 Feb 2023 • Yu Shi, Ning Xu, Hua Yuan, Xin Geng
Therefore, a generalized PLL named Unreliable Partial Label Learning (UPLL) is proposed, in which the true label may not be in the candidate label set.
1 code implementation • 11 Aug 2022 • Tiankai Hang, Huan Yang, Bei Liu, Jianlong Fu, Xin Geng, Baining Guo
Specifically, we propose a recurrent motion generator to extract a series of semantic and motion information from the language and feed it along with visual information to a pre-trained StyleGAN to generate high-quality frames.
no code implementations • 11 Aug 2022 • Lei Qi, Hongpeng Yang, Yinghuan Shi, Xin Geng
To address the task, we first analyze the theory of multi-domain learning, which highlights that 1) mitigating the impact of the domain gap and 2) exploiting all samples to train the model can effectively reduce the generalization error in each source domain, so as to improve the quality of pseudo-labels.
no code implementations • 2 Jun 2022 • Ning Xu, Biao Liu, Jiaqi Lv, Congyu Qiao, Xin Geng
Partial label learning (PLL) aims to train multiclass classifiers from the examples each annotated with a set of candidate labels where a fixed but unknown candidate label is correct.
1 code implementation • 1 Jun 2022 • Ning Xu, Congyu Qiao, Jiaqi Lv, Xin Geng, Min-Ling Zhang
To cope with the challenge, we investigate single-positive multi-label learning (SPMLL) where each example is annotated with only one relevant label, and show that one can successfully learn a theoretically grounded multi-label classifier for the problem.
no code implementations • 12 Apr 2022 • Lei Qi, Jiaying Shen, Jiaqi Liu, Yinghuan Shi, Xin Geng
Besides, for the label distribution of each class, we further revise it to give more and equal attention to the other domains to which the class does not belong, which can effectively reduce the domain gap across different domains and obtain domain-invariant features.
1 code implementation • 8 Apr 2022 • Congyu Qiao, Ning Xu, Xin Geng
Most existing PLL approaches assume that the incorrect labels in each training example are randomly picked as the candidate labels and model the generation process of the candidate labels in a simple way.
1 code implementation • 8 Apr 2022 • Jin Yuan, Feng Hou, Yangzhou Du, Zhongchao Shi, Xin Geng, Jianping Fan, Yong Rui
Domain adaptation (DA) tries to tackle the scenarios when the test data does not fully follow the same distribution of the training data, and multi-source domain adaptation (MSDA) is very attractive for real world applications.
1 code implementation • 8 Mar 2022 • Jin Yuan, Shikai Chen, Yao Zhang, Zhongchao Shi, Xin Geng, Jianping Fan, Yong Rui
Subsequently, we design the graph attention transformer layer to transfer this adjacency matrix to adapt to the current domain.
no code implementations • 24 Jan 2022 • Lei Qi, Lei Wang, Yinghuan Shi, Xin Geng
Different from conventional data augmentation, the proposed domain-aware mix-normalization enhances the diversity of features during training from the normalization view of the neural network, which can effectively alleviate the model's overfitting to the source domains, so as to boost the generalization capability of the model in the unseen domain.
no code implementations • 15 Jan 2022 • Wenyan Pan, Zhili Zhou, Miaogen Ling, Xin Geng, Q. M. Jonathan Wu
The objective of image manipulation detection is to identify and locate the manipulated regions in the images.
1 code implementation • 30 Nov 2021 • Lei Qi, Jiaqi Liu, Lei Wang, Yinghuan Shi, Xin Geng
A significance of our work lies in that it shows the potential of unsupervised domain generalization for person ReID and sets a strong baseline for the further research on this topic.
2 code implementations • 22 Nov 2021 • Boyu Zhang, Jiayuan Chen, Yinfei Xu, Hui Zhang, Xu Yang, Xin Geng
Traditionally, AQA is treated as a regression problem to learn the underlying mappings between videos and action scores.
Ranked #1 on Action Quality Assessment on JIGSAWS
1 code implementation • NeurIPS 2021 • Ning Xu, Congyu Qiao, Xin Geng, Min-Ling Zhang
In this paper, we consider instance-dependent PLL and assume that each example is associated with a latent label distribution constituted by the real-valued description degree of each label, representing the degree to which each label describes the feature.
1 code implementation • 12 Jun 2021 • Qiufeng Wang, Xin Geng, Shuxia Lin, Shiyu Xia, Lei Qi, Ning Xu
Moreover, the learngene, i.e., the gene for learning initialization rules of the target model, is proposed to inherit the meta-knowledge from the collective model and reconstruct a lightweight individual model on the target task.
no code implementations • 11 Jun 2021 • Jiaqi Lv, Biao Liu, Lei Feng, Ning Xu, Miao Xu, Bo An, Gang Niu, Xin Geng, Masashi Sugiyama
Partial-label learning (PLL) utilizes instances with partial labels (PLs), where a PL includes several candidate labels but only one is the true label (TL).
no code implementations • 1 Apr 2021 • Hao Yang, Youzhi Jin, Ziyin Li, Deng-Bao Wang, Lei Miao, Xin Geng, Min-Ling Zhang
During the training process, DLT records the loss value of each sample and calculates dynamic loss thresholds.
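The recorded-loss/threshold idea can be sketched as follows; the quantile-based rule and all values here are illustrative assumptions, not necessarily the paper's exact DLT computation:

```python
import numpy as np

def select_clean(losses, quantile=0.5):
    """Hypothetical sketch: treat samples whose loss falls below a
    dynamic threshold (here, a quantile of the current epoch's per-sample
    losses) as clean. The threshold adapts as the loss values change."""
    threshold = np.quantile(losses, quantile)
    return losses <= threshold

# Per-sample losses recorded during one epoch (illustrative values).
epoch_losses = np.array([0.1, 0.2, 2.5, 0.15, 3.0, 0.3])
clean_mask = select_clean(epoch_losses)
# Low-loss samples are kept; high-loss (likely mislabeled) ones are filtered.
```

Because the threshold is recomputed each epoch from the recorded losses, the selection adapts to the changing loss landscape rather than relying on a fixed cutoff.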
no code implementations • 18 Sep 2020 • Jiaqi Lv, Tianran Wu, Chenglun Peng, Yun-Peng Liu, Ning Xu, Xin Geng
In this paper, we present a compact learning (CL) framework to embed the features and labels simultaneously and with mutual guidance.
no code implementations • NeurIPS 2020 • Lei Feng, Jiaqi Lv, Bo Han, Miao Xu, Gang Niu, Xin Geng, Bo An, Masashi Sugiyama
Partial-label learning (PLL) is a multi-class classification problem, where each training example is associated with a set of candidate labels.
1 code implementation • 3 Jul 2020 • Bin-Bin Gao, Xin-Xin Liu, Hong-Yu Zhou, Jianxin Wu, Xin Geng
The effectiveness of our approach has been demonstrated on both facial age and attractiveness estimation tasks.
Ranked #1 on Attractiveness Estimation on CFD
1 code implementation • ICML 2020 • Jiaqi Lv, Miao Xu, Lei Feng, Gang Niu, Xin Geng, Masashi Sugiyama
Partial-label learning (PLL) is a typical weakly supervised learning problem, where each training instance is equipped with a set of candidate labels among which only one is the true label.
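To make the candidate-label setup concrete, here is a minimal generic PLL objective sketch (a simple classifier-consistent baseline, not the method proposed in this paper; the function name and values are illustrative):

```python
import numpy as np

def candidate_set_loss(probs, candidate_mask):
    """Negative log of the total probability mass a classifier places
    on each instance's candidate label set, averaged over instances.
    Minimizing it pushes probability mass onto the candidates, among
    which the unknown true label lies."""
    mass = (probs * candidate_mask).sum(axis=1)
    return -np.log(mass).mean()

# Two instances, four classes; each row of the mask marks the candidates.
probs = np.array([[0.7, 0.2, 0.05, 0.05],
                  [0.1, 0.1, 0.6, 0.2]])
candidates = np.array([[1, 1, 0, 0],
                       [0, 0, 1, 1]])
loss = candidate_set_loss(probs, candidates)
```

More refined PLL methods go further by weighting individual candidates (progressively identifying the true label) instead of treating the whole candidate set uniformly.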
no code implementations • 2 Aug 2019 • Lei Qi, Lei Wang, Jing Huo, Yinghuan Shi, Xin Geng, Yang Gao
To achieve the camera alignment, we develop a Multi-Camera Adversarial Learning (MCAL) to map images of different cameras into a shared subspace.
no code implementations • 14 May 2019 • Dongdong Yu, Kai Su, Xin Geng, Changhu Wang
In this paper, a novel Context-and-Spatial Aware Network (CSANet), which integrates both a Context Aware Path and Spatial Aware Path, is proposed to obtain effective features involving both context information and spatial information.
no code implementations • CVPR 2019 • Kai Su, Dongdong Yu, Zhenqi Xu, Xin Geng, Changhu Wang
Multi-person pose estimation is an important but challenging problem in computer vision.
1 code implementation • 13 Jul 2018 • Bin-Bin Gao, Hong-Yu Zhou, Jianxin Wu, Xin Geng
Age estimation performance has been greatly improved by using convolutional neural networks.
no code implementations • 26 Jun 2017 • Ruifeng Shao, Ning Xu, Xin Geng
To solve this problem, we assume that each multi-label instance is described by a vector of latent real-valued labels, which can reflect the importance of the corresponding labels.
2 code implementations • 6 Nov 2016 • Bin-Bin Gao, Chao Xing, Chen-Wei Xie, Jianxin Wu, Xin Geng
However, it is difficult to collect sufficient training images with precise labels in some domains such as apparent age estimation, head pose estimation, multi-label classification and semantic segmentation.
Ranked #1 on Head Pose Estimation on BJUT-3D
no code implementations • CVPR 2016 • Chao Xing, Xin Geng, Hui Xue
In order to learn this general model family, this paper uses a method called Logistic Boosting Regression (LogitBoost) which can be seen as an additive weighted function regression from the statistical viewpoint.
no code implementations • 26 Aug 2014 • Xin Geng
This paper proposes six working LDL algorithms in three ways: problem transformation, algorithm adaptation, and specialized algorithm design.
no code implementations • CVPR 2014 • Xin Geng, Longrun Luo
The key idea is to learn a latent preference distribution for each instance.
no code implementations • CVPR 2014 • Xin Geng, Yu Xia
Accurate ground truth pose is essential to the training of most existing head pose estimation algorithms.