1 code implementation • COLING 2022 • Li-Ming Zhan, Haowen Liang, Lu Fan, Xiao-Ming Wu, Albert Y.S. Lam
Comprehensive experiments on three real-world intent detection benchmark datasets demonstrate the effectiveness of our proposed approach and its potential to improve state-of-the-art methods for few-shot OOD intent detection.
2 code implementations • 9 Apr 2024 • Li-Ming Zhan, Bo Liu, Xiao-Ming Wu
Out-of-distribution (OOD) detection plays a crucial role in ensuring the safety and reliability of deep neural networks in various applications.
1 code implementation • ACL 2022 • Yuwei Zhang, Haode Zhang, Li-Ming Zhan, Xiao-Ming Wu, Albert Y. S. Lam
Existing approaches typically rely on a large number of labeled utterances and employ pseudo-labeling methods for representation learning and clustering, which are label-intensive, inefficient, and inaccurate.
1 code implementation • NeurIPS 2021 • Guangyuan Shi, Jiaxin Chen, Wenlong Zhang, Li-Ming Zhan, Xiao-Ming Wu
Our study shows that existing methods severely suffer from catastrophic forgetting, a well-known problem in incremental learning, which is aggravated by data scarcity and imbalance in the few-shot setting.
Ranked #7 on Few-Shot Class-Incremental Learning on mini-ImageNet
no code implementations • Findings (EMNLP) 2021 • Haode Zhang, Yuwei Zhang, Li-Ming Zhan, Jiaxin Chen, Guangyuan Shi, Xiao-Ming Wu, Albert Y. S. Lam
This paper investigates the effectiveness of pre-training for few-shot intent classification.
1 code implementation • ICML Workshop AutoML 2021 • Jiaxin Chen, Li-Ming Zhan, Xiao-Ming Wu, Fu-Lai Chung
Many meta-learning algorithms can be formulated into an interleaved process, in the sense that task-specific predictors are learned during inner-task adaptation and meta-parameters are updated during meta-update.
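The interleaved structure described above can be illustrated with a minimal, MAML-style sketch on toy 1-D linear-regression tasks; this is a generic first-order illustration of inner-task adaptation and meta-update, not the algorithm proposed in the paper, and all function names here are hypothetical.

```python
def task_loss(w, data):
    # Mean squared error of the linear predictor y_hat = w * x.
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

def loss_grad(w, data):
    # Gradient of the task loss with respect to the predictor weight w.
    return sum(2 * (w * x - y) * x for x, y in data) / len(data)

def inner_adapt(meta_w, data, lr=0.1, steps=3):
    # Inner-task adaptation: learn a task-specific predictor,
    # starting from the shared meta-initialization.
    w = meta_w
    for _ in range(steps):
        w -= lr * loss_grad(w, data)
    return w

def meta_train(tasks, meta_w=0.0, meta_lr=0.05, epochs=50):
    # Meta-update: move the meta-parameters toward initializations that
    # adapt well (first-order shortcut: reuse the post-adaptation gradient).
    for _ in range(epochs):
        for data in tasks:
            w = inner_adapt(meta_w, data)
            meta_w -= meta_lr * loss_grad(w, data)
    return meta_w

# Toy task family: y = a * x with slopes a near 2.0.
tasks = [[(x, a * x) for x in (-1.0, 0.5, 1.0)] for a in (1.8, 2.0, 2.2)]
meta_w = meta_train(tasks)
```

After meta-training, `meta_w` lands near the task family's average slope, so a few inner steps suffice to fit any individual task.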
no code implementations • ACL 2021 • Li-Ming Zhan, Haowen Liang, Bo Liu, Lu Fan, Xiao-Ming Wu, Albert Y. S. Lam
Since the distribution of outlier utterances is arbitrary and unknown at training time, existing methods commonly rely on strong assumptions about the data distribution, such as a mixture of Gaussians, to perform inference. This results in either complex multi-step training procedures or hand-crafted rules, such as confidence-threshold selection, for outlier detection.
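As a concrete example of the kind of hand-crafted rule criticized above, a common baseline flags an utterance as out-of-distribution when the classifier's maximum softmax probability falls below a manually chosen threshold; this is a generic sketch of that baseline, not the paper's method, and the threshold value is an arbitrary assumption.

```python
import math

def softmax(logits):
    # Numerically stable softmax over a list of logits.
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    s = sum(exps)
    return [e / s for e in exps]

def msp_is_ood(logits, threshold=0.7):
    # Maximum-softmax-probability rule: treat the input as an outlier
    # when the top class confidence falls below a hand-picked threshold.
    return max(softmax(logits)) < threshold

in_scope = [4.0, 0.5, 0.2]   # peaked logits: confident in-scope utterance
outlier = [1.1, 1.0, 0.9]    # near-uniform logits: likely OOD utterance
```

The threshold must be tuned per dataset, which is exactly the brittleness such hand-crafted rules introduce.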
2 code implementations • 18 Feb 2021 • Bo Liu, Li-Ming Zhan, Li Xu, Lin Ma, Yan Yang, Xiao-Ming Wu
We show that SLAKE can be used to facilitate the development and evaluation of Med-VQA systems.
1 code implementation • NeurIPS 2020 • Jiaxin Chen, Xiao-Ming Wu, Yanke Li, Qimai Li, Li-Ming Zhan, Fu-Lai Chung
The support/query (S/Q) episodic training strategy has been widely used in modern meta-learning algorithms and is believed to improve their generalization ability to test environments.
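The S/Q episodic strategy can be sketched as follows: each training episode samples N classes, a K-shot support set per class for adaptation, and a disjoint query set for evaluating the adapted learner. This is a generic illustration of episodic sampling, assuming a simple class-indexed data pool; the helper names are hypothetical.

```python
import random

def sample_episode(data_by_class, n_way, k_shot, q_query, rng):
    # Sample one N-way episode: a K-shot support set for inner-task
    # adaptation and a disjoint query set for the outer evaluation.
    classes = rng.sample(sorted(data_by_class), n_way)
    support, query = [], []
    for c in classes:
        examples = rng.sample(data_by_class[c], k_shot + q_query)
        support += [(x, c) for x in examples[:k_shot]]
        query += [(x, c) for x in examples[k_shot:]]
    return support, query

rng = random.Random(0)
# Toy pool: 5 classes with 20 (integer-valued) examples each.
pool = {c: list(range(c * 100, c * 100 + 20)) for c in range(5)}
support, query = sample_episode(pool, n_way=3, k_shot=5, q_query=2, rng=rng)
```

The disjointness of support and query within an episode is what simulates a train/test split at every training step.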
1 code implementation • 26 Dec 2019 • Jiaxin Chen, Li-Ming Zhan, Xiao-Ming Wu, Fu-Lai Chung
In this paper, we recast metric-based meta-learning from a Bayesian perspective and develop a variational metric scaling framework for learning a proper metric scaling parameter.
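To make the role of a metric scaling parameter concrete, here is a prototypical-network-style sketch in which class probabilities come from a softmax over negative scaled squared Euclidean distances to class prototypes; the variational treatment of the scale is the paper's contribution and is not reproduced here, so alpha is shown simply as a free parameter.

```python
import math

def prototype(examples):
    # Class prototype: the mean of the support embeddings.
    dim = len(examples[0])
    return [sum(e[i] for e in examples) / len(examples) for i in range(dim)]

def scaled_metric_probs(query, prototypes, alpha):
    # Metric-based classification with scaling parameter alpha:
    # p(c | x) proportional to exp(-alpha * ||x - proto_c||^2).
    logits = [-alpha * sum((q - p) ** 2 for q, p in zip(query, proto))
              for proto in prototypes]
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    s = sum(exps)
    return [e / s for e in exps]

protos = [prototype([[0.0, 0.0], [0.2, -0.2]]),
          prototype([[2.0, 2.0], [1.8, 2.2]])]
sharp = scaled_metric_probs([0.1, 0.1], protos, alpha=5.0)
flat = scaled_metric_probs([0.1, 0.1], protos, alpha=0.1)
```

A larger alpha sharpens the distance-based softmax; choosing it properly, rather than hand-tuning, is what motivates learning the scale.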