1 code implementation • 20 Jul 2023 • Wenwei Gu, Jinyang Liu, Zhuangbin Chen, Jianping Zhang, Yuxin Su, Jiazhen Gu, Cong Feng, Zengyin Yang, Michael Lyu
Performance issues permeate large-scale cloud service systems and can lead to huge revenue losses.
no code implementations • 23 May 2023 • Wenxuan Wang, Jingyuan Huang, Chang Chen, Jiazhen Gu, Jianping Zhang, Weibin Wu, Pinjia He, Michael Lyu
To this end, content moderation software has been widely deployed on these platforms to detect and block toxic content.
1 code implementation • 21 May 2023 • Yuxuan Wan, Wenxuan Wang, Pinjia He, Jiazhen Gu, Haonan Bai, Michael Lyu
In particular, it is hard to generate inputs that can comprehensively trigger potential bias, due to the lack of data containing both social groups and biased properties.
no code implementations • 15 Mar 2023 • Haoran Wu, Wenxuan Wang, Yuxuan Wan, Wenxiang Jiao, Michael Lyu
ChatGPT is a cutting-edge artificial intelligence language model developed by OpenAI, which has attracted a lot of attention due to its surprisingly strong ability to answer follow-up questions.
1 code implementation • 11 Feb 2023 • Wenxuan Wang, Jen-tse Huang, Weibin Wu, Jianping Zhang, Yizhan Huang, Shuqing Li, Pinjia He, Michael Lyu
In addition, we leverage the test cases generated by MTTM to retrain the model we explored, which largely improves model robustness (0% to 5.9% EFR) while maintaining the accuracy on the original test set.
no code implementations • ACL 2022 • Wenxuan Wang, Wenxiang Jiao, Yongchang Hao, Xing Wang, Shuming Shi, Zhaopeng Tu, Michael Lyu
In this paper, we present a substantial step in better understanding the SOTA sequence-to-sequence (Seq2Seq) pretraining for neural machine translation (NMT).
1 code implementation • 16 Jun 2021 • Xianghong Fang, Haoli Bai, Jian Li, Zenglin Xu, Michael Lyu, Irwin King
We further design discrete latent space for the variational attention and mathematically show that our model is free from posterior collapse.
1 code implementation • ACL 2021 • Haoli Bai, Wei Zhang, Lu Hou, Lifeng Shang, Jing Jin, Xin Jiang, Qun Liu, Michael Lyu, Irwin King
In this paper, we propose BinaryBERT, which pushes BERT quantization to the limit by weight binarization.
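Weight binarization replaces each full-precision weight with a one-bit sign plus a shared scaling factor. A minimal sketch of a common binarization scheme (sign times mean absolute value, as in binary-weight networks; BinaryBERT's exact quantizer and training procedure may differ):

```python
import numpy as np

def binarize_weights(w):
    """Binarize a weight matrix to {-alpha, +alpha}.

    alpha is the mean absolute value of the weights, which minimizes
    the L2 error of approximating w by alpha * sign(w).
    """
    alpha = np.abs(w).mean()
    return alpha * np.sign(w)

# Toy example: a 2x2 weight matrix
w = np.array([[0.3, -1.2],
              [0.5, -0.1]])
wb = binarize_weights(w)  # every entry is +/- 0.525 (mean |w|)
```

Each binarized entry carries only its sign at inference time; the scalar `alpha` is stored once per matrix, which is what pushes storage toward one bit per weight.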
1 code implementation • NeurIPS 2020 • Jiaxing Wang, Haoli Bai, Jiaxiang Wu, Xupeng Shi, Junzhou Huang, Irwin King, Michael Lyu, Jian Cheng
Nevertheless, it is unclear how parameter sharing affects the searching process.
1 code implementation • NAACL 2021 • Yongchang Hao, Shilin He, Wenxiang Jiao, Zhaopeng Tu, Michael Lyu, Xing Wang
In addition, experimental results demonstrate that our Multi-Task NAT is complementary to knowledge distillation, the standard knowledge transfer method for NAT.
no code implementations • 9 Oct 2020 • Pengpeng Liu, Xintong Han, Michael Lyu, Irwin King, Jia Xu
We present a self-supervised learning approach to learning monocular 3D face reconstruction with a pose guidance network (PGN).
1 code implementation • ACL 2020 • Yifan Gao, Chien-Sheng Wu, Shafiq Joty, Caiming Xiong, Richard Socher, Irwin King, Michael Lyu, Steven C. H. Hoi
The goal of conversational machine reading is to answer user questions given a knowledge base text which may require asking clarification questions.
no code implementations • 21 Apr 2020 • Xianghong Fang, Haoli Bai, Zenglin Xu, Michael Lyu, Irwin King
Variational autoencoders have been widely applied to natural language generation; however, two long-standing problems remain: information under-representation and posterior collapse.
1 code implementation • CVPR 2020 • Pengpeng Liu, Irwin King, Michael Lyu, Jia Xu
In this paper, we propose a unified method to jointly learn optical flow and stereo matching.
1 code implementation • 21 Nov 2019 • Haoli Bai, Jiaxiang Wu, Irwin King, Michael Lyu
The core challenge of few-shot network compression lies in the high estimation errors from the original network during inference, since the compressed network can easily overfit the few training instances.
no code implementations • 5 Sep 2019 • Jiazhen Gu, Huanlin Xu, Yangfan Zhou, Xin Wang, Hui Xu, Michael Lyu
Deep neural networks (DNNs) are shown to be promising solutions in many challenging artificial intelligence tasks.
1 code implementation • CVPR 2019 • Pengpeng Liu, Michael Lyu, Irwin King, Jia Xu
We present a self-supervised learning approach for optical flow.
Ranked #7 on Optical Flow Estimation on KITTI 2012
1 code implementation • 22 May 2017 • Yuxin Su, Irwin King, Michael Lyu
First, we design a concept called the "ideal candidate document" to introduce a metric learning algorithm into the query-independent model.
no code implementations • 11 Nov 2016 • Guangxi Li, Zenglin Xu, Linnan Wang, Jinmian Ye, Irwin King, Michael Lyu
Probabilistic Temporal Tensor Factorization (PTTF) is an effective algorithm to model the temporal tensor data.
no code implementations • NeurIPS 2009 • Zenglin Xu, Rong Jin, Jianke Zhu, Irwin King, Michael Lyu, Zhirong Yang
In this framework, SVM and TSVM can be regarded as a learning machine without regularization and one with full regularization from the unlabeled data, respectively.
no code implementations • NeurIPS 2008 • Haixuan Yang, Irwin King, Michael Lyu
Regularized Least Squares (RLS) algorithms have the ability to avoid over-fitting problems and to express solutions as kernel expansions.
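By the representer theorem, the RLS solution can be written as a kernel expansion f(x) = Σᵢ αᵢ k(x, xᵢ), with coefficients obtained from a regularized linear system. A minimal kernel ridge regression sketch (the function and variable names are illustrative, not from the paper):

```python
import numpy as np

def rls_fit(X, y, lam, kernel):
    """Solve (K + lam * I) alpha = y for the expansion coefficients."""
    K = np.array([[kernel(xi, xj) for xj in X] for xi in X])
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

def rls_predict(X_train, alpha, kernel, x):
    """Evaluate the kernel expansion f(x) = sum_i alpha_i * k(x, x_i)."""
    return sum(a * kernel(x, xi) for a, xi in zip(alpha, X_train))

# Toy 1-D example with an RBF kernel
rbf = lambda a, b: np.exp(-np.sum((np.asarray(a) - np.asarray(b)) ** 2))
X = [[0.0], [1.0], [2.0]]
y = np.array([0.0, 1.0, 4.0])
alpha = rls_fit(X, y, 1e-3, rbf)
pred = rls_predict(X, alpha, rbf, [1.0])  # close to 1.0 for small lam
```

The regularizer `lam` trades training fit against smoothness, which is how RLS avoids over-fitting while keeping the solution in the span of kernel functions on the training points.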
no code implementations • NeurIPS 2008 • Zenglin Xu, Rong Jin, Irwin King, Michael Lyu
We consider the problem of multiple kernel learning (MKL), which can be formulated as a convex-concave problem.
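The standard convex-concave form of MKL (shown here for illustration; the paper's exact formulation may differ) learns a convex combination of base kernels jointly with the SVM dual variables:

```latex
\min_{\mu \in \Delta} \; \max_{\alpha} \;
  \sum_{i} \alpha_i
  - \frac{1}{2} \sum_{i,j} \alpha_i \alpha_j \, y_i y_j
    \sum_{k} \mu_k \, K_k(x_i, x_j)
\quad \text{s.t.} \quad
  0 \le \alpha_i \le C, \;\; \sum_i \alpha_i y_i = 0,
```

where $\Delta = \{\mu : \mu_k \ge 0, \sum_k \mu_k = 1\}$ is the simplex over kernel weights. The objective is convex in $\mu$ and concave in $\alpha$, which is what makes the min-max problem well posed.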
no code implementations • NeurIPS 2007 • Zenglin Xu, Rong Jin, Jianke Zhu, Irwin King, Michael Lyu
We consider the problem of Support Vector Machine transduction, which involves a combinatorial problem with exponential computational complexity in the number of unlabeled examples.