1 code implementation • 29 Apr 2024 • Hong Nguyen, Hoang Nguyen, Melinda Chang, Hieu Pham, Shrikanth Narayanan, Michael Pazzani
Understanding the severity of conditions shown in medical images is crucial, serving as a key guide for clinical assessment and treatment, as well as for evaluating longitudinal progression.
2 code implementations • NeurIPS 2023 • Sang Michael Xie, Hieu Pham, Xuanyi Dong, Nan Du, Hanxiao Liu, Yifeng Lu, Percy Liang, Quoc V. Le, Tengyu Ma, Adams Wei Yu
The mixture proportions of pretraining data domains (e.g., Wikipedia, books, web text) greatly affect language model (LM) performance.
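The central object here is the vector of domain mixture proportions. A minimal Python sketch of weight-driven sampling follows; the toy corpora and the specific weights are hypothetical, and DoReMi itself *learns* the proportions with a small proxy model rather than fixing them by hand:

```python
import random

def sample_batch(domains, weights, batch_size, rng=None):
    """Draw a pretraining batch by first sampling a domain according
    to the mixture weights, then an example from that domain."""
    rng = rng or random.Random(0)
    names = list(domains)
    probs = [weights[n] for n in names]
    batch = []
    for _ in range(batch_size):
        name = rng.choices(names, weights=probs, k=1)[0]
        batch.append((name, rng.choice(domains[name])))
    return batch

# hypothetical toy corpora standing in for the real pretraining domains
domains = {
    "wikipedia": [f"wiki-{i}" for i in range(100)],
    "books": [f"book-{i}" for i in range(100)],
    "web": [f"web-{i}" for i in range(100)],
}
weights = {"wikipedia": 0.2, "books": 0.3, "web": 0.5}
batch = sample_batch(domains, weights, batch_size=1000)
```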
no code implementations • 18 Apr 2023 • Tue M. Cao, Nhat H. Tran, Phi Le Nguyen, Hieu Pham
This work discusses the use of contrastive learning and deep learning for diagnosing cardiovascular diseases from electrocardiography (ECG) signals.
1 code implementation • 12 Mar 2023 • Nguyen Tuan, Phi Nguyen, Dai Tran, Hung Pham, Quang Nguyen, Thanh Le, Hanh Van, Bach Do, Phuong Tran, Vinh Le, Thuy Nguyen, Long Tran, Hieu Pham
The proposed approach demonstrated excellent performance in detecting MI.
no code implementations • NeurIPS 2023 • Xiangning Chen, Chen Liang, Da Huang, Esteban Real, Kaiyuan Wang, Yao Liu, Hieu Pham, Xuanyi Dong, Thang Luong, Cho-Jui Hsieh, Yifeng Lu, Quoc V. Le
On diffusion models, Lion outperforms Adam by achieving a better FID score and reducing the training compute by up to 2.3x.
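Lion's update rule is compact enough to state in a few lines: the step direction is the sign of an interpolation between the momentum and the current gradient, with decoupled weight decay. A NumPy sketch, with illustrative (not the paper's tuned) hyperparameters:

```python
import numpy as np

def lion_step(theta, grad, m, lr=1e-2, beta1=0.9, beta2=0.99, wd=0.0):
    """One Lion update. Unlike Adam, the step magnitude is uniform
    across coordinates: only the sign of the interpolated momentum
    is used, plus decoupled weight decay."""
    update = np.sign(beta1 * m + (1 - beta1) * grad)
    theta = theta - lr * (update + wd * theta)
    m = beta2 * m + (1 - beta2) * grad  # momentum is refreshed after the step
    return theta, m

# smoke test: minimize f(theta) = ||theta||^2, whose gradient is 2 * theta
theta = np.array([5.0, -3.0])
m = np.zeros_like(theta)
for _ in range(2000):
    theta, m = lion_step(theta, 2 * theta, m)
```

Because the update is a pure sign, the iterates oscillate within roughly one learning rate of the minimum rather than converging exactly, which is the expected behavior.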
1 code implementation • 27 Dec 2022 • Yingtian Zou, Vikas Verma, Sarthak Mittal, Wai Hoh Tang, Hieu Pham, Juho Kannala, Yoshua Bengio, Arno Solin, Kenji Kawaguchi
Mixup is a popular data augmentation technique for training deep neural networks where additional samples are generated by linearly interpolating pairs of inputs and their labels.
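The interpolation that defines mixup is a one-liner; a minimal NumPy sketch (alpha = 0.2 is a common choice, and labels are assumed one-hot):

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.2, rng=None):
    """Mixup: blend a pair of inputs and their one-hot labels with a
    coefficient drawn from Beta(alpha, alpha)."""
    rng = rng or np.random.default_rng(0)
    lam = rng.beta(alpha, alpha)
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2

x1, y1 = np.full(4, 1.0), np.array([1.0, 0.0])
x2, y2 = np.full(4, 3.0), np.array([0.0, 1.0])
x, y = mixup(x1, y1, x2, y2)  # x lies between x1 and x2; y remains a distribution
```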
no code implementations • 19 Nov 2021 • Hieu Pham, Zihang Dai, Golnaz Ghiasi, Kenji Kawaguchi, Hanxiao Liu, Adams Wei Yu, Jiahui Yu, Yi-Ting Chen, Minh-Thang Luong, Yonghui Wu, Mingxing Tan, Quoc V. Le
Second, while increasing the dataset size and the model size has been the de facto method to improve the performance of deep learning models like BASIC, the effect of a large contrastive batch size on such contrastive-trained image-text models is not well understood.
no code implementations • 2 Aug 2021 • Saba Moeinizade, Hieu Pham, Ye Han, Austin Dobbels, Guiping Hu
Specifically for soybeans, identifying their relative maturity is a vital piece of information used for advancement decisions.
no code implementations • 17 Mar 2021 • Saeed Khaki, Nima Safaei, Hieu Pham, Lizhi Wang
To help mitigate this data collection bottleneck in wheat breeding, we propose a novel deep learning framework to accurately and efficiently count wheat heads to aid in the gathering of real-time data for decision making.
1 code implementation • ICLR 2021 • Hieu Pham, Xinyi Wang, Yiming Yang, Graham Neubig
Back-translation is an effective strategy to improve the performance of Neural Machine Translation~(NMT) by generating pseudo-parallel data.
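The mechanics of back-translation reduce to pairing real target sentences with machine-generated sources. A schematic sketch; the reverse "model" below is a stand-in lambda, not a real NMT system:

```python
def back_translate(target_mono, reverse_model):
    """Back-translation: a target->source model converts monolingual
    target sentences into synthetic sources, producing pseudo-parallel
    (synthetic_source, real_target) pairs to train the forward model."""
    return [(reverse_model(t), t) for t in target_mono]

# stand-in for a trained target->source translator
reverse_model = lambda s: " ".join(reversed(s.split()))
pairs = back_translate(["das ist gut", "hallo welt"], reverse_model)
```

The real target side stays clean while the synthetic source side absorbs the model's noise, which is why the pairs are useful for training the forward direction.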
4 code implementations • 11 Feb 2021 • Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu Pham, Quoc V. Le, YunHsuan Sung, Zhen Li, Tom Duerig
In this paper, we leverage a noisy dataset of over one billion image alt-text pairs, obtained without the expensive filtering or post-processing steps used in the Conceptual Captions dataset.
Ranked #1 on Image Classification on VTAB-1k (using extra training data)
1 code implementation • 5 Jan 2021 • Hieu Pham, Quoc V. Le
As a result, these conventional methods are less effective than methods that leverage structure, such as SpatialDropout and DropBlock, which randomly drop the values in certain contiguous areas of the hidden states, setting them to zero.
Ranked #1 on Image Classification on CIFAR-10, 4000 Labels
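The distinction the abstract draws, dropping structured units rather than independent scalars, can be sketched in NumPy. This is SpatialDropout-style channel masking only; the rescaling by 1 / (1 - p) used in practice is omitted for brevity:

```python
import numpy as np

def spatial_dropout(x, p, rng=None):
    """Zero entire feature maps (channels) of a (C, H, W) tensor,
    rather than independent scalar values as in standard dropout."""
    rng = rng or np.random.default_rng(0)
    keep = (rng.random(x.shape[0]) >= p).astype(x.dtype)
    return x * keep[:, None, None]

out = spatial_dropout(np.ones((8, 4, 4)), p=0.5)
# each channel is either entirely zeroed or left untouched
```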
no code implementations • 5 Dec 2020 • Saeed Khaki, Hieu Pham, Lizhi Wang
We propose a model that predicts the yield of multiple crops while concurrently considering the interactions between their yields.
no code implementations • 9 Nov 2020 • Vikas Verma, Minh-Thang Luong, Kenji Kawaguchi, Hieu Pham, Quoc V. Le
Despite recent success, most contrastive self-supervised learning methods are domain-specific, relying heavily on data augmentation techniques that require knowledge about a particular domain, such as image cropping and rotation.
no code implementations • 30 Oct 2020 • Arissa Wongpanich, Hieu Pham, James Demmel, Mingxing Tan, Quoc Le, Yang You, Sameer Kumar
EfficientNets are a family of state-of-the-art image classification models based on efficiently scaled convolutional neural networks.
no code implementations • 23 Oct 2020 • Saeed Khaki, Hieu Pham, Ye Han, Wade Kent, Lizhi Wang
The future landscape of modern farming and plant breeding is rapidly changing due to the complex needs of our society.
no code implementations • 24 Sep 2020 • Lawrence Mosley, Hieu Pham, Yogesh Bansal, Eric Hare
Modern trends in digital agriculture have seen a shift towards artificial intelligence for crop quality assessment and yield estimation.
no code implementations • 20 Jul 2020 • Saeed Khaki, Hieu Pham, Ye Han, Andy Kuhl, Wade Kent, Lizhi Wang
In this paper, we propose a novel deep learning method for counting on-ear corn kernels in-field to aid in the gathering of real-time data and, ultimately, to improve decision making to maximize yield.
no code implementations • 26 Mar 2020 • Saeed Khaki, Hieu Pham, Ye Han, Andy Kuhl, Wade Kent, Lizhi Wang
The sliding window approach uses a convolutional neural network (CNN) for kernel detection.
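The sliding-window half of the pipeline is straightforward to sketch; each crop would be fed to the trained CNN classifier (not shown), and the window and stride sizes here are purely illustrative:

```python
import numpy as np

def sliding_window(image, window, stride):
    """Yield (row, col, patch) crops of a 2-D image; each patch would
    be scored by the CNN for the presence of a kernel."""
    h, w = image.shape
    for r in range(0, h - window + 1, stride):
        for c in range(0, w - window + 1, stride):
            yield r, c, image[r:r + window, c:c + window]

patches = list(sliding_window(np.zeros((32, 32)), window=8, stride=4))
```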
9 code implementations • CVPR 2021 • Hieu Pham, Zihang Dai, Qizhe Xie, Minh-Thang Luong, Quoc V. Le
We present Meta Pseudo Labels, a semi-supervised learning method that achieves a new state-of-the-art top-1 accuracy of 90.2% on ImageNet, which is 1.6% better than the existing state-of-the-art.
1 code implementation • ICML 2020 • Xinyi Wang, Hieu Pham, Paul Michel, Antonios Anastasopoulos, Jaime Carbonell, Graham Neubig
To acquire a new skill, humans learn better and faster if a tutor, based on their current knowledge level, informs them of how much attention they should pay to particular content or practice problems.
no code implementations • 25 Sep 2019 • Hieu Pham, Quoc V. Le
Recent semi-supervised learning (SSL) methods often have a teacher to train a student in order to propagate labels from labeled data to unlabeled data.
1 code implementation • 14 Aug 2019 • Mohsen Shahhosseini, Guiping Hu, Hieu Pham
To this end, an optimization-based nested algorithm that considers tuning hyperparameters as well as finding the optimal weights to combine ensembles (Generalized Weighted Ensemble with Internally Tuned Hyperparameters (GEM-ITH)) is designed.
1 code implementation • ICLR 2019 • Xinyi Wang, Hieu Pham, Philip Arthur, Graham Neubig
Multilingual training of neural machine translation (NMT) systems has led to impressive accuracy improvements on low-resource languages.
1 code implementation • EMNLP 2018 • Xinyi Wang, Hieu Pham, Pengcheng Yin, Graham Neubig
Recent advances in Neural Machine Translation (NMT) show that adding syntactic information to NMT systems can improve the quality of their translations.
no code implementations • EMNLP 2018 • Xinyi Wang, Hieu Pham, Zihang Dai, Graham Neubig
In this work, we examine methods for data augmentation for text-based tasks such as neural machine translation (NMT).
no code implementations • ICML 2018 • Hieu Pham, Melody Guan, Barret Zoph, Quoc Le, Jeff Dean
We propose Efficient Neural Architecture Search (ENAS), a fast and inexpensive approach for automatic model design.
Ranked #32 on Neural Architecture Search on NAS-Bench-201, CIFAR-10
28 code implementations • 9 Feb 2018 • Hieu Pham, Melody Y. Guan, Barret Zoph, Quoc V. Le, Jeff Dean
The controller is trained with policy gradient to select a subgraph that maximizes the expected reward on the validation set.
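The policy-gradient training loop can be illustrated with a toy controller: a categorical distribution over candidate operations, updated by REINFORCE with a moving-average baseline. The reward here is a synthetic stand-in for validation accuracy, not ENAS's actual search space:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

logits = np.zeros(4)          # controller policy over 4 candidate ops
baseline = 0.0
for _ in range(2000):
    probs = softmax(logits)
    a = rng.choice(4, p=probs)
    reward = 1.0 if a == 2 else 0.0           # stand-in for validation reward
    baseline = 0.9 * baseline + 0.1 * reward  # variance-reducing baseline
    grad_logp = -probs
    grad_logp[a] += 1.0                       # grad of log pi(a) w.r.t. logits
    logits += 0.1 * (reward - baseline) * grad_logp  # REINFORCE ascent
```

After training, the controller's probability mass concentrates on the rewarded operation, mirroring how ENAS's controller learns to select high-accuracy subgraphs.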
no code implementations • ICLR 2018 • Hieu Pham, Melody Y. Guan, Barret Zoph, Quoc V. Le, Jeff Dean
We propose Efficient Neural Architecture Search (ENAS), a faster and less expensive approach to automated model design than previous methods.
no code implementations • ICLR 2018 • Azalia Mirhoseini, Anna Goldie, Hieu Pham, Benoit Steiner, Quoc V. Le, Jeff Dean
We introduce a hierarchical model for efficient placement of computational graphs onto hardware devices, especially in heterogeneous environments with a mixture of CPUs, GPUs, and other computational devices.
1 code implementation • ICML 2017 • Azalia Mirhoseini, Hieu Pham, Quoc V. Le, Benoit Steiner, Rasmus Larsen, Yuefeng Zhou, Naveen Kumar, Mohammad Norouzi, Samy Bengio, Jeff Dean
Key to our method is the use of a sequence-to-sequence model to predict which subsets of operations in a TensorFlow graph should run on which of the available devices.
10 code implementations • 29 Nov 2016 • Irwan Bello, Hieu Pham, Quoc V. Le, Mohammad Norouzi, Samy Bengio
Despite the computational expense, without much engineering or heuristic design, Neural Combinatorial Optimization achieves close to optimal results on 2D Euclidean graphs with up to 100 nodes.
47 code implementations • EMNLP 2015 • Minh-Thang Luong, Hieu Pham, Christopher D. Manning
Our ensemble model using different attention architectures has established a new state-of-the-art result in the WMT'15 English to German translation task with 25.9 BLEU points, an improvement of 1.0 BLEU points over the existing best system backed by NMT and an n-gram reranker.