1 code implementation • CVPR 2022 • Shaojie Bai, Zhengyang Geng, Yash Savani, J. Zico Kolter
Many recent state-of-the-art (SOTA) optical flow models use finite-step recurrent update operations to emulate traditional algorithms by encouraging iterative refinements toward a stable flow estimate.
Ranked #1 on Optical Flow Estimation on KITTI 2015 (train)
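The finite-step recurrent refinement described above can be pictured as iterating an update toward a fixed point. A minimal scalar sketch of that pattern, assuming a generic update `f` (the function and its arguments are illustrative, not the paper's actual flow update):

```python
def iterate_to_fixed_point(f, x, z0, max_steps=50, tol=1e-6):
    """Repeatedly apply the update f(z, x) until the estimate stops changing."""
    z = z0
    for _ in range(max_steps):
        z_next = f(z, x)
        if abs(z_next - z) < tol:  # converged to a (near-)stable estimate
            return z_next
        z = z_next
    return z
```

For a contractive update such as `f(z, x) = 0.5 * z + x`, the iteration converges to the fixed point `z* = 2x` well within the step budget.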
1 code implementation • NeurIPS 2021 • Shen Yan, Colin White, Yash Savani, Frank Hutter
While early research in neural architecture search (NAS) required extreme computational resources, the recent releases of tabular and surrogate benchmarks have greatly increased the speed and reproducibility of NAS research.
2 code implementations • NeurIPS 2020 • Colin White, Willie Neiswanger, Sam Nolen, Yash Savani
First, we formally define architecture encodings and give a theoretical characterization of the scalability of the encodings we study. Then we identify the main encoding-dependent subroutines which NAS algorithms employ, running experiments to show which encodings work best with each subroutine for many popular algorithms.
3 code implementations • NeurIPS 2020 • Yash Savani, Colin White, Naveen Sundar Govindarajulu
Intra-processing methods are designed specifically to debias large models that have been trained on a generic dataset and fine-tuned on a more specific task.
2 code implementations • 6 May 2020 • Colin White, Sam Nolen, Yash Savani
In this work, we show that (1) the simplest hill-climbing algorithm is a powerful baseline for NAS, and (2) when the noise in popular NAS benchmark datasets is reduced to a minimum, hill-climbing outperforms many popular state-of-the-art algorithms.
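The hill-climbing baseline can be sketched as plain local search over the architecture space. A toy version, assuming architectures are encoded so that a `neighbors` function enumerates single-edit mutations (the function names and the greedy best-neighbor rule are illustrative, not the paper's exact procedure):

```python
def hill_climb(init_arch, neighbors, evaluate, max_iters=100):
    """Greedy local search: move to the best neighbor until none improves."""
    current = init_arch
    current_score = evaluate(current)
    for _ in range(max_iters):
        best = max(neighbors(current), key=evaluate)
        best_score = evaluate(best)
        if best_score <= current_score:  # local optimum reached
            break
        current, current_score = best, best_score
    return current, current_score
```

On a toy space of bit-vector "architectures" with single-bit-flip neighbors and accuracy equal to the number of ones, the search climbs from the all-zeros vector to the all-ones optimum.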
3 code implementations • 25 Oct 2019 • Colin White, Willie Neiswanger, Yash Savani
Bayesian optimization (BO), which has long had success in hyperparameter optimization, has recently emerged as a very promising strategy for NAS when it is coupled with a neural predictor.
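The strategy described, a BO-style loop with a learned predictor as the surrogate, can be sketched as follows; here ensemble disagreement stands in for predictive uncertainty, and all names (`train_predictor`, `acquisition`) and batch sizes are assumptions for illustration rather than the paper's actual design:

```python
import random
import statistics

def bo_nas_loop(candidates, train_predictor, evaluate,
                init_size=10, rounds=5, batch=5):
    """BO-style search: fit a predictor ensemble on evaluated architectures,
    then evaluate the candidates the acquisition function ranks best."""
    evaluated = {a: evaluate(a) for a in random.sample(candidates, init_size)}
    for _ in range(rounds):
        ensemble = [train_predictor(evaluated) for _ in range(3)]
        pool = [a for a in candidates if a not in evaluated]

        def acquisition(a):
            # Lower is better: predicted loss minus an exploration bonus
            # derived from ensemble disagreement.
            preds = [model(a) for model in ensemble]
            return statistics.mean(preds) - statistics.stdev(preds)

        for a in sorted(pool, key=acquisition)[:batch]:
            evaluated[a] = evaluate(a)
    return min(evaluated, key=evaluated.get)
```

With a near-exact toy predictor, the loop concentrates its evaluation budget around the true optimum of the objective.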
no code implementations • 25 Sep 2019 • Colin White, Willie Neiswanger, Yash Savani
We develop a path-based encoding scheme to featurize the neural architectures that are used to train the neural network model.
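One plausible reading of such a path-based encoding, for a cell represented as a small DAG of operations: enumerate the operation sequences along each input-to-output path and set one bit per possible sequence. The data layout (`edges` and `ops` dictionaries) and the path-length cutoff are assumptions for illustration:

```python
from itertools import product

def enumerate_paths(edges, ops, source, sink):
    """Return the op-sequence along every directed path from source to sink.
    `edges` maps node -> successor nodes; `ops` maps (u, v) -> op name."""
    paths = []
    def dfs(node, seq):
        if node == sink:
            paths.append(tuple(seq))
            return
        for nxt in edges.get(node, []):
            dfs(nxt, seq + [ops[(node, nxt)]])
    dfs(source, [])
    return paths

def path_encoding(edges, ops, source, sink, op_set, max_len):
    """Binary feature vector: one bit per possible op-sequence up to
    max_len, set to 1 iff that path appears in the cell."""
    universe = [seq for length in range(1, max_len + 1)
                for seq in product(op_set, repeat=length)]
    present = set(enumerate_paths(edges, ops, source, sink))
    return [1 if seq in universe and seq in present else 0 for seq in universe]
```

For a three-node cell with edges 0→1→2 and 0→2 and ops `conv`, `pool`, `skip`, the two paths (`skip` and `conv→pool`) set exactly two bits of the 12-dimensional vector.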