Search Results for author: Wen-Hsuan Chu

Found 4 papers, 1 paper with code

DreamScene4D: Dynamic Multi-Object Scene Generation from Monocular Videos

1 code implementation • 3 May 2024 • Wen-Hsuan Chu, Lei Ke, Katerina Fragkiadaki

There are two challenges in this direction: first, rendering error gradients are often insufficient to recover fast object motion, and second, view-predictive generative models work much better for objects than for whole scenes, so score distillation objectives cannot currently be applied directly at the scene level.

Depth Estimation • Depth Prediction • +4
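
For context on the score distillation objective mentioned in the abstract, below is a minimal PyTorch sketch of a DreamFusion-style score distillation sampling (SDS) gradient. This is an illustrative assumption, not code from the DreamScene4D implementation; the `denoiser` stand-in for a pre-trained view-predictive diffusion prior is a hypothetical placeholder.

```python
import torch

def sds_grad(rendered, denoiser, alphas_cumprod):
    """Score distillation sampling gradient w.r.t. a rendered image (prior kept frozen)."""
    b = rendered.shape[0]
    t = torch.randint(0, alphas_cumprod.numel(), (b,), device=rendered.device)
    a = alphas_cumprod[t].view(b, 1, 1, 1)
    noise = torch.randn_like(rendered)
    x_t = a.sqrt() * rendered + (1.0 - a).sqrt() * noise   # diffuse the render to step t
    with torch.no_grad():
        eps_pred = denoiser(x_t, t)                         # frozen prior's noise estimate
    w = 1.0 - a                                             # a common SDS weighting choice
    return w * (eps_pred - noise)                           # gradient pushed through the renderer

# Toy usage with a dummy "denoiser" so the snippet runs on its own.
dummy_denoiser = lambda x_t, t: torch.zeros_like(x_t)
betas = torch.linspace(1e-4, 2e-2, 1000)
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)
rendered = torch.rand(1, 3, 64, 64, requires_grad=True)    # stands in for a differentiable render
rendered.backward(gradient=sds_grad(rendered.detach(), dummy_denoiser, alphas_cumprod))
print(rendered.grad.shape)                                  # torch.Size([1, 3, 64, 64])
```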

Zero-Shot Open-Vocabulary Tracking with Large Pre-Trained Models

no code implementations • 10 Oct 2023 • Wen-Hsuan Chu, Adam W. Harley, Pavel Tokmakov, Achal Dave, Leonidas Guibas, Katerina Fragkiadaki

This raises the question: can we re-purpose these large-scale pre-trained static-image models for open-vocabulary video tracking?

Object • Object Tracking • +5

Spot and Learn: A Maximum-Entropy Patch Sampler for Few-Shot Image Classification

no code implementations • CVPR 2019 • Wen-Hsuan Chu, Yu-Jhe Li, Jing-Cheng Chang, Yu-Chiang Frank Wang

Few-shot learning (FSL) requires one to learn from object categories with a small amount of training data (as novel classes), while the remaining categories (as base classes) contain a sufficient amount of data for training.

Data Augmentation • Few-Shot Image Classification • +2
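
To make the base/novel split described in the abstract concrete, here is a small, hypothetical Python sketch of how a standard N-way K-shot episode is typically sampled from the novel classes. It is a generic illustration of the FSL setup, not the paper's maximum-entropy patch sampler.

```python
import random
from collections import defaultdict

def sample_episode(novel_classes, n_way=5, k_shot=1, n_query=15):
    """novel_classes: dict mapping class name -> list of example indices."""
    chosen = random.sample(list(novel_classes), n_way)       # pick N novel classes
    support, query = defaultdict(list), defaultdict(list)
    for c in chosen:
        idx = random.sample(novel_classes[c], k_shot + n_query)
        support[c] = idx[:k_shot]                             # the few labeled shots
        query[c] = idx[k_shot:]                               # held-out examples to classify
    return support, query

# Toy usage: a 5-way 1-shot episode over 10 novel classes with 20 examples each.
novel = {f"class_{i}": list(range(i * 20, (i + 1) * 20)) for i in range(10)}
support_set, query_set = sample_episode(novel)
print({c: len(v) for c, v in support_set.items()})            # 1 support example per class
```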
