no code implementations • 24 Apr 2024 • RuiLong Li, Sanja Fidler, Angjoo Kanazawa, Francis Williams
We present NeRF-XL, a principled method for distributing Neural Radiance Fields (NeRFs) across multiple GPUs, thus enabling the training and rendering of NeRFs with an arbitrarily large capacity.
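The key observation enabling this kind of distribution is that front-to-back volume-rendering composition is associative: each worker (e.g. one GPU covering one spatial tile along a ray) can render its own segment to a partial (color, transmittance) pair, and the pairs merge in order. A minimal sketch of that merge rule, not NeRF-XL's actual implementation (the function name and tuple layout here are illustrative assumptions):

```python
def merge_segments(segs):
    """Front-to-back composition of per-segment partial renders.

    Each segment contributes (color, transmittance) for its slice
    of the ray; merging follows the volume-rendering recurrence
        C = C_a + T_a * C_b,   T = T_a * T_b,
    which is associative, so segments can be rendered independently
    (e.g. one per GPU) and combined afterwards.
    """
    color, trans = 0.0, 1.0
    for c, t in segs:
        color += trans * c   # later segments are attenuated by earlier ones
        trans *= t
    return color, trans

# Three ray segments: merging all at once equals merging pairwise.
a, b, c = (0.5, 0.5), (0.25, 0.5), (0.1, 0.9)
full = merge_segments([a, b, c])
paired = merge_segments([merge_segments([a, b]), c])
```

Because the merge is cheap relative to rendering each segment, the per-segment work parallelizes cleanly across devices.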
2 code implementations • ICCV 2023 • Chenfeng Xu, Bichen Wu, Ji Hou, Sam Tsai, RuiLong Li, Jialiang Wang, Wei Zhan, Zijian He, Peter Vajda, Kurt Keutzer, Masayoshi Tomizuka
We present NeRF-Det, a novel method for indoor 3D detection with posed RGB images as input.
no code implementations • ICCV 2023 • RuiLong Li, Hang Gao, Matthew Tancik, Angjoo Kanazawa
Optimizing and rendering Neural Radiance Fields is computationally expensive due to the vast number of samples required by volume rendering.
2 code implementations • 8 Feb 2023 • Matthew Tancik, Ethan Weber, Evonne Ng, RuiLong Li, Brent Yi, Justin Kerr, Terrance Wang, Alexander Kristoffersen, Jake Austin, Kamyar Salahi, Abhik Ahuja, David McAllister, Angjoo Kanazawa
Neural Radiance Fields (NeRF) are a rapidly growing area of research with wide-ranging applications in computer vision, graphics, robotics, and more.
1 code implementation • 24 Oct 2022 • Hang Gao, RuiLong Li, Shubham Tulsiani, Bryan Russell, Angjoo Kanazawa
We study the recent progress on dynamic view synthesis (DVS) from monocular video.
1 code implementation • 10 Oct 2022 • RuiLong Li, Matthew Tancik, Angjoo Kanazawa
We propose NerfAcc, a toolbox for efficient volumetric rendering of radiance fields.
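The core operation such toolboxes accelerate is the standard NeRF volume-rendering quadrature: each ray sample's density is converted to an alpha value, attenuated by the accumulated transmittance, and composited. A minimal NumPy sketch of that quadrature (this is the textbook formula, not NerfAcc's API; function and variable names are illustrative):

```python
import numpy as np

def composite(sigmas, rgbs, deltas):
    """Volume-rendering quadrature along one ray:
        alpha_i  = 1 - exp(-sigma_i * delta_i)
        T_i      = prod_{j < i} (1 - alpha_j)   (transmittance)
        weight_i = T_i * alpha_i
    Returns the composited color and total opacity."""
    alphas = 1.0 - np.exp(-sigmas * deltas)
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = trans * alphas
    color = (weights[:, None] * rgbs).sum(axis=0)
    opacity = weights.sum()
    return color, opacity

# Toy ray with three samples: empty space, then two dense samples.
sigmas = np.array([0.0, 10.0, 10.0])   # densities
deltas = np.array([0.1, 0.1, 0.1])     # segment lengths
rgbs = np.array([[1.0, 0.0, 0.0],
                 [0.0, 1.0, 0.0],
                 [0.0, 0.0, 1.0]])
color, opacity = composite(sigmas, rgbs, deltas)
```

Note that the zero-density first sample contributes nothing to the output, which is why skipping empty space (as efficient samplers do) changes cost but not the rendered color.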
1 code implementation • 17 Jun 2022 • RuiLong Li, Julian Tanke, Minh Vo, Michael Zollhöfer, Jürgen Gall, Angjoo Kanazawa, Christoph Lassner
Since TAVA does not require a body template, it is applicable to humans as well as other creatures such as animals.
5 code implementations • ICCV 2021 • Alex Yu, RuiLong Li, Matthew Tancik, Hao Li, Ren Ng, Angjoo Kanazawa
We introduce a method to render Neural Radiance Fields (NeRFs) in real time using PlenOctrees, an octree-based 3D representation which supports view-dependent effects.
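In such SH-based representations, view-dependent effects come from storing spherical-harmonics coefficients per leaf and evaluating them in the viewing direction at render time. A hedged sketch of a degree-1 SH evaluation, assuming one common real-SH sign convention (the constants are standard; the function name, coefficient layout, and signs are assumptions, not the PlenOctrees API):

```python
import numpy as np

SH_C0 = 0.28209479177387814   # Y_0^0 normalization constant
SH_C1 = 0.4886025119029199    # |Y_1^m| normalization constant

def eval_sh_rgb(coeffs, viewdir):
    """Evaluate degree-1 real spherical harmonics to produce a
    view-dependent RGB color.

    coeffs:  (3, 4) array, one row of 4 SH coefficients per channel
    viewdir: unit-length 3-vector (x, y, z)
    """
    x, y, z = viewdir
    # Basis: constant (DC) term plus three linear terms in the direction.
    basis = np.array([SH_C0, -SH_C1 * y, SH_C1 * z, -SH_C1 * x])
    return coeffs @ basis

# With only the DC coefficient set, color is the same from any direction.
coeffs = np.zeros((3, 4))
coeffs[:, 0] = 1.0 / SH_C0
front = eval_sh_rgb(coeffs, np.array([0.0, 0.0, 1.0]))
side = eval_sh_rgb(coeffs, np.array([1.0, 0.0, 0.0]))
```

Storing coefficients rather than baked colors is what lets the octree reproduce specular-like appearance without re-querying a network per view.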
1 code implementation • ICCV 2021 • RuiLong Li, Shan Yang, David A. Ross, Angjoo Kanazawa
We present AIST++, a new multi-modal dataset of 3D dance motion and music, along with FACT, a Full-Attention Cross-modal Transformer network for generating 3D dance motion conditioned on music.
Ranked #2 on Motion Synthesis on BRACE
1 code implementation • ECCV 2020 • Ruilong Li, Yuliang Xiu, Shunsuke Saito, Zeng Huang, Kyle Olszewski, Hao Li
We present the first approach to volumetric performance capture and novel-view rendering at real-time speed from monocular video, eliminating the need for expensive multi-view systems or cumbersome pre-acquisition of a personalized template model.
1 code implementation • CVPR 2020 • Ruilong Li, Karl Bladin, Yajie Zhao, Chinmay Chinara, Owen Ingraham, Pengda Xiang, Xinglei Ren, Pratusha Prasad, Bipin Kishore, Jun Xing, Hao Li
Based on a combined data set of 4,000 high-resolution facial scans, we introduce a non-linear morphable face model, capable of producing multifarious face geometry of pore-level resolution, coupled with material attributes for use in physically-based rendering.