1 code implementation • 18 Jan 2024 • René Zurbrügg, Yifan Liu, Francis Engelmann, Suryansh Kumar, Marco Hutter, Vaishakh Patil, Fisher Yu
Executing a successful grasp in a cluttered environment requires multiple levels of scene understanding: first, the robot needs to analyze the geometric properties of individual objects to find feasible grasps.
1 code implementation • 11 Dec 2023 • Bin Yang, Patrick Pfreundschuh, Roland Siegwart, Marco Hutter, Peyman Moghadam, Vaishakh Patil
In this paper, we propose TULIP, a new method to reconstruct high-resolution LiDAR point clouds from low-resolution LiDAR input.
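To make the task concrete: LiDAR super-resolution maps a low-resolution range image to a higher-resolution one. The sketch below is a naive bilinear-interpolation baseline, not TULIP's learned method; it only illustrates the input/output relationship of the problem.

```python
import numpy as np

def bilinear_upsample(range_img: np.ndarray, factor: int) -> np.ndarray:
    """Naive baseline: bilinearly upsample a LiDAR range image by `factor`.

    TULIP learns this mapping instead; this function merely shows the
    shape of the task (low-res range image in, high-res range image out).
    """
    h, w = range_img.shape
    new_h, new_w = h * factor, w * factor
    # Sampling coordinates expressed in the low-resolution grid.
    ys = np.linspace(0, h - 1, new_h)
    xs = np.linspace(0, w - 1, new_w)
    y0 = np.floor(ys).astype(int)
    x0 = np.floor(xs).astype(int)
    y1 = np.minimum(y0 + 1, h - 1)
    x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]          # vertical interpolation weights
    wx = (xs - x0)[None, :]          # horizontal interpolation weights
    top = (1 - wx) * range_img[y0][:, x0] + wx * range_img[y0][:, x1]
    bot = (1 - wx) * range_img[y1][:, x0] + wx * range_img[y1][:, x1]
    return (1 - wy) * top + wy * bot

low = np.arange(16, dtype=float).reshape(4, 4)   # toy 4x4 range image
high = bilinear_upsample(low, 4)                 # 16x16 output
```

A learned model replaces the fixed interpolation kernel with one that can hallucinate plausible high-frequency structure, which interpolation cannot.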
no code implementations • 21 Mar 2023 • Kamil Adamczewski, Christos Sakaridis, Vaishakh Patil, Luc van Gool
LiDAR is a vital sensor for estimating the depth of a scene.
1 code implementation • CVPR 2022 • Vaishakh Patil, Christos Sakaridis, Alexander Liniger, Luc van Gool
We focus on the supervised setup, in which ground-truth depth is available only at training time.
Ranked #6 on Depth Estimation on NYU-Depth V2
no code implementations • 8 Jan 2020 • Vaishakh Patil, Wouter Van Gansbeke, Dengxin Dai, Luc van Gool
In particular, we put three different types of depth estimation (supervised depth prediction, self-supervised depth prediction, and self-supervised depth completion) into a common framework.
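The three setups differ mainly in which supervision signal is available. The following is a hypothetical sketch of such a shared interface (the function name and arguments are illustrative, not the paper's actual code): dense ground truth for supervised prediction, a photometric reprojection error for self-supervised prediction, and additionally a sparse LiDAR input for self-supervised completion.

```python
import numpy as np

def depth_loss(pred, gt_depth=None, sparse_lidar=None, photometric_err=None):
    """Illustrative common interface over three depth-estimation setups.

    - supervised prediction:       dense ground-truth depth is available
    - self-supervised prediction:  only a photometric reprojection error
    - self-supervised completion:  sparse LiDAR input + photometric error
    """
    if gt_depth is not None:
        # Supervised: L1 error against dense ground truth.
        return np.abs(pred - gt_depth).mean()
    # Self-supervised: photometric reprojection error drives learning.
    loss = photometric_err.mean()
    if sparse_lidar is not None:
        # Completion: additionally fit the sparse LiDAR returns we do have.
        mask = sparse_lidar > 0
        loss += np.abs(pred[mask] - sparse_lidar[mask]).mean()
    return loss

pred = np.full((2, 2), 1.0)
supervised = depth_loss(pred, gt_depth=np.full((2, 2), 2.0))
sparse = np.zeros((2, 2)); sparse[0, 0] = 1.5
completion = depth_loss(pred, sparse_lidar=sparse,
                        photometric_err=np.full((2, 2), 0.5))
```

Factoring the setups this way is what lets one training framework cover all three regimes by toggling which terms are active.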
no code implementations • 9 Dec 2019 • Qi Dai, Vaishakh Patil, Simon Hecker, Dengxin Dai, Luc van Gool, Konrad Schindler
We present a self-supervised learning framework to estimate the individual object motion and monocular depth from video.
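The standard self-supervised signal in this line of work is photometric consistency: a pixel is back-projected with its estimated depth, moved by the estimated relative pose, and reprojected into the adjacent frame, where its appearance should match. A generic sketch of that reprojection (not the paper's exact model, which additionally assigns a separate motion to each moving object):

```python
import numpy as np

def reproject(u, v, depth, K, R, t):
    """Reproject pixel (u, v) with estimated depth into an adjacent frame
    using the estimated relative pose (R, t). The photometric difference
    between the pixel and its reprojection is the self-supervised loss.
    """
    p = np.array([u, v, 1.0])
    X = depth * np.linalg.inv(K) @ p   # back-project to 3D camera coords
    Xp = R @ X + t                     # transform into the adjacent frame
    uvw = K @ Xp                       # pinhole perspective projection
    return uvw[:2] / uvw[2]

# Toy intrinsics; identity pose maps every pixel to itself.
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
uv = reproject(100.0, 80.0, 5.0, K, np.eye(3), np.zeros(3))
```

Because both depth and motion enter this warp, errors in either break photometric consistency, which is what lets video alone supervise the two jointly.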