no code implementations • 14 Mar 2024 • Tomas Hodan, Martin Sundermeyer, Yann Labbe, Van Nguyen Nguyen, Gu Wang, Eric Brachmann, Bertram Drost, Vincent Lepetit, Carsten Rother, Jiri Matas
In the new tasks, methods were required to learn new objects during a short onboarding stage (max 5 minutes, 1 GPU) from provided 3D object models.
1 code implementation • 21 Nov 2023 • Yongliang Lin, Yongzhi Su, Praveen Nathan, Sandeep Inuganti, Yan Di, Martin Sundermeyer, Fabian Manhardt, Didier Stricker, Jason Rambach, Yu Zhang
In this work, we present a novel dense-correspondence method for 6DoF object pose estimation from a single RGB-D image.
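The snippet above does not show the method's details, but a standard building block for pose estimation from dense 3D-3D correspondences (which an RGB-D pipeline like this can produce) is the least-squares rigid alignment (Kabsch/Umeyama) step. The sketch below is a generic illustration of that step, not the authors' actual method:

```python
import numpy as np

def kabsch(src, dst):
    """Least-squares rigid transform (R, t) mapping src -> dst.

    src, dst: (N, 3) arrays of corresponding 3D points.
    """
    src_c = src.mean(axis=0)
    dst_c = dst.mean(axis=0)
    # Cross-covariance of the centered point sets.
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection (det = -1) in the SVD solution.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t
```

In a full pipeline, such an alignment is typically wrapped in a robust estimator (e.g. RANSAC) to handle outlier correspondences.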
no code implementations • 23 Mar 2023 • Maximilian Ulmer, Maximilian Durner, Martin Sundermeyer, Manuel Stoiber, Rudolph Triebel
We present a novel technique to estimate the 6D pose of objects from single images where the 3D geometry of the object is only given approximately and not as a precise 3D model.
no code implementations • 25 Feb 2023 • Martin Sundermeyer, Tomas Hodan, Yann Labbe, Gu Wang, Eric Brachmann, Bertram Drost, Carsten Rother, Jiri Matas
In 2022, we witnessed another significant improvement in the pose estimation accuracy -- the state of the art, which was 56.9 AR$_C$ in 2019 (Vidal et al.) and 69.8 AR$_C$ in 2020 (CosyPose), moved to new heights of 83.7 AR$_C$ (GDRNPP).
1 code implementation • 2 Aug 2022 • Manuel Stoiber, Martin Sundermeyer, Wout Boerdijk, Rudolph Triebel
Our approach focuses on methods that employ Newton-like optimization techniques, which are widely used in object tracking.
Ranked #1 on 3D Object Tracking on RTB
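The tracker's internals are not reproduced here; as a generic illustration of the Newton-like least-squares updates such methods rely on, here is a minimal Gauss-Newton optimizer (the residual and Jacobian below are a hypothetical curve-fitting example, not the paper's tracking model):

```python
import numpy as np

def gauss_newton(residual_fn, jac_fn, x0, iters=50):
    """Newton-like least-squares update: x <- x - (J^T J)^{-1} J^T r."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        r = residual_fn(x)   # residual vector at current estimate
        J = jac_fn(x)        # Jacobian of the residuals
        x = x - np.linalg.solve(J.T @ J, J.T @ r)
    return x
```

In tracking, the parameters would be a pose increment and the residuals would come from image or depth measurements; the update rule itself is the same.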
1 code implementation • CVPR 2022 • Manuel Stoiber, Martin Sundermeyer, Rudolph Triebel
Tracking objects in 3D space and predicting their 6DoF pose is an essential task in computer vision.
Ranked #2 on 6D Pose Estimation on OPT
1 code implementation • 25 Mar 2021 • Martin Sundermeyer, Arsalan Mousavian, Rudolph Triebel, Dieter Fox
Our novel grasp representation treats 3D points of the recorded point cloud as potential grasp contacts.
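As a schematic of the idea described above — turning a predicted contact point into a full 6DoF grasp — one can build a gripper frame from a contact location plus predicted approach and closing directions. This is a simplified, hypothetical parametrization for illustration, not the paper's exact representation:

```python
import numpy as np

def grasp_pose_from_contact(contact, approach, baseline, width):
    """Build a 6DoF grasp pose (4x4 matrix) from one contact point.

    Schematic: 'approach' becomes the gripper z-axis, 'baseline' the
    closing direction; the gripper center sits half a width along the
    baseline from the contact.
    """
    a = approach / np.linalg.norm(approach)
    b = baseline - (baseline @ a) * a        # orthogonalize against approach
    b = b / np.linalg.norm(b)
    y = np.cross(a, b)                       # complete a right-handed frame
    T = np.eye(4)
    T[:3, :3] = np.stack([b, y, a], axis=1)  # columns: x, y, z axes
    T[:3, 3] = contact + 0.5 * width * b
    return T
```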
2 code implementations • 11 Mar 2021 • Maximilian Durner, Wout Boerdijk, Martin Sundermeyer, Werner Friedl, Zoltan-Csaba Marton, Rudolph Triebel
This has the major advantage that, instead of computing the segmentation on a noisy and potentially incomplete depth map, we use the original image pair to infer both the object instances and a dense depth map.
1 code implementation • 6 Nov 2020 • Wout Boerdijk, Martin Sundermeyer, Maximilian Durner, Rudolph Triebel
Furthermore, while the motion of the manipulator and the object are substantial cues for our algorithm, we present means to robustly deal with distracting objects moving in the background, as well as with completely static scenes.
4 code implementations • 15 Sep 2020 • Tomas Hodan, Martin Sundermeyer, Bertram Drost, Yann Labbe, Eric Brachmann, Frank Michel, Carsten Rother, Jiri Matas
This paper presents the evaluation methodology, datasets, and results of the BOP Challenge 2020, the third in a series of public competitions organized with the goal of capturing the status quo in the field of 6D object pose estimation from an RGB-D image.
no code implementations • 11 Feb 2020 • Wout Boerdijk, Martin Sundermeyer, Maximilian Durner, Rudolph Triebel
Accurate object segmentation is a crucial task in the context of robotic manipulation.
4 code implementations • 25 Oct 2019 • Maximilian Denninger, Martin Sundermeyer, Dominik Winkelbauer, Youssef Zidan, Dmitry Olefir, Mohamad Elbadrawy, Ahsan Lodhi, Harinandan Katam
BlenderProc is a modular procedural pipeline that helps generate realistic-looking images for training convolutional neural networks.
1 code implementation • CVPR 2020 • Martin Sundermeyer, Maximilian Durner, En Yen Puang, Zoltan-Csaba Marton, Narunas Vaskevicius, Kai O. Arras, Rudolph Triebel
We introduce a scalable approach for object pose estimation, trained jointly on simulated RGB views of multiple 3D models.
1 code implementation • ECCV 2018 • Martin Sundermeyer, Zoltan-Csaba Marton, Maximilian Durner, Manuel Brucker, Rudolph Triebel
Our novel 3D orientation estimation is based on a variant of the Denoising Autoencoder that is trained on simulated views of a 3D model using Domain Randomization.
Ranked #1 on 6D Pose Estimation using RGBD on T-LESS
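The core data-generation idea behind this approach — pairing a domain-randomized input view with its clean render as the reconstruction target, so the latent code becomes invariant to nuisance factors — can be sketched as follows. This is a minimal, hypothetical illustration of the pairing, not the paper's training code:

```python
import numpy as np

def augment(render, rng):
    """Domain-randomization-style augmentation (simplified): random
    background, brightness/contrast jitter, and sensor noise."""
    h, w, _ = render.shape
    mask = render.sum(axis=2, keepdims=True) > 0          # object pixels
    background = rng.uniform(0.0, 1.0, size=(h, w, 3))
    img = np.where(mask, render, background)              # paste on background
    img = img * rng.uniform(0.7, 1.3) + rng.uniform(-0.1, 0.1)  # jitter
    img = img + rng.normal(0.0, 0.02, size=img.shape)     # noise
    return np.clip(img, 0.0, 1.0)

def training_pair(render, rng):
    """Denoising-autoencoder target: input is the randomized view,
    the reconstruction target is the clean render."""
    return augment(render, rng), render
```

Training an autoencoder on such pairs forces it to ignore the randomized factors, leaving orientation as the dominant signal in the latent space.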