1 code implementation • 11 Jan 2024 • Linus Franke, Darius Rückert, Laura Fink, Marc Stamminger
In this paper, we present TRIPS (Trilinear Point Splatting), an approach that combines ideas from both Gaussian Splatting and ADOP.
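The "trilinear" in trilinear point splatting refers to writing each projected point into a screen-space image pyramid: bilinear weights in x/y combined with a linear blend between two pyramid levels chosen from the projected point size. The sketch below is a minimal NumPy illustration of that idea under assumed names (`pyramid`, `trilinear_splat`); it is not the paper's actual renderer, which is differentiable and GPU-based.

```python
import numpy as np

def trilinear_splat(pyramid, x, y, size, feature):
    """Splat one point's feature into two adjacent levels of a
    screen-space image pyramid. The fractional pyramid level is derived
    from the projected point size; bilinear weights in x/y plus a linear
    blend across the two levels form the trilinear write.
    All names here are illustrative, not the paper's API."""
    level = np.clip(np.log2(max(size, 1.0)), 0.0, len(pyramid) - 1 - 1e-6)
    l0 = int(level)
    w_coarse = level - l0  # weight of the coarser of the two levels
    for l, wl in ((l0, 1.0 - w_coarse), (l0 + 1, w_coarse)):
        if wl == 0.0:
            continue
        img = pyramid[l]
        s = 2 ** l                      # downsampling factor of this level
        px, py = x / s, y / s           # point position at this level
        ix, iy = int(px), int(py)
        fx, fy = px - ix, py - iy
        for dy, wy in ((0, 1 - fy), (1, fy)):
            for dx, wx in ((0, 1 - fx), (1, fx)):
                xi, yi = ix + dx, iy + dy
                if 0 <= xi < img.shape[1] and 0 <= yi < img.shape[0]:
                    img[yi, xi] += wl * wy * wx * feature
```

Because the eight weights sum to one, a point splatted away from the image border deposits exactly its feature value across the pyramid, which keeps the operation well-behaved under gradient-based optimization.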
1 code implementation • 28 Nov 2023 • Laura Fink, Darius Rückert, Linus Franke, Joachim Keinert, Marc Stamminger
Based on the RGB-D input stream, novel views are rendered by projecting neural features into the target view via a densely fused depth map and aggregating them in image space into a target feature map.
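The projection step described above can be sketched as a forward warp: unproject each source pixel with its depth, transform it into the target frame, reproject, and scatter the feature. The function and parameter names below are hypothetical, and z-buffering plus the learned image-space aggregation are omitted for brevity.

```python
import numpy as np

def warp_features_to_target(src_feat, src_depth, K_src, K_tgt,
                            T_src_to_tgt, tgt_shape):
    """Forward-warp per-pixel features from a source RGB-D view into a
    target view. Hypothetical signature: K_src/K_tgt are 3x3 intrinsics,
    T_src_to_tgt is a 4x4 rigid transform, tgt_shape is (H, W)."""
    H, W, C = src_feat.shape
    ys, xs = np.mgrid[0:H, 0:W]
    pix = np.stack([xs, ys, np.ones_like(xs)], -1).reshape(-1, 3).astype(float)
    depth = src_depth.reshape(-1)
    # Unproject to source camera space, then move into the target frame.
    cam = (np.linalg.inv(K_src) @ pix.T).T * depth[:, None]
    cam_h = np.concatenate([cam, np.ones((cam.shape[0], 1))], axis=1)
    tgt_cam = (T_src_to_tgt @ cam_h.T).T[:, :3]
    # Reproject with the target intrinsics and scatter the features.
    proj = (K_tgt @ tgt_cam.T).T
    z = proj[:, 2]
    valid = z > 1e-6
    u = np.round(proj[valid, 0] / z[valid]).astype(int)
    v = np.round(proj[valid, 1] / z[valid]).astype(int)
    out = np.zeros((*tgt_shape, C))
    inside = (u >= 0) & (u < tgt_shape[1]) & (v >= 0) & (v < tgt_shape[0])
    out[v[inside], u[inside]] = src_feat.reshape(-1, C)[valid][inside]
    return out
```

With an identity pose and matching intrinsics the warp is the identity map, which is a convenient sanity check before plugging in real camera parameters.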
1 code implementation • 8 Nov 2023 • Linus Franke, Darius Rückert, Laura Fink, Matthias Innmann, Marc Stamminger
Our results show that our approach improves the quality of a point cloud obtained from structure from motion, and thus significantly increases novel view synthesis quality.
no code implementations • 28 Feb 2022 • Rui Li, Darius Rückert, Yuanhao Wang, Ramzi Idoughi, Wolfgang Heidrich
Neural rendering with implicit neural networks has recently emerged as an attractive proposition for scene reconstruction, achieving excellent quality albeit at high computational cost.
1 code implementation • 4 Feb 2022 • Darius Rückert, Yuanhao Wang, Rui Li, Ramzi Idoughi, Wolfgang Heidrich
Through a combination of neural features with an adaptive explicit representation, we achieve reconstruction times far superior to existing neural inverse rendering methods.
Ranked #4 on Low-Dose X-Ray CT Reconstruction on X3D
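The core of an explicit representation carrying neural features is a spatial structure whose cells store learnable feature vectors that are interpolated at query points. As a hedged, simplified stand-in (the paper's structure is adaptive, while this sketch uses a dense grid, and all names are illustrative), trilinear lookup can be written as:

```python
import numpy as np

def sample_feature_grid(grid, pts):
    """Trilinearly sample per-voxel feature vectors at continuous points
    in [0, 1]^3. grid has shape (D, H, W, C); pts has shape (N, 3).
    A dense grid stands in for an adaptive explicit structure."""
    D, H, W, C = grid.shape
    p = np.clip(pts, 0.0, 1.0) * (np.array([D, H, W]) - 1)
    i0 = np.floor(p).astype(int)
    i1 = np.minimum(i0 + 1, [D - 1, H - 1, W - 1])
    f = p - i0                          # fractional offsets per axis
    out = np.zeros((pts.shape[0], C))
    # Accumulate the 8 corner contributions with trilinear weights.
    for dz in (0, 1):
        for dy in (0, 1):
            for dx in (0, 1):
                idx = np.where([dz, dy, dx], i1, i0)
                w = np.prod(np.where([dz, dy, dx], f, 1 - f), axis=1)
                out += w[:, None] * grid[idx[:, 0], idx[:, 1], idx[:, 2]]
    return out
```

Because the stored features are explicit tensors rather than weights of a deep MLP, a gradient step updates only the cells near each sample, which is one reason explicit representations optimize much faster than purely implicit ones.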
2 code implementations • 13 Oct 2021 • Darius Rückert, Linus Franke, Marc Stamminger
Like other neural renderers, our system takes as input calibrated camera images and a proxy geometry of the scene, in our case a point cloud.