no code implementations • 16 Apr 2024 • Florian Barthel, Arian Beckmann, Wieland Morgenstern, Anna Hilsmann, Peter Eisert
By training a decoder that maps implicit NeRF representations to explicit 3D Gaussian Splatting attributes, we can integrate the representational diversity and quality of 3D GANs into the ecosystem of 3D Gaussian Splatting for the first time.
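As a rough illustration of this idea, here is a minimal, hypothetical sketch (not the authors' actual architecture) of an MLP decoder that maps a per-point implicit NeRF feature vector to explicit 3D Gaussian Splatting attributes. The feature dimension, layer sizes, and attribute heads are all assumptions.

```python
import torch
import torch.nn as nn

class NeRFToGaussianDecoder(nn.Module):
    """Hypothetical MLP decoder: implicit NeRF features -> explicit 3DGS attributes."""

    def __init__(self, feature_dim: int = 96, hidden_dim: int = 128):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(feature_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
        )
        # Separate heads for each Gaussian attribute (assumed attribute set).
        self.position_offset = nn.Linear(hidden_dim, 3)  # xyz offset from the sample point
        self.scale = nn.Linear(hidden_dim, 3)            # per-axis scale, predicted in log-space
        self.rotation = nn.Linear(hidden_dim, 4)         # quaternion
        self.opacity = nn.Linear(hidden_dim, 1)
        self.color = nn.Linear(hidden_dim, 3)            # RGB (or the SH DC term)

    def forward(self, features: torch.Tensor) -> dict:
        h = self.backbone(features)
        return {
            "xyz_offset": self.position_offset(h),
            "scale": torch.exp(self.scale(h)),                               # keep scales positive
            "rotation": nn.functional.normalize(self.rotation(h), dim=-1),   # unit quaternion
            "opacity": torch.sigmoid(self.opacity(h)),
            "color": torch.sigmoid(self.color(h)),
        }

# Example: decode Gaussian attributes for 4096 sampled points with 96-dim features.
decoder = NeRFToGaussianDecoder()
attrs = decoder(torch.randn(4096, 96))
print({k: v.shape for k, v in attrs.items()})
```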
1 code implementation • CVPR Workshop on Event-based Vision 2023 • Wieland Morgenstern, Niklas Gard, Simon Baumann, Anna Hilsmann, Peter Eisert
We present a new approach to direct depth estimation for Spatial Augmented Reality (SAR) applications using event cameras.
1 code implementation • 19 Dec 2023 • Wieland Morgenstern, Florian Barthel, Anna Hilsmann, Peter Eisert
In this paper, we introduce a compact scene representation that organizes the parameters of 3D Gaussian Splatting (3DGS) into a 2D grid with local homogeneity, drastically reducing storage requirements without compromising visual quality during rendering.
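The following toy sketch (not the paper's algorithm) illustrates the underlying intuition: per-Gaussian attribute vectors are reshaped into a square 2D grid, and a simple roughness metric quantifies how locally homogeneous the layout is, which is what makes such a grid compress well with standard image codecs. The grid size, attribute dimensionality, and sort key are all assumptions.

```python
import numpy as np

def to_grid(attributes: np.ndarray) -> np.ndarray:
    """Reshape (N, C) per-Gaussian attribute vectors into an (S, S, C) grid, S = ceil(sqrt(N))."""
    n, c = attributes.shape
    side = int(np.ceil(np.sqrt(n)))
    grid = np.zeros((side * side, c), dtype=attributes.dtype)
    grid[:n] = attributes
    return grid.reshape(side, side, c)

def local_roughness(grid: np.ndarray) -> float:
    """Mean absolute difference between horizontally and vertically adjacent grid cells."""
    dx = np.abs(np.diff(grid, axis=0)).mean()
    dy = np.abs(np.diff(grid, axis=1)).mean()
    return float(dx + dy)

rng = np.random.default_rng(0)
attrs = rng.normal(size=(10_000, 8)).astype(np.float32)  # e.g. position, scale, opacity, colour

random_grid = to_grid(attrs)
# Crude stand-in for a smoothness-promoting sort: order Gaussians by a single attribute.
# This only smooths that one channel; the actual method arranges all attributes jointly.
sorted_grid = to_grid(attrs[np.argsort(attrs[:, 0])])

print("roughness, random layout:", local_roughness(random_grid))
print("roughness, sorted layout:", local_roughness(sorted_grid))
```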
no code implementations • 6 Nov 2023 • Paul Knoll, Wieland Morgenstern, Anna Hilsmann, Peter Eisert
Extending this to the controllable synthesis of dynamic human performances poses an exciting research question.
no code implementations • 5 Oct 2023 • Wieland Morgenstern, Milena T. Bagdasarian, Anna Hilsmann, Peter Eisert
We propose a novel representation of virtual humans for highly realistic real-time animation and rendering in 3D applications.
no code implementations • 7 Feb 2022 • Alexandra Zimmer, Anna Hilsmann, Wieland Morgenstern, Peter Eisert
Specifically, we derive the parameters of a sequence of body models representing the shape and motion of a person, including jaw poses, facial expressions, and finger poses.
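As a hypothetical illustration of what such a parameter sequence might contain (assuming an SMPL-X-style parameterization, which the abstract does not confirm), here is a minimal per-frame container:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class BodyFrameParams:
    """Hypothetical per-frame parameters of an SMPL-X-style body model (assumed layout)."""
    betas: np.ndarray            # (10,)    shape coefficients, shared identity across frames
    global_orient: np.ndarray    # (3,)     root rotation, axis-angle
    body_pose: np.ndarray        # (21, 3)  body joint rotations, axis-angle
    jaw_pose: np.ndarray         # (3,)
    expression: np.ndarray       # (10,)    facial expression coefficients
    left_hand_pose: np.ndarray   # (15, 3)  finger joint rotations
    right_hand_pose: np.ndarray  # (15, 3)

def empty_sequence(num_frames: int) -> list[BodyFrameParams]:
    """Allocate a neutral parameter sequence to be filled in by a fitting stage."""
    return [
        BodyFrameParams(
            betas=np.zeros(10),
            global_orient=np.zeros(3),
            body_pose=np.zeros((21, 3)),
            jaw_pose=np.zeros(3),
            expression=np.zeros(10),
            left_hand_pose=np.zeros((15, 3)),
            right_hand_pose=np.zeros((15, 3)),
        )
        for _ in range(num_frames)
    ]

sequence = empty_sequence(120)  # e.g. a 4-second capture at 30 fps
print(len(sequence), sequence[0].body_pose.shape)
```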
no code implementations • 2 Sep 2020 • Anna Hilsmann, Philipp Fechteler, Wieland Morgenstern, Wolfgang Paier, Ingo Feldmann, Oliver Schreer, Peter Eisert
Going beyond free-viewpoint volumetric video, we enable re-animation and alteration of an actor's performance by (i) enriching the captured data with semantics and animation properties and (ii) applying hybrid geometry- and video-based animation methods that animate the high-quality captured data directly, rather than creating an animatable model that merely resembles it.