1 code implementation • 1 Apr 2022 • Steve Dias Da Cruz, Bertram Taetz, Thomas Stifter, Didier Stricker
Learning on synthetic data and transferring the learned properties to real data is an important challenge for reducing costs and increasing safety in machine learning.
1 code implementation • 1 Apr 2022 • Steve Dias Da Cruz, Bertram Taetz, Thomas Stifter, Didier Stricker
While input images close to known samples converge to the same or a similar attractor, input samples containing unknown features are unstable and converge to different training samples, potentially removing or changing characteristic features in the process.
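The attractor behavior described above can be sketched as repeated application of a trained autoencoder until a fixed point is reached. The snippet below is a hedged illustration, not the authors' code: the "autoencoder" is a toy contraction toward one memorized training sample `t` (a hypothetical stand-in), showing how iteration pulls an input inside the basin of attraction onto a training-like sample.

```python
import numpy as np

def iterate_to_attractor(f, x, steps=100, tol=1e-6):
    """Repeatedly apply f until the output stops changing (a fixed point)."""
    for _ in range(steps):
        y = f(x)
        if np.linalg.norm(y - x) < tol:
            break
        x = y
    return x

# Toy stand-in for a trained autoencoder: a contraction toward one
# "memorized" training sample t (hypothetical, for illustration only).
t = np.array([1.0, -2.0, 0.5])
f = lambda x: t + 0.5 * (x - t)

x0 = np.array([5.0, 5.0, 5.0])            # input far from t, inside the basin
x_star = iterate_to_attractor(f, x0)
print(np.allclose(x_star, t, atol=1e-4))  # True: iteration recovers t
```

In the papers' setting, `f` would be a trained network and the fixed points are (approximately) training samples; inputs with unknown features may instead converge to a different attractor, which is what makes the iteration usable as an uncertainty signal.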
no code implementations • 7 May 2021 • Steve Dias Da Cruz, Bertram Taetz, Oliver Wasenmüller, Thomas Stifter, Didier Stricker
Common formulations of the domain shift problem consider integrating multiple source domains, or the target domain itself, during training.
no code implementations • 10 Nov 2020 • Hans-Peter Beise, Steve Dias Da Cruz
In Radhakrishnan et al. [2020], the authors empirically show that autoencoders trained with standard SGD methods form basins of attraction around their training data.
no code implementations • 6 Nov 2020 • Steve Dias Da Cruz, Bertram Taetz, Thomas Stifter, Didier Stricker
Our method exploits the availability of identical sceneries under different illumination and environmental conditions, for which we formulate a partially impossible reconstruction target: the input image does not convey enough information to reconstruct the target in its entirety.
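The "partially impossible" target construction can be sketched as a sampling scheme over scene variants. This is a hedged sketch with an assumed data layout (a list of variants per scene), not the authors' implementation: the input is one variant of a scene and the reconstruction target is a *different* variant, so the input alone cannot fully determine the target.

```python
import random

def sample_training_pair(scene_variants, rng=random):
    """Pick an (input, target) pair of two distinct variants of one scene."""
    i, j = rng.sample(range(len(scene_variants)), 2)  # two distinct indices
    return scene_variants[i], scene_variants[j]

# Placeholder variant identifiers for a single scene (hypothetical names).
scene = ["img_day", "img_dusk", "img_night", "img_rain"]
x, y = sample_training_pair(scene)
print(x != y)  # True: input and target are different variants of one scene
```

Because illumination-specific details of the target are unpredictable from the input, the model is pushed to encode only the shared scene content, which is the stated goal of the method.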
1 code implementation • 10 Jan 2020 • Steve Dias Da Cruz, Oliver Wasenmüller, Hans-Peter Beise, Thomas Stifter, Didier Stricker
We release SVIRO, a synthetic dataset for sceneries in the passenger compartment of ten different vehicles, in order to analyze machine learning-based approaches for their generalization capacities and reliability when trained on a limited number of variations (e.g. identical backgrounds and textures, few instances per class).
no code implementations • 3 Jul 2018 • Hans-Peter Beise, Steve Dias Da Cruz, Udo Schröder
We show that for neural network functions whose width is less than or equal to the input dimension, all connected components of the decision regions are unbounded.