2 code implementations • 27 Jun 2022 • Jiahao Lu, Chong Yin, Oswin Krause, Kenny Erleben, Michael Bachmann Nielsen, Sune Darkner
Visualisation of the learned space further indicates that the clustering of malignancy correlates with nodule attributes in a way that coincides with clinical knowledge.
1 code implementation • 3 May 2022 • Oswin Krause, Anasua Chatterjee, Ferdinand Kuemmeth, Evert van Nieuwenburg
We introduce an algorithm that is able to find the facets of Coulomb diamonds in quantum dot arrays.
no code implementations • 20 Aug 2021 • Oswin Krause, Torbjørn Rasmussen, Bertram Brovang, Anasua Chatterjee, Ferdinand Kuemmeth
In spin-based quantum dot arrays, material and fabrication imperfections affect the behaviour of the device, which must be taken into account when controlling it.
1 code implementation • NeurIPS 2021 • Steffen Czolbe, Aasa Feragen, Oswin Krause
As a first step towards solving such alignment problems, we propose an unsupervised algorithm for the detection of changes in image topology.
no code implementations • 20 May 2021 • Kasra Arnavaz, Oswin Krause, Kilian Zepf, Jelena M. Krivokapic, Silja Heilmann, Jakob Andreas Bærentzen, Pia Nyeng, Aasa Feragen
We provide a full deep-learning methodology for this difficult, noisy task on time-series image data.
1 code implementation • 20 Apr 2021 • Steffen Czolbe, Oswin Krause, Aasa Feragen
We propose a semantic similarity metric for image registration.
1 code implementation • 30 Mar 2021 • Steffen Czolbe, Kasra Arnavaz, Oswin Krause, Aasa Feragen
Probabilistic image segmentation encodes varying prediction confidence and inherent ambiguity in the segmentation problem.
1 code implementation • 18 Jan 2021 • Svetlana Kutuzova, Oswin Krause, Douglas McCloskey, Mads Nielsen, Christian Igel
Multimodal generative models should be able to learn a meaningful latent representation that enables a coherent joint generation of all modalities (e.g., images and text).
1 code implementation • NeurIPS 2020 • Steffen Czolbe, Oswin Krause, Ingemar Cox, Christian Igel
Training Variational Autoencoders (VAEs) to generate realistic imagery requires a loss function that reflects human perception of image similarity.
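The entry above concerns loss functions that reflect perceptual rather than pixel-wise similarity. A minimal sketch of that general idea, not the paper's actual loss: compare images in a feature space instead of pixel space, here using simple image gradients as a hypothetical stand-in for a learned or perceptually motivated feature extractor.

```python
import numpy as np

def features(img):
    # Hypothetical stand-in for a perceptual feature extractor:
    # local gradients as a crude proxy for structural content.
    gx = np.diff(img, axis=0)[:, :-1]
    gy = np.diff(img, axis=1)[:-1, :]
    return np.stack([gx, gy])

def feature_space_loss(x, x_hat):
    # Mean squared error between feature maps rather than raw
    # pixels, so the loss emphasises structure over exact values.
    return float(np.mean((features(x) - features(x_hat)) ** 2))

rng = np.random.default_rng(0)
img = rng.random((8, 8))
print(feature_space_loss(img, img))  # 0.0 for identical images
```

The feature extractor here is purely illustrative; the actual paper derives its loss from a model of human perception.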
1 code implementation • 11 Nov 2020 • Steffen Czolbe, Oswin Krause, Aasa Feragen
We propose a semantic similarity metric for image registration.
no code implementations • 6 Sep 2020 • Tobias Glasmachers, Oswin Krause
The class of algorithms called Hessian Estimation Evolution Strategies (HE-ESs) update the covariance matrix of their sampling distribution by directly estimating the curvature of the objective function.
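The entry above describes strategies that directly estimate the curvature of the objective. A minimal sketch of the underlying primitive only, not the HE-ES update itself: estimating the curvature of f along a direction d (the quantity d^T H d) with a central finite difference.

```python
import numpy as np

def directional_curvature(f, x, d, h=1e-3):
    # Central finite-difference estimate of the curvature of f
    # along direction d at point x, i.e. d^T H(x) d for the
    # (normalised) direction d.
    d = d / np.linalg.norm(d)
    return (f(x + h * d) - 2.0 * f(x) + f(x - h * d)) / h**2

# Quadratic test function with Hessian diag(2, 8).
f = lambda x: x[0]**2 + 4.0 * x[1]**2
x = np.array([1.0, 1.0])
print(directional_curvature(f, x, np.array([1.0, 0.0])))  # ~2
print(directional_curvature(f, x, np.array([0.0, 1.0])))  # ~8
```

An HE-ES-style method would aggregate such curvature estimates over sampled directions to update the covariance of its sampling distribution; the details of that update are in the paper.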
1 code implementation • 26 Jun 2020 • Steffen Czolbe, Oswin Krause, Ingemar Cox, Christian Igel
Training Variational Autoencoders (VAEs) to generate realistic imagery requires a loss function that reflects human perception of image similarity.
no code implementations • 30 Mar 2020 • Tobias Glasmachers, Oswin Krause
We demonstrate that our approach to covariance matrix adaptation is efficient by evaluating it on the BBOB/COCO testbed.
no code implementations • 25 Sep 2019 • Henrik Høeg, Matthias Brix, Oswin Krause
We present an architecture based on the conditional Variational Autoencoder to learn a representation of transformations in time-sequence data.
no code implementations • 3 Apr 2017 • Malte Stær Nissen, Oswin Krause, Kristian Almstrup, Søren Kjærulff, Torben Trindkær Nielsen, Mads Nielsen
We compare a set of convolutional neural network (CNN) architectures for the task of segmenting and detecting human sperm cells in an image taken from a semen sample.
no code implementations • NeurIPS 2016 • Oswin Krause, Dídac Rodríguez Arbonès, Christian Igel
The covariance matrix adaptation evolution strategy (CMA-ES) is arguably one of the most powerful real-valued derivative-free optimization algorithms, finding many applications in machine learning.
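To give a flavour of the algorithm family named above, here is a heavily stripped-down evolution strategy, not CMA-ES itself: offspring are sampled from an isotropic Gaussian and the mean moves towards the best samples. What distinguishes full CMA-ES is that it additionally adapts the covariance matrix and step size of this sampling distribution online.

```python
import numpy as np

def simple_es(f, x0, sigma=0.5, pop=16, iters=60, seed=0):
    # Sample a population around the current mean, keep the
    # best quarter, and recombine them into the new mean.
    rng = np.random.default_rng(seed)
    mean = np.asarray(x0, dtype=float)
    mu = pop // 4
    for _ in range(iters):
        samples = mean + sigma * rng.standard_normal((pop, mean.size))
        best = samples[np.argsort([f(s) for s in samples])[:mu]]
        mean = best.mean(axis=0)
        sigma *= 0.95  # crude fixed decay; CMA-ES adapts this too
    return mean

sphere = lambda x: float(np.sum(x**2))
x_opt = simple_es(sphere, [3.0, -2.0])
print(sphere(x_opt))  # small value near the optimum at 0
```

On ill-conditioned or correlated objectives this isotropic sketch degrades badly, which is precisely the gap that covariance matrix adaptation closes.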
no code implementations • 6 Oct 2015 • Oswin Krause, Asja Fischer, Christian Igel
Compared to CD, it leads to a consistent estimate and may have a significantly lower bias.