1 code implementation • 5 Jul 2023 • Lukas Muttenthaler, Robert A. Vandermeulen, Qiuyi Zhang, Thomas Unterthiner, Klaus-Robert Müller
Model overconfidence and poor calibration are common in machine learning and difficult to account for when applying standard empirical risk minimization.
no code implementations • 8 Feb 2023 • Robert A. Vandermeulen
Recent works have demonstrated that the convergence rate of a nonparametric density estimator can be greatly improved by using a low-rank estimator when the target density is a convex combination of separable probability densities with Lipschitz continuous marginals, i.e., a multiview model.
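To make the multiview structure concrete, here is a minimal sketch (not the paper's estimator) of evaluating a density of this form: a convex combination of separable densities, each a product of 1-D marginals. The Gaussian marginals and the specific weights are illustrative assumptions.

```python
import math

def gaussian_pdf(x, mu, sigma):
    """1-D Gaussian density, standing in for a Lipschitz continuous marginal."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def multiview_density(x, weights, marginals):
    """Evaluate f(x) = sum_k w_k * prod_d p_{k,d}(x_d): a convex combination
    of separable (product) densities, i.e., a multiview model."""
    total = 0.0
    for w, comps in zip(weights, marginals):
        prod = 1.0
        for xd, (mu, sigma) in zip(x, comps):
            prod *= gaussian_pdf(xd, mu, sigma)
        total += w * prod
    return total

# Two components in 2-D; mixing weights sum to one.
weights = [0.4, 0.6]
marginals = [
    [(0.0, 1.0), (0.0, 1.0)],   # component 1: one (mu, sigma) marginal per dimension
    [(3.0, 0.5), (-2.0, 1.5)],  # component 2
]
density = multiview_density([0.0, 0.0], weights, marginals)
```

The low-rank aspect is visible in the parameter count: a rank-$m$ model in $d$ dimensions needs only $m \cdot d$ marginals rather than a fully general $d$-dimensional density.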
1 code implementation • 2 Nov 2022 • Lukas Muttenthaler, Jonas Dippel, Lorenz Linhardt, Robert A. Vandermeulen, Simon Kornblith
Linear transformations of neural network representations learned from behavioral responses from one dataset substantially improve alignment with human similarity judgments on the other two datasets.
no code implementations • 22 Jul 2022 • Robert A. Vandermeulen, René Saitenmacher
Recent work has shown that finite mixture models with $m$ components are identifiable, while making no assumptions on the mixture components, so long as one has access to groups of samples of size $2m-1$ which are known to come from the same mixture component.
1 code implementation • 23 May 2022 • Philipp Liznerski, Lukas Ruff, Robert A. Vandermeulen, Billy Joe Franks, Klaus-Robert Müller, Marius Kloft
We find that standard classifiers and semi-supervised one-class methods trained to discern between normal samples and relatively few random natural images are able to outperform the current state of the art on an established AD benchmark with ImageNet.
Ranked #1 on Anomaly Detection on One-class CIFAR-10 (using extra training data)
1 code implementation • 2 May 2022 • Lukas Muttenthaler, Charles Y. Zheng, Patrick McClure, Robert A. Vandermeulen, Martin N. Hebart, Francisco Pereira
This paper introduces Variational Interpretable Concept Embeddings (VICE), an approximate Bayesian method for embedding object concepts in a vector space using data collected from humans in a triplet odd-one-out task.
no code implementations • NeurIPS 2021 • Robert A. Vandermeulen, Antoine Ledent
In this paper we investigate the theoretical implications of incorporating a multi-view latent variable model, a type of low-rank model, into nonparametric density estimation.
1 code implementation • 21 Sep 2021 • Saurabh Varshneya, Antoine Ledent, Robert A. Vandermeulen, Yunwen Lei, Matthias Enders, Damian Borth, Marius Kloft
We propose a novel training methodology -- Concept Group Learning (CGL) -- that encourages training of interpretable CNN filters by partitioning filters in each layer into concept groups, each of which is trained to learn a single visual concept.
no code implementations • 6 Oct 2020 • Robert A. Vandermeulen
One technique for avoiding this is to assume no dependence between features and that the data are sampled from a separable density.
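A minimal sketch of this separability assumption, using a product of 1-D kernel density estimates in place of a full multivariate KDE (the data and bandwidth below are illustrative assumptions, not from the paper):

```python
import math

def kde_1d(points, x, h):
    """Standard 1-D Gaussian kernel density estimate with bandwidth h."""
    n = len(points)
    return sum(math.exp(-0.5 * ((x - p) / h) ** 2)
               for p in points) / (n * h * math.sqrt(2 * math.pi))

def separable_kde(data, x, h):
    """Assume no dependence between features: estimate each marginal with a
    1-D KDE and take the product. This avoids the curse of dimensionality
    at the cost of bias when features are in fact dependent."""
    est = 1.0
    for j in range(len(x)):
        est *= kde_1d([row[j] for row in data], x[j], h)
    return est

data = [[0.1, 1.0], [-0.2, 0.8], [0.05, 1.2], [0.3, 0.9]]
val = separable_kde(data, [0.0, 1.0], h=0.5)
```

Each 1-D marginal estimate converges at the one-dimensional nonparametric rate, which is what makes the separable model attractive in high dimensions.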
no code implementations • 5 Oct 2020 • Lucas Deecke, Lukas Ruff, Robert A. Vandermeulen, Hakan Bilen
Deep anomaly detection is a difficult task since, in high dimensions, it is hard to completely characterize a notion of "differentness" when given only examples of normality.
no code implementations • 24 Sep 2020 • Lukas Ruff, Jacob R. Kauffmann, Robert A. Vandermeulen, Grégoire Montavon, Wojciech Samek, Marius Kloft, Thomas G. Dietterich, Klaus-Robert Müller
Deep learning approaches to anomaly detection have recently improved the state of the art in detection performance on complex datasets such as large collections of images or text.
no code implementations • 14 Sep 2020 • Waleed Mustafa, Robert A. Vandermeulen, Marius Kloft
Regularizing the input gradient has been shown to be effective in promoting the robustness of neural networks.
2 code implementations • ICLR 2021 • Philipp Liznerski, Lukas Ruff, Robert A. Vandermeulen, Billy Joe Franks, Marius Kloft, Klaus-Robert Müller
Deep one-class classification variants for anomaly detection learn a mapping that concentrates nominal samples in feature space, causing anomalies to be mapped away.
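The scoring side of this idea can be sketched in a few lines: anomaly score as squared distance to a fixed center in feature space. The learned mapping is replaced by the identity here purely for illustration; the toy points and center are assumptions.

```python
def one_class_scores(features, center):
    """Anomaly score = squared distance to a fixed center c in feature space.
    Deep one-class variants train a network phi so nominal samples map near c;
    phi is taken as the identity here for illustration."""
    return [sum((f - c) ** 2 for f, c in zip(x, center)) for x in features]

nominal = [[0.1, 0.0], [-0.1, 0.1]]   # samples concentrated near the center
anomaly = [[2.0, -1.5]]               # a sample mapped far from the center
center = [0.0, 0.0]
scores = one_class_scores(nominal + anomaly, center)
```

Training would minimize these distances over nominal data only, so at test time anomalies, which the mapping was never encouraged to concentrate, receive large scores.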
Ranked #5 on Anomaly Detection on One-class ImageNet-30 (using extra training data)
1 code implementation • NeurIPS 2020 • Alexander Ritchie, Robert A. Vandermeulen, Clayton Scott
Recent research has established sufficient conditions for finite mixture models to be identifiable from grouped observations.
1 code implementation • 30 May 2020 • Lukas Ruff, Robert A. Vandermeulen, Billy Joe Franks, Klaus-Robert Müller, Marius Kloft
Though anomaly detection (AD) can be viewed as a classification problem (nominal vs. anomalous), it is usually treated in an unsupervised manner, since one typically does not have access to, or cannot feasibly utilize, a dataset that sufficiently characterizes what it means to be "anomalous."
no code implementations • 29 Jan 2020 • Fabian Jirasek, Rodrigo A. S. Alves, Julie Damay, Robert A. Vandermeulen, Robert Bamler, Michael Bortz, Stephan Mandt, Marius Kloft, Hans Hasse
Activity coefficients, which are a measure of the non-ideality of liquid mixtures, are a key property in chemical engineering with relevance to modeling chemical and phase equilibria as well as transport processes.
7 code implementations • ICLR 2020 • Lukas Ruff, Robert A. Vandermeulen, Nico Görnitz, Alexander Binder, Emmanuel Müller, Klaus-Robert Müller, Marius Kloft
Deep approaches to anomaly detection have recently shown promising results over shallow methods on large and complex datasets.
no code implementations • 30 Jun 2016 • Robert A. Vandermeulen, Clayton D. Scott
In this work, we make no distributional assumptions on the mixture components and instead assume that observations from the mixture model are grouped, such that observations in the same group are known to be drawn from the same mixture component.
no code implementations • 23 Feb 2015 • Robert A. Vandermeulen, Clayton D. Scott
In such models it is assumed that data are drawn from random probability measures, called mixture components, which are themselves drawn from a probability measure P over probability measures.
no code implementations • NeurIPS 2014 • Robert A. Vandermeulen, Clayton D. Scott
As with other estimators, a robust version of the KDE is useful, since sample contamination is a common issue in real-world datasets.
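A minimal sketch of the robustness idea, assuming a one-step reweighting scheme: samples that receive low density under an initial KDE are downweighted before re-estimating. This mimics the M-estimation intuition behind robust KDEs; the actual estimator in the paper operates differently (iteratively, in the RKHS), and the data below are illustrative.

```python
import math

def kde(points, x, h):
    """Standard 1-D Gaussian KDE with bandwidth h."""
    n = len(points)
    return sum(math.exp(-0.5 * ((x - p) / h) ** 2)
               for p in points) / (n * h * math.sqrt(2 * math.pi))

def robust_kde(points, x, h):
    """One reweighting step: weight each sample by its density under an
    initial KDE, so isolated (likely contaminated) samples get small weight,
    then re-estimate with the normalized weights."""
    weights = [kde(points, p, h) for p in points]
    z = sum(weights)
    return sum(w * math.exp(-0.5 * ((x - p) / h) ** 2)
               for w, p in zip(weights, points)) / (z * h * math.sqrt(2 * math.pi))

pts = [0.0, 0.2, -0.1, 0.1, 10.0]   # nominal cluster near 0 plus one outlier
plain = kde(pts, 10.0, 1.0)
robust = robust_kde(pts, 10.0, 1.0)
```

Because the outlier at 10.0 sits in a low-density region of the initial estimate, its weight shrinks and the robust estimate places less mass there than the standard KDE does.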