1 code implementation • 16 Apr 2024 • Khaled Kahouli, Stefaan Simon Pierre Hessmann, Klaus-Robert Müller, Shinichi Nakajima, Stefan Gugler, Niklas Wolf Andreas Gebauer
As a remedy, we propose MoreRed, molecular relaxation by reverse diffusion, a conceptually novel and purely statistical approach where non-equilibrium structures are treated as noisy instances of their corresponding equilibrium states.
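The denoising view lends itself to a compact sketch. Below is a minimal, hypothetical illustration of relaxation as reverse diffusion, assuming a trained diffusion model is available behind the placeholder `noise_model`; it is not the authors' implementation.

```python
import numpy as np

def relax_by_reverse_diffusion(positions, noise_model, n_steps=100, step_size=0.01):
    """Treat a non-equilibrium geometry as a noisy equilibrium state and denoise it.

    positions:   (n_atoms, 3) array of Cartesian coordinates
    noise_model: hypothetical callable (positions, t) -> predicted noise,
                 standing in for a trained diffusion model
    """
    x = positions.astype(float)
    for step in range(n_steps):
        t = 1.0 - step / n_steps        # walk the diffusion time back towards 0
        eps_hat = noise_model(x, t)     # predicted noise on the coordinates
        x = x - step_size * eps_hat     # one reverse (denoising) update
    return x
```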
no code implementations • 5 Mar 2024 • Dennis Grinwald, Philipp Wiesner, Shinichi Nakajima
We tackle a major challenge in federated learning (FL) -- achieving good performance under highly heterogeneous client distributions.
1 code implementation • NeurIPS 2023 • Kirill Bykov, Laura Kopf, Shinichi Nakajima, Marius Kloft, Marina M.-C. Höhne
Deep Neural Networks (DNNs) demonstrate remarkable capabilities in learning complex hierarchical data representations, but the nature of these representations remains largely unknown.
no code implementations • 26 Oct 2023 • Gabriel Nobis, Marco Aversa, Maximilian Springenberg, Michael Detzel, Stefano Ermon, Shinichi Nakajima, Roderick Murray-Smith, Sebastian Lapuschkin, Christoph Knochenhauer, Luis Oala, Wojciech Samek
We generalize the continuous time framework for score-based generative models from an underlying Brownian motion (BM) to an approximation of fractional Brownian motion (FBM).
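For orientation, the Brownian baseline being generalized is the usual score-based SDE pair (a sketch of the standard setup, not the paper's FBM construction):

$$\mathrm{d}x_t = f(x_t, t)\,\mathrm{d}t + g(t)\,\mathrm{d}B_t, \qquad \mathrm{d}x_t = \left[ f(x_t, t) - g(t)^2\, \nabla_x \log p_t(x_t) \right]\mathrm{d}t + g(t)\,\mathrm{d}\bar{B}_t,$$

where the first equation is the forward noising process and the second its reverse; the paper replaces the driving Brownian increments $\mathrm{d}B_t$ by an approximation of FBM, whose Hurst parameter controls the correlation of increments.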
no code implementations • 27 Feb 2023 • Kim A. Nicoli, Christopher J. Anders, Tobias Hartung, Karl Jansen, Pan Kessel, Shinichi Nakajima
In this work, we first point out that the tunneling problem is also present for normalizing flows but is shifted from the sampling to the training phase of the algorithm.
1 code implementation • 6 Oct 2022 • Stephanie Brandl, David Lassner, Anne Baillot, Shinichi Nakajima
Complementary to finding good general word embeddings, an important question for representation learning is to find dynamic word embeddings, e.g., across time or domain.
no code implementations • 17 Jul 2022 • Lorenz Vaitl, Kim A. Nicoli, Shinichi Nakajima, Pan Kessel
We propose an algorithm to estimate the path-gradient of both the reverse and forward Kullback-Leibler divergence for an arbitrary manifestly invertible normalizing flow.
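Concretely, for a flow $x = T_\theta(z)$ with base density $p_0$, the path gradient of, e.g., the reverse KL keeps only the term that flows through the sample (a sketch of the general form of such estimators, in assumed notation):

$$\nabla_\theta^{\mathrm{path}} \, \mathrm{KL}(q_\theta \,\|\, p) = \mathbb{E}_{z \sim p_0}\!\left[ \left( \frac{\partial T_\theta(z)}{\partial \theta} \right)^{\!\top} \nabla_x \big( \log q_\theta(x) - \log p(x) \big) \Big|_{x = T_\theta(z)} \right],$$

where $\nabla_x \log q_\theta(x)$ is evaluated with the parameters of the density held fixed, since the remaining score term vanishes in expectation.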
no code implementations • 23 Jun 2022 • Alexander Bauer, Shinichi Nakajima, Klaus-Robert Müller
We focus on a specific use case in anomaly detection where the distribution of normal samples is supported by a lower-dimensional manifold.
1 code implementation • 17 Jun 2022 • Lorenz Vaitl, Kim A. Nicoli, Shinichi Nakajima, Pan Kessel
Recent work has established a path-gradient estimator for simple variational Gaussian distributions and has argued that the path-gradient is particularly beneficial in the regime in which the variational distribution approaches the exact target distribution.
no code implementations • 11 Apr 2022 • Jannik Wolff, Tassilo Klein, Moin Nabi, Rahul G. Krishnan, Shinichi Nakajima
Machine learning systems are often deployed in domains that entail data from multiple modalities; in healthcare, for example, phenotypic and genotypic characteristics describe patients.
no code implementations • 26 Jan 2022 • Dennis Grinwald, Kirill Bykov, Shinichi Nakajima, Marina M.-C. Höhne
Explainable Artificial Intelligence (XAI) aims to make learning machines less opaque, and offers researchers and practitioners various tools to reveal the decision-making strategies of neural networks.
no code implementations • 22 Nov 2021 • Kim A. Nicoli, Christopher Anders, Lena Funcke, Tobias Hartung, Karl Jansen, Pan Kessel, Shinichi Nakajima, Paolo Stornati
Crucially, these models allow for the direct estimation of the free energy at a given point in parameter space.
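A sketch of why exact densities enable this (standard importance-sampling notation, not quoted from the paper): with samples $x_i \sim q$ from a model with tractable density $q$,

$$\hat{Z} = \frac{1}{N} \sum_{i=1}^{N} \frac{e^{-\beta H(x_i)}}{q(x_i)}, \qquad \hat{F} = -\frac{1}{\beta} \log \hat{Z},$$

so the free energy follows from an unbiased estimate of the partition function.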
no code implementations • 29 Sep 2021 • Jannik Wolff, Rahul G Krishnan, Lukas Ruff, Jan Nikolas Morshuis, Tassilo Klein, Shinichi Nakajima, Moin Nabi
Humans find structure in natural phenomena by absorbing stimuli from multiple input sources such as vision, text, and speech.
no code implementations • 23 Aug 2021 • Kirill Bykov, Marina M.-C. Höhne, Adelaida Creosteanu, Klaus-Robert Müller, Frederick Klauschen, Shinichi Nakajima, Marius Kloft
Bayesian approaches such as Bayesian Neural Networks (BNNs) have a limited form of transparency (model transparency) built in through their prior weight distribution, but they notably lack explanations of their predictions for given instances.
2 code implementations • 18 Jun 2021 • Kirill Bykov, Anna Hedström, Shinichi Nakajima, Marina M.-C. Höhne
For local explanation, stochasticity is known to help: a simple method, called SmoothGrad, has improved the visual quality of gradient-based attribution by adding noise to the input space and averaging the explanations of the noisy inputs.
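SmoothGrad is compact enough to sketch directly; a minimal NumPy version, where `grad_fn` is a hypothetical stand-in for the gradient of the model output with respect to its input:

```python
import numpy as np

def smoothgrad(x, grad_fn, n_samples=50, sigma=0.1, rng=None):
    """Average gradient attributions over noisy copies of the input.

    x:        input array
    grad_fn:  hypothetical callable returning d(output)/d(input) for one input
    sigma:    standard deviation of the Gaussian noise added to x
    """
    rng = rng or np.random.default_rng()
    total = np.zeros_like(x, dtype=float)
    for _ in range(n_samples):
        noisy = x + rng.normal(0.0, sigma, size=x.shape)
        total += grad_fn(noisy)        # attribution for one noisy copy
    return total / n_samples
```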
no code implementations • 25 May 2021 • Danny Panknin, Klaus-Robert Müller, Shinichi Nakajima
Assuming that a small number of initial samples are available, we derive the optimal training density that minimizes the generalization error of local polynomial smoothing (LPS) with its kernel bandwidth tuned locally. We adopt the mean integrated squared error (MISE) as the generalization criterion, and use the asymptotic behavior of the MISE as well as of the locally optimal bandwidths (LOB), the bandwidth function that minimizes the MISE in the asymptotic limit.
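In the usual notation, with $\hat{f}$ the LPS estimate of the regression function $f$, the criterion reads

$$\mathrm{MISE}(\hat{f}) = \mathbb{E}\left[ \int \big( \hat{f}(x) - f(x) \big)^2 \, \mathrm{d}x \right],$$

i.e., the squared error integrated over the input space and averaged over training sets.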
1 code implementation • 31 Aug 2020 • Vignesh Srinivasan, Klaus-Robert Müller, Wojciech Samek, Shinichi Nakajima
Domain translation is the task of finding correspondence between two domains.
no code implementations • 14 Jul 2020 • Kim A. Nicoli, Christopher J. Anders, Lena Funcke, Tobias Hartung, Karl Jansen, Pan Kessel, Shinichi Nakajima, Paolo Stornati
In this work, we demonstrate that applying deep generative machine learning models for lattice field theory is a promising route for solving problems where Markov Chain Monte Carlo (MCMC) methods are problematic.
1 code implementation • 16 Jun 2020 • Kirill Bykov, Marina M.-C. Höhne, Klaus-Robert Müller, Shinichi Nakajima, Marius Kloft
Explainable AI (XAI) aims to provide interpretations for predictions made by learning machines, such as deep neural networks, in order to make the machines more transparent to the user and, furthermore, trustworthy, e.g., for applications in safety-critical areas.
no code implementations • 5 Jun 2020 • Thomas Schnake, Oliver Eberle, Jonas Lederer, Shinichi Nakajima, Kristof T. Schütt, Klaus-Robert Müller, Grégoire Montavon
In this paper, we show that GNNs can in fact be naturally explained using higher-order expansions, i.e., by identifying groups of edges that jointly contribute to the prediction.
no code implementations • 20 Mar 2020 • David Lassner, Anne Baillot, Sergej Dogadov, Klaus-Robert Müller, Shinichi Nakajima
In addition to the findings based on the digital scholarly edition Berlin Intellectuals, we present a general framework for the analysis of text genesis that can be used in the context of other digital resources representing document variants.
no code implementations • 27 Dec 2019 • Alexander Bauer, Shinichi Nakajima
Considering the worst-case scenario, the junction tree algorithm remains the most general solution for exact MAP inference with polynomial run-time guarantees.
no code implementations • 29 Oct 2019 • Kim A. Nicoli, Shinichi Nakajima, Nils Strodthoff, Wojciech Samek, Klaus-Robert Müller, Pan Kessel
We propose a general framework for the estimation of observables with generative neural samplers, focusing on modern deep generative neural networks that provide an exact sampling probability.
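With an exact sampling probability $q(x)$ available, expectations under a target Boltzmann distribution $p(x) \propto e^{-\beta H(x)}$ can be recovered by reweighting; a standard self-normalized importance-sampling sketch in assumed notation:

$$\langle \mathcal{O} \rangle \approx \frac{\sum_{i=1}^{N} w(x_i)\, \mathcal{O}(x_i)}{\sum_{i=1}^{N} w(x_i)}, \qquad w(x) = \frac{e^{-\beta H(x)}}{q(x)}, \quad x_i \sim q.$$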
1 code implementation • 22 Oct 2019 • Maximilian Kohlbrenner, Alexander Bauer, Shinichi Nakajima, Alexander Binder, Wojciech Samek, Sebastian Lapuschkin
In this paper, we focus on a popular and widely used method of XAI, the Layer-wise Relevance Propagation (LRP).
no code implementations • 11 Apr 2019 • Vignesh Srinivasan, Ercan E. Kuruoglu, Klaus-Robert Müller, Wojciech Samek, Shinichi Nakajima
Many existing methods employ Gaussian random variables for exploring the data space to find the most adversarial (for attacking) or least adversarial (for defense) point.
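The generic Gaussian-exploration baseline described here fits in a few lines; a minimal sketch in which `loss_fn` is a hypothetical placeholder for the model's loss on a perturbed input:

```python
import numpy as np

def most_adversarial_gaussian(x, loss_fn, n_samples=100, sigma=0.05, rng=None):
    """Sample Gaussian perturbations of x and keep the one maximizing the loss."""
    rng = rng or np.random.default_rng()
    best_x, best_loss = x, loss_fn(x)
    for _ in range(n_samples):
        candidate = x + rng.normal(0.0, sigma, size=np.shape(x))
        candidate_loss = loss_fn(candidate)
        if candidate_loss > best_loss:   # more adversarial than the best so far
            best_x, best_loss = candidate, candidate_loss
    return best_x
```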
no code implementations • 26 Mar 2019 • Kim Nicoli, Pan Kessel, Nils Strodthoff, Wojciech Samek, Klaus-Robert Müller, Shinichi Nakajima
In this comment on "Solving Statistical Mechanics Using Variational Autoregressive Networks" by Wu et al., we propose a subtle yet powerful modification of their approach.
no code implementations • 27 Feb 2019 • Danny Panknin, Stefan Chmiela, Klaus-Robert Müller, Shinichi Nakajima
Inhomogeneities in real-world data, e.g., due to changes in the observation noise level or variations in the structural complexity of the source function, pose a unique set of challenges for statistical inference.
no code implementations • 29 Jun 2018 • Jacob Kauffmann, Grégoire Montavon, Luiz Alberto Lima, Shinichi Nakajima, Klaus-Robert Müller, Nico Görnitz
Detecting and explaining anomalies is a challenging task.
no code implementations • 30 May 2018 • Vignesh Srinivasan, Arturo Marban, Klaus-Robert Müller, Wojciech Samek, Shinichi Nakajima
Adversarial attacks on deep learning models have compromised their performance considerably.
no code implementations • 5 Sep 2017 • Alexander Bauer, Shinichi Nakajima, Nico Görnitz, Klaus-Robert Müller
Many statistical learning problems in natural language processing, including sequence tagging, sequence segmentation, and syntactic parsing, have been successfully approached by means of structured prediction methods.
no code implementations • ICML 2017 • János Höner, Shinichi Nakajima, Alexander Bauer, Klaus-Robert Müller, Nico Görnitz
Sybil detection is a crucial task to protect online social networks (OSNs) against intruders who try to manipulate automatic services provided by OSNs to their customers.
no code implementations • 11 Sep 2016 • Wikor Pronobis, Danny Panknin, Johannes Kirschnick, Vignesh Srinivasan, Wojciech Samek, Volker Markl, Manohar Kaul, Klaus-Robert Müller, Shinichi Nakajima
In this paper, we propose multiple purpose LSH (mp-LSH), which shares the hash codes for different dissimilarities.
no code implementations • 2 Sep 2016 • Shinichi Nakajima, Sebastian Krause, Dirk Weissenborn, Sven Schmeier, Nico Görnitz, Feiyu Xu
In relation extraction, a key process is to obtain good detectors that find relevant sentences describing the target relation.
1 code implementation • 15 Dec 2015 • Shinichi Nakajima, Ryota Tomioka, Masashi Sugiyama, S. Derin Babacan
In this paper, we clarify the behavior of VB learning in probabilistic PCA (or fully-observed matrix factorization).
no code implementations • 16 Jul 2015 • Stephan Mandt, Florian Wenzel, Shinichi Nakajima, John P. Cunningham, Christoph Lippert, Marius Kloft
Formulated as models for linear regression, linear mixed models (LMMs) have been restricted to continuous phenotypes.
no code implementations • NeurIPS 2014 • Shinichi Nakajima, Issei Sato, Masashi Sugiyama, Kazuho Watanabe, Hiroko Kobayashi
Latent Dirichlet allocation (LDA) is a popular generative model of various objects such as texts and images, where an object is expressed as a mixture of latent topics.
no code implementations • NeurIPS 2013 • Shinichi Nakajima, Akiko Takeda, S. Derin Babacan, Masashi Sugiyama, Ichiro Takeuchi
However, Bayesian learning is often obstructed by computational difficulty: the rigorous Bayesian learning is intractable in many models, and its variational Bayesian (VB) approximation is prone to suffer from local minima.
no code implementations • NeurIPS 2013 • Ichiro Takeuchi, Tatsuya Hongo, Masashi Sugiyama, Shinichi Nakajima
We introduce a novel formulation of multi-task learning (MTL) called parametric task learning (PTL) that can systematically handle infinitely many tasks parameterized by a continuous parameter.
no code implementations • NeurIPS 2012 • S. D. Babacan, Shinichi Nakajima, Minh Do
In this paper, we consider the problem of clustering data points into low-dimensional subspaces in the presence of outliers.
no code implementations • NeurIPS 2012 • Shinichi Nakajima, Ryota Tomioka, Masashi Sugiyama, S. D. Babacan
The variational Bayesian (VB) approach is one of the best tractable approximations to Bayesian estimation, and it has been demonstrated to perform well in many applications.
no code implementations • NeurIPS 2011 • Shinichi Nakajima, Masashi Sugiyama, S. D. Babacan
A recent study on fully-observed VBMF showed that, under a stronger assumption that the two factorized matrices are column-wise independent, the global optimal solution can be analytically computed.
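The setting, in assumed notation: an observed matrix $V$ is modeled as

$$V = B A^{\top} + \mathcal{E}, \qquad \mathcal{E}_{ij} \sim \mathcal{N}(0, \sigma^2),$$

with Gaussian priors on the factors $A$ and $B$. Under the column-wise independence constraint on the VB posterior, the cited analytic solution takes the form of a shrinkage of the singular values of $V$: each singular component is either truncated to zero or retained with reduced amplitude.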
no code implementations • NeurIPS 2010 • Shinichi Nakajima, Masashi Sugiyama, Ryota Tomioka
Bayesian methods of matrix factorization (MF) have been actively explored recently as promising alternatives to classical singular value decomposition.
no code implementations • NeurIPS 2007 • Masashi Sugiyama, Shinichi Nakajima, Hisashi Kashima, Paul V. Buenau, Motoaki Kawanabe
In this paper, we propose a direct importance estimation method that does not require the input density estimates.
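Here the importance $w(x) = p_{\mathrm{te}}(x) / p_{\mathrm{tr}}(x)$ between test and training densities is modeled directly; a sketch of the KLIEP-style formulation from this line of work (notation assumed):

$$\hat{w}(x) = \sum_{\ell=1}^{b} \alpha_\ell\, \varphi_\ell(x), \qquad \max_{\alpha \geq 0} \; \sum_{j=1}^{n_{\mathrm{te}}} \log \hat{w}\big(x_j^{\mathrm{te}}\big) \quad \text{s.t.} \quad \frac{1}{n_{\mathrm{tr}}} \sum_{i=1}^{n_{\mathrm{tr}}} \hat{w}\big(x_i^{\mathrm{tr}}\big) = 1,$$

so that $\hat{w}$ matches the test density in a KL sense while remaining a valid reweighting of the training sample.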