1 code implementation • 10 Mar 2024 • Kaspar Märtens, Christopher Yau
Common or shared factors could be important for explaining variation across modalities whereas other factors may be private and important only for the explanation of a single modality.
no code implementations • 19 Jan 2023 • Fabian Falck, Christopher Williams, Dominic Danks, George Deligiannidis, Christopher Yau, Chris Holmes, Arnaud Doucet, Matthew Willetts
U-Net architectures are ubiquitous in state-of-the-art deep learning, however their regularisation properties and relationship to wavelets are understudied.
1 code implementation • NeurIPS 2021 • Fabian Falck, Haoting Zhang, Matthew Willetts, George Nicholson, Christopher Yau, Chris Holmes
Work in deep clustering focuses on finding a single partition of data.
1 code implementation • 25 Jun 2020 • Kaspar Märtens, Christopher Yau
Our goal is to provide a feature-level variance decomposition, i.e., to decompose variation in the data by separating out the marginal additive effects of latent variables z and fixed inputs c from their non-linear interactions.
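As a minimal illustration of this kind of decomposition (a functional-ANOVA-style sketch on a toy grid, not the paper's actual model), the effect of a latent variable z and a covariate c can be split into a grand mean, marginal additive effects, and an interaction term that together reconstruct the original function exactly:

```python
import numpy as np

# Toy sketch: decompose f(z, c) evaluated on a grid into
# grand mean + f_z(z) + f_c(c) + interaction, by averaging
# over the other variable. All names here are illustrative.
z = np.linspace(-1.0, 1.0, 50)
c = np.array([0.0, 1.0])  # e.g. a binary covariate

# A toy f(z, c) with additive effects plus a genuine interaction.
F = np.sin(3 * z)[:, None] + 0.5 * c[None, :] + 0.3 * np.outer(z, c)

grand_mean = F.mean()
f_z = F.mean(axis=1) - grand_mean                      # marginal effect of z
f_c = F.mean(axis=0) - grand_mean                      # marginal effect of c
f_int = F - grand_mean - f_z[:, None] - f_c[None, :]   # interaction residual

# The components reconstruct f exactly by construction.
print(np.allclose(F, grand_mean + f_z[:, None] + f_c[None, :] + f_int))
```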
1 code implementation • 6 Mar 2020 • Kaspar Märtens, Christopher Yau
Variational Autoencoders (VAEs) provide a flexible and scalable framework for non-linear dimensionality reduction.
no code implementations • 28 Jun 2019 • Tammo Rukat, Christopher Yau
We build upon probabilistic models for Boolean matrix and Boolean tensor factorisation that have recently been shown to solve these problems with unprecedented accuracy and to enable posterior inference to scale to billions of observations.
2 code implementations • 16 Oct 2018 • Kaspar Märtens, Kieran R. Campbell, Christopher Yau
The interpretation of complex high-dimensional data typically requires the use of dimensionality reduction techniques to extract explanatory low-dimensional representations.
1 code implementation • ICML 2018 • Tammo Rukat, Chris Holmes, Christopher Yau
Boolean tensor decomposition approximates data of multi-way binary relationships as a product of interpretable low-rank binary factors, following the rules of Boolean algebra.
1 code implementation • 11 May 2018 • Tammo Rukat, Chris C. Holmes, Christopher Yau
Boolean tensor decomposition approximates data of multi-way binary relationships as a product of interpretable low-rank binary factors, following the rules of Boolean algebra.
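The Boolean product of the factors can be sketched as follows (a hypothetical rank-2 CP-style example, with factor names A, B, C chosen for illustration): each tensor entry is the OR over components of the AND of the corresponding factor entries, X[i, j, k] = OR_l (A[i, l] AND B[j, l] AND C[k, l]).

```python
import numpy as np

# Hypothetical rank-2 Boolean decomposition of a 3-way binary tensor:
# X[i, j, k] = OR_l (A[i, l] AND B[j, l] AND C[k, l]).
rng = np.random.default_rng(1)
A = rng.integers(0, 2, size=(4, 2))  # mode-1 binary factor
B = rng.integers(0, 2, size=(3, 2))  # mode-2 binary factor
C = rng.integers(0, 2, size=(5, 2))  # mode-3 binary factor

# Integer sum over components, thresholded at > 0, realises OR-of-ANDs.
X = np.einsum('il,jl,kl->ijk', A, B, C) > 0
print(X.shape)  # (4, 3, 5)
```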
no code implementations • 24 Mar 2017 • Kaspar Märtens, Michalis K. Titsias, Christopher Yau
Bayesian inference for factorial hidden Markov models is challenging due to the exponentially sized latent variable space.
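The exponential blow-up is easy to see with a little arithmetic (a sketch, assuming K independent binary chains): the joint latent space has 2**K states, and exact forward-backward recursions scale with the square of that count.

```python
# Size of the joint latent space of a factorial HMM with K binary chains,
# and the per-step cost of a naive exact forward-backward pass (~states^2).
for K in (5, 10, 20):
    states = 2 ** K
    print(K, states, states ** 2)
```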
no code implementations • NeurIPS 2017 • Ho Chung Leon Law, Christopher Yau, Dino Sejdinovic
Kernel embeddings of distributions and the Maximum Mean Discrepancy (MMD), the resulting distance between distributions, are useful tools for fully nonparametric two-sample testing and learning on distributions.
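A minimal sketch of MMD-based two-sample comparison (using a Gaussian RBF kernel and the standard biased estimator; the function names and bandwidth are illustrative, not from the paper):

```python
import numpy as np

def rbf_kernel(a, b, gamma=1.0):
    # Gaussian RBF kernel matrix between rows of a and rows of b.
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def mmd2_biased(x, y, gamma=1.0):
    # Biased estimate of the squared MMD between samples x and y:
    # mean k(x, x') + mean k(y, y') - 2 mean k(x, y).
    return (rbf_kernel(x, x, gamma).mean()
            + rbf_kernel(y, y, gamma).mean()
            - 2.0 * rbf_kernel(x, y, gamma).mean())

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=(200, 1))
y = rng.normal(0.0, 1.0, size=(200, 1))  # same distribution as x
z = rng.normal(3.0, 1.0, size=(200, 1))  # shifted distribution

print(mmd2_biased(x, y))  # small: distributions match
print(mmd2_biased(x, z))  # large: distributions differ
```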
no code implementations • ICML 2017 • Tammo Rukat, Chris C. Holmes, Michalis K. Titsias, Christopher Yau
Boolean matrix factorisation aims to decompose a binary data matrix into an approximate Boolean product of two low-rank binary matrices: one containing meaningful patterns, the other quantifying how the observations can be expressed as a combination of these patterns.
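Concretely, the Boolean product replaces the usual sum with a logical OR of ANDs: entry (i, j) of the reconstruction is 1 if observation i uses any pattern that is active in feature j. A tiny worked example (illustrative factors only):

```python
import numpy as np

# Boolean product of two small binary factors:
# X[i, j] = OR_l (U[i, l] AND V[l, j]).
U = np.array([[1, 0],
              [0, 1],
              [1, 1]])   # observations x patterns (which patterns each row uses)
V = np.array([[1, 1, 0],
              [0, 1, 1]])  # patterns x features (which features each pattern covers)

# Integer matrix product thresholded at > 0 realises the OR-of-ANDs.
X = (U @ V) > 0
print(X.astype(int))
```

Row three of U uses both patterns, so its reconstruction is the element-wise OR of the two pattern rows of V.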
1 code implementation • 27 Oct 2016 • Kieran R. Campbell, Christopher Yau
To learn such a continuous disease score one could infer a latent variable from dynamic "omics" data such as RNA-seq that correlates with an outcome of interest such as survival time.
no code implementations • NeurIPS 2014 • Michalis K. Titsias, Christopher Yau
We introduce a novel sampling algorithm for Markov chain Monte Carlo-based Bayesian inference for factorial hidden Markov models.
no code implementations • 5 Nov 2013 • Michalis K. Titsias, Christopher Yau, Christopher C. Holmes
Hidden Markov models (HMMs) are one of the most widely used statistical methods for analyzing sequence data.