no code implementations • 8 Jun 2023 • Bethany Connolly, Kim Moore, Tobias Schwedes, Alexander Adam, Gary Willis, Ilya Feige, Christopher Frye
Understanding causality should be a core requirement of any attempt to build real impact through AI.
no code implementations • 1 Jan 2021 • Benoit Gaujac, Ilya Feige, David Barber
We further study the trade-off between disentanglement and reconstruction on more difficult data sets with unknown generative factors, where we expect improved reconstructions due to the flexibility of the WAE paradigm.
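As a point of reference, the WAE objective decouples the reconstruction term from the latent-matching term, which is where this trade-off lives. Below is a minimal sketch of a WAE-MMD-style loss; the RBF kernel, squared-error reconstruction, and `beta` weight are illustrative choices, not taken from the paper.

```python
import torch

def rbf_mmd2(z, z_prior, sigma=1.0):
    """Biased MMD^2 estimate between encoded codes and prior samples
    (RBF kernel); kept deliberately simple for illustration."""
    def k(a, b):
        return torch.exp(-torch.cdist(a, b).pow(2) / (2 * sigma ** 2))
    return k(z, z).mean() + k(z_prior, z_prior).mean() - 2 * k(z, z_prior).mean()

def wae_loss(x, x_recon, z, beta=10.0):
    # Only the aggregate code distribution is pushed toward the prior,
    # leaving the reconstruction term free -- the flexibility the
    # snippet above refers to.
    recon = torch.nn.functional.mse_loss(x_recon, x)
    return recon + beta * rbf_mmd2(z, torch.randn_like(z))
```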
no code implementations • 23 Oct 2020 • Alex Mansbridge, Gregory Barbour, Davide Piras, Michael Murray, Christopher Frye, Ilya Feige, David Barber
In this work, our contributions are two-fold: first, by adapting state-of-the-art techniques from representation learning, we introduce a novel approach to learning local differential privacy (LDP) mechanisms.
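For contrast with learned mechanisms, the textbook fixed LDP mechanism is randomized response; a minimal sketch follows (a standard construction, not the paper's method):

```python
import numpy as np

def randomized_response(bit: int, epsilon: float, rng: np.random.Generator) -> int:
    """Classic epsilon-LDP mechanism for one private bit: report the truth
    with probability e^eps / (e^eps + 1), otherwise flip it, so an observer's
    posterior odds on the true bit shift by at most a factor of e^eps."""
    p_truth = np.exp(epsilon) / (np.exp(epsilon) + 1.0)
    return bit if rng.random() < p_truth else 1 - bit

rng = np.random.default_rng(0)
noisy_reports = [randomized_response(1, epsilon=1.0, rng=rng) for _ in range(5)]
```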
no code implementations • 14 Oct 2020 • Tom Begley, Tobias Schwedes, Christopher Frye, Ilya Feige
Moreover, motivated by the linearity of Shapley explainability, we propose a meta-algorithm for applying existing training-time fairness interventions, wherein one trains a perturbation to the original model rather than an entirely new model.
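A minimal sketch of that meta-algorithm, assuming an additive perturbation and a demographic-parity-style penalty (both are illustrative choices; `f`, `g`, and the toy data are stand-ins, not from the paper):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
d, lam = 8, 1.0

f = nn.Linear(d, 1)                        # stand-in for the original, frozen model
for p in f.parameters():
    p.requires_grad_(False)

g = nn.Sequential(nn.Linear(d, 16), nn.ReLU(), nn.Linear(16, 1))  # trainable perturbation
opt = torch.optim.Adam(g.parameters(), lr=1e-2)

x = torch.randn(256, d)                    # toy features
a = (torch.rand(256) < 0.5).float()        # toy protected attribute
y = (x[:, 0] + 0.5 * a > 0).float()        # toy labels

for _ in range(200):
    logits = (f(x) + g(x)).squeeze(-1)     # deployed predictor is f + g
    bce = nn.functional.binary_cross_entropy_with_logits(logits, y)
    gap = (logits[a == 1].mean() - logits[a == 0].mean()).abs()  # illustrative fairness penalty
    opt.zero_grad()
    (bce + lam * gap).backward()
    opt.step()
```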
no code implementations • 14 Oct 2020 • Damien de Mijolla, Christopher Frye, Markus Kunesch, John Mansir, Ilya Feige
The importance of explainability in machine learning continues to grow, as both neural-network architectures and the data they model become increasingly complex.
no code implementations • 7 Oct 2020 • Benoit Gaujac, Ilya Feige, David Barber
Probabilistic models with hierarchical-latent-variable structures provide state-of-the-art results amongst non-autoregressive, unsupervised density-based models.
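To fix notation, a two-level hierarchy factorizes as p(x, z1, z2) = p(x | z1) p(z1 | z2) p(z2) and is sampled ancestrally; the networks below are toy stand-ins, purely for illustration:

```python
import torch
import torch.nn as nn

p_z1_given_z2 = nn.Linear(8, 2 * 4)      # emits mean and log-variance of z1
p_x_given_z1 = nn.Linear(4, 16)          # emits the mean of p(x | z1)

z2 = torch.randn(1, 8)                   # top-level latent from N(0, I)
mu, log_var = p_z1_given_z2(z2).chunk(2, dim=-1)
z1 = mu + log_var.mul(0.5).exp() * torch.randn_like(mu)   # z1 ~ p(z1 | z2)
x_mean = p_x_given_z1(z1)                # parameters of p(x | z1)
```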
no code implementations • 7 Oct 2020 • Benoit Gaujac, Ilya Feige, David Barber
We further study the trade-off between disentanglement and reconstruction on more difficult data sets with unknown generative factors, where the flexibility of the WAE reconstruction term improves reconstructions.
no code implementations • ICLR 2021 • Christopher Frye, Damien de Mijolla, Tom Begley, Laurence Cowton, Megan Stanley, Ilya Feige
Explainability in AI is crucial for model development, compliance with regulation, and providing operational nuance to predictions.
1 code implementation • NeurIPS 2020 • Christopher Frye, Colin Rowat, Ilya Feige
We introduce a less restrictive framework, Asymmetric Shapley values (ASVs), which are rigorously founded on a set of axioms, applicable to any AI system, and flexible enough to incorporate any causal structure known to be respected by the data.
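A toy exact implementation makes the restriction concrete: classic Shapley values average marginal contributions over all feature orderings, while ASVs average only over orderings consistent with the known causal structure. This brute-force version is O(n!) and purely illustrative; in practice one would sample orderings. Function names are placeholders.

```python
from itertools import permutations

def asymmetric_shapley(value, features, precedes):
    """value(S): coalition value for a frozenset of features (user-supplied).
    precedes(i, j): True if feature i is a known causal ancestor of j;
    ancestors must then appear before descendants in every ordering used."""
    n = len(features)
    valid = [p for p in permutations(features)
             if all(not precedes(p[j], p[i])
                    for i in range(n) for j in range(i + 1, n))]
    phi = {f: 0.0 for f in features}
    for p in valid:
        s = frozenset()
        for f in p:
            phi[f] += value(s | {f}) - value(s)
            s = s | {f}
    return {f: v / len(valid) for f, v in phi.items()}

# With no causal knowledge (precedes always False) this reduces to the
# classic symmetric Shapley value.
phi = asymmetric_shapley(value=lambda s: float(len(s)),
                         features=[0, 1, 2],
                         precedes=lambda i, j: (i, j) == (0, 1))
```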
no code implementations • 24 Jun 2019 • Anders Andreassen, Ilya Feige, Christopher Frye, Matthew D. Schwartz
We refer to this refined approach as Binary JUNIPR.
High Energy Physics - Phenomenology
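The discriminant behind Binary JUNIPR is a log-likelihood ratio between two learned jet densities; a minimal sketch, with the two log-probability callables standing in for trained JUNIPR models (names and toy densities are placeholders):

```python
def binary_junipr_score(jet, log_prob_a, log_prob_b):
    """Log-likelihood-ratio discriminant: positive scores favour class A."""
    return log_prob_a(jet) - log_prob_b(jet)

# Toy usage with stand-in densities.
score = binary_junipr_score(jet=[0.3, 0.7],
                            log_prob_a=lambda j: -sum(j),
                            log_prob_b=lambda j: -2 * sum(j))
is_class_a = score > 0.0    # threshold chosen at the desired operating point
```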
no code implementations • 18 Feb 2019 • Christopher Frye, Ilya Feige
Autonomous agents trained via reinforcement learning present numerous safety concerns: reward hacking, negative side effects, and unsafe exploration, among others.
no code implementations • ICLR 2019 • Ilya Feige
Representations learnt through deep neural networks tend to be highly informative, but opaque in terms of what information they learn to encode.
no code implementations • 12 Jun 2018 • Alex Mansbridge, Roberto Fierimonte, Ilya Feige, David Barber
Powerful generative models, particularly in natural language modelling, are commonly trained by maximizing a variational lower bound on the data log-likelihood.
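For a Gaussian latent, that lower bound takes the familiar ELBO form sketched below; this generic form is not specific to the paper, whose decoders are autoregressive language models:

```python
import torch

def gaussian_elbo(recon_log_prob, mu, log_var):
    """Single-sample ELBO, E_q[log p(x|z)] - KL(q(z|x) || N(0, I)), for an
    encoder q(z|x) = N(mu, diag(exp(log_var))); the KL term is analytic and
    recon_log_prob is log p(x|z) for a reparameterized sample z."""
    kl = 0.5 * (mu.pow(2) + log_var.exp() - 1.0 - log_var).sum(dim=-1)
    return recon_log_prob - kl
```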
no code implementations • 12 Jun 2018 • Benoit Gaujac, Ilya Feige, David Barber
Generative models with both discrete and continuous latent variables are highly motivated by the structure of many real-world data sets.
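One common way to keep such mixed latents trainable end to end is to pair a Gumbel-Softmax relaxation of the discrete part with the usual Gaussian reparameterization of the continuous part; the sketch below shows that generic construction, not necessarily the paper's:

```python
import torch
import torch.nn.functional as F

def sample_joint_latent(cat_logits, mu, log_var, tau=0.5):
    """Reparameterized sample of a joint (discrete, continuous) latent."""
    c = F.gumbel_softmax(cat_logits, tau=tau)               # relaxed one-hot
    z = mu + log_var.mul(0.5).exp() * torch.randn_like(mu)  # z ~ N(mu, sigma^2)
    return torch.cat([c, z], dim=-1)

latent = sample_joint_latent(torch.zeros(1, 10), torch.zeros(1, 4), torch.zeros(1, 4))
```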
no code implementations • 25 Apr 2018 • Anders Andreassen, Ilya Feige, Christopher Frye, Matthew D. Schwartz
As a third application, JUNIPR models can reweight events from one (e.g. simulated) data set to agree with distributions from another (e.g. experimental) data set.
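The reweighting itself reduces to per-event importance weights, a ratio of the two learned densities; a minimal sketch, with the log-probability callables standing in for JUNIPR models trained on each data set (names and toy densities are placeholders):

```python
import numpy as np

def junipr_weights(events, log_prob_sim, log_prob_target):
    """Importance weights p_target(e) / p_sim(e), so that weighted simulated
    events reproduce target-distribution expectations."""
    return np.exp([log_prob_target(e) - log_prob_sim(e) for e in events])

weights = junipr_weights([[0.1], [0.4]],
                         log_prob_sim=lambda e: -sum(e),
                         log_prob_target=lambda e: -2 * sum(e))
```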