no code implementations • 19 Jun 2023 • Kamil Adamczewski, Yingchen He, Mijung Park
To tackle this challenge, we take advantage of the fact that neural networks are overparameterized, which allows us to improve neural network training with differential privacy.
no code implementations • 25 May 2023 • Saiyue Lyu, Michael F. Liu, Margarita Vinaroz, Mijung Park
In this paper, we further improve the current state of differentially private diffusion models (DMs) by adopting Latent Diffusion Models (LDMs).
no code implementations • 8 Mar 2023 • Kamil Adamczewski, Mijung Park
We study the interplay between neural network pruning and differential privacy, through the two modes of parameter updates.
no code implementations • 3 Mar 2023 • Yilin Yang, Kamil Adamczewski, Danica J. Sutherland, Xiaoxiao Li, Mijung Park
Maximum mean discrepancy (MMD) is a particularly useful distance metric for differentially private data generation: when used with finite-dimensional features it allows us to summarize and privatize the data distribution once, which we can repeatedly use during generator training without further privacy loss.
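As a minimal illustration of that one-shot privatization idea, here is a numpy sketch assuming random Fourier features and a standard Gaussian-mechanism calibration (illustrative assumptions, not the authors' released code):

```python
import numpy as np

rng = np.random.default_rng(0)

def rff(x, W, b):
    # Random Fourier features for a Gaussian kernel; ||phi(x)|| <= sqrt(2)
    D = W.shape[1]
    return np.sqrt(2.0 / D) * np.cos(x @ W + b)

d, D = 10, 500
W = rng.normal(size=(d, D))                 # frequencies (assumed unit bandwidth)
b = rng.uniform(0, 2 * np.pi, size=D)

data = rng.normal(size=(1000, d))           # stand-in for the private dataset
n, eps, delta = len(data), 1.0, 1e-5

# Summarize and privatize once: replacing one record shifts the mean
# embedding by at most 2*sqrt(2)/n in L2 norm.
sens = 2 * np.sqrt(2) / n
sigma = sens * np.sqrt(2 * np.log(1.25 / delta)) / eps
mu_priv = rff(data, W, b).mean(axis=0) + rng.normal(scale=sigma, size=D)

def mmd_sq(synthetic):
    # Squared MMD to the privatized embedding; reusable at every
    # generator step with no further privacy loss.
    return np.sum((rff(synthetic, W, b).mean(axis=0) - mu_priv) ** 2)

print(mmd_sq(rng.normal(size=(200, d))))
```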
1 code implementation • 25 May 2022 • Fredrik Harder, Milad Jalali Asadabadi, Danica J. Sutherland, Mijung Park
Training even moderately sized generative models with differentially private stochastic gradient descent (DP-SGD) is difficult: the level of noise required for a reasonable privacy guarantee is simply too large.
no code implementations • 25 Nov 2021 • Margarita Vinaroz, Mijung Park
We provide a theoretical analysis of the privacy-accuracy trade-off in the posterior estimates under our method, called differentially private stochastic expectation propagation (DP-SEP).
1 code implementation • 9 Jun 2021 • Margarita Vinaroz, Mohammad-Amin Charusaie, Frederik Harder, Kamil Adamczewski, Mijung Park
Hence, a relatively low order of Hermite polynomial features can approximate the mean embedding of the data distribution more accurately than a significantly larger number of random features.
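For concreteness, a small sketch of what a low-order Hermite feature embedding looks like (illustrative only; the paper weights each polynomial order with coefficients from the Gaussian kernel's Hermite expansion, which is omitted here):

```python
import numpy as np
from numpy.polynomial.hermite_e import hermevander

rng = np.random.default_rng(0)
x = rng.normal(size=2000)            # 1-D toy data

K = 7                                # low polynomial order
phi = hermevander(x, K)              # He_0..He_7 evaluated at each point
mean_embedding = phi.mean(axis=0)    # just K+1 numbers summarize the data
print(mean_embedding.round(3))
```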
1 code implementation • 10 Nov 2020 • Kamil Adamczewski, Mijung Park
We introduce Dirichlet pruning, a novel post-processing technique to transform a large neural network model into a compressed one.
no code implementations • 26 Oct 2020 • Kamil Adamczewski, Frederik Harder, Mijung Park
We introduce a simple and intuitive framework that provides quantitative explanations of statistical models through the probabilistic assessment of input feature importance.
1 code implementation • 26 Feb 2020 • Frederik Harder, Kamil Adamczewski, Mijung Park
We propose a differentially private data generation paradigm using random feature representations of kernel mean embeddings when comparing the distribution of true data with that of synthetic data.
1 code implementation • 15 Oct 2019 • Frederik Harder, Jonas Köhler, Max Welling, Mijung Park
Developing a differentially private deep learning algorithm is challenging, due to the difficulty in analyzing the sensitivity of objective functions that are typically used to train deep neural networks.
no code implementations • 11 Oct 2019 • Mijung Park, Margarita Vinaroz, Wittawat Jitkrittum
The sparse vector technique (SVT) incurs a privacy cost only when a condition (whether a quantity of interest is above or below a threshold) is met.
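The classic AboveThreshold instantiation makes this concrete; a generic sketch of SVT, not the paper's specific use of it:

```python
import numpy as np

def above_threshold(queries, threshold, eps, sens=1.0, rng=None):
    rng = rng or np.random.default_rng()
    noisy_t = threshold + rng.laplace(scale=2 * sens / eps)
    for i, q in enumerate(queries):
        if q + rng.laplace(scale=4 * sens / eps) >= noisy_t:
            return i        # budget eps is spent only on this one answer
    return None             # no query exceeded the threshold

print(above_threshold([0.1, 0.3, 2.5, 0.2], threshold=1.0, eps=1.0))
```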
no code implementations • 3 Jul 2019 • Kamil Adamczewski, Mijung Park
Convolutional neural networks (CNNs) have made a dramatic impact on science, technology, and industry in recent years, yet the theoretical understanding of CNN architecture design remains surprisingly vague.
1 code implementation • 5 Jun 2019 • Frederik Harder, Matthias Bauer, Mijung Park
Interpretable predictions, where it is clear why a machine learning model has made a particular decision, can compromise privacy by revealing the characteristics of individual data points.
no code implementations • 29 May 2019 • Si Kai Lee, Luigi Gresele, Mijung Park, Krikamol Muandet
The use of inverse probability weighting (IPW) methods to estimate the causal effect of treatments from observational studies is widespread in econometrics, medicine and social sciences.
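For reference, the textbook IPW estimator of the average treatment effect (the generic estimator, not the paper's proposed method) weights each outcome by the inverse of the propensity of the treatment actually received:

```python
import numpy as np

def ipw_ate(y, t, e):
    # y: outcomes, t in {0,1}: treatment, e: propensities P(t=1 | x)
    return np.mean(t * y / e) - np.mean((1 - t) * y / (1 - e))

rng = np.random.default_rng(0)
x = rng.uniform(size=10000)
e = 0.2 + 0.6 * x                           # propensities (assumed known here)
t = rng.binomial(1, e)
y = 2.0 * t + x + rng.normal(size=10000)    # true treatment effect is 2
print(ipw_ate(y, t, e))                     # approximately 2
```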
2 code implementations • 7 Feb 2019 • Changyong Oh, Kamil Adamczewski, Mijung Park
We propose a new variational family for Bayesian neural networks.
2 code implementations • 1 Aug 2018 • Anant Raj, Ho Chung Leon Law, Dino Sejdinovic, Mijung Park
As a result, we obtain a simple chi-squared test whose statistic depends on the mean and covariance of the empirical differences between the samples, which we perturb for a privacy guarantee.
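A rough sketch of one common finite-feature form of such a statistic, with an assumed Gaussian perturbation standing in for the paper's calibrated mechanism:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def dp_chi2_stat(feat_x, feat_y, noise=0.05):
    z = feat_x - feat_y                      # empirical feature differences (n, J)
    n, J = z.shape
    zbar = z.mean(axis=0) + rng.normal(scale=noise, size=J)  # perturbed mean
    S = np.cov(z, rowvar=False) + noise * np.eye(J)          # perturbed covariance
    stat = n * zbar @ np.linalg.solve(S, zbar)
    return stat, stats.chi2.sf(stat, df=J)   # asymptotically chi2_J under the null

fx = rng.normal(size=(500, 3))               # toy features of the two samples
fy = rng.normal(size=(500, 3))
print(dp_chi2_stat(fx, fy))
```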
1 code implementation • 1 Nov 2016 • Mijung Park, James Foulds, Kamalika Chaudhuri, Max Welling
Many applications of Bayesian data analysis involve sensitive information, motivating methods which ensure that privacy is protected.
no code implementations • 14 Sep 2016 • Mijung Park, James Foulds, Kamalika Chaudhuri, Max Welling
We develop a privatised stochastic variational inference method for Latent Dirichlet Allocation (LDA).
no code implementations • 24 May 2016 • Mijung Park, Max Welling
In particular, IRLS for L1 minimisation under the linear model provides a closed-form solution at each step: a simple multiplication of the inverse of the weighted second-moment matrix by the weighted first-moment vector.
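In code, one such IRLS step looks as follows. This is a minimal non-private sketch of iteratively reweighted least squares for least-absolute-deviations (L1) regression, one common reading of the closed-form step described; the private variant's perturbation of the moment statistics is omitted:

```python
import numpy as np

def irls_l1(X, y, iters=50, floor=1e-6):
    theta = np.zeros(X.shape[1])
    for _ in range(iters):
        w = 1.0 / np.maximum(np.abs(y - X @ theta), floor)  # reweight by residuals
        XtW = X.T * w                                       # X^T W
        theta = np.linalg.solve(XtW @ X, XtW @ y)           # (X^T W X)^{-1} X^T W y
    return theta

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + rng.laplace(size=200)
print(irls_l1(X, y).round(2))
```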
1 code implementation • 23 May 2016 • Mijung Park, Jimmy Foulds, Kamalika Chaudhuri, Max Welling
The iterative nature of the expectation maximization (EM) algorithm presents a challenge for privacy-preserving estimation, as each iteration increases the amount of noise needed.
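A toy sketch of why that matters, using basic composition with Laplace noise on the M-step sufficient statistics (sensitivities and the budget split below are loose illustrations, not the paper's mechanism): a fixed total budget must be divided across iterations, so more iterations force more noise per update.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.clip(np.concatenate([rng.normal(-2, 1, 500),
                            rng.normal(2, 1, 500)]), -5, 5)

T, eps_total = 10, 1.0
eps_it = eps_total / T                  # split a fixed budget across T rounds
mu = np.array([-1.0, 1.0])              # means of two unit-variance components

for _ in range(T):
    # E-step: responsibilities of each component for each point
    logp = -0.5 * (x[None, :] - mu[:, None]) ** 2
    r = np.exp(logp - logp.max(axis=0))
    r /= r.sum(axis=0)
    # M-step from noisy sufficient statistics; noise scale grows with T
    for k in range(2):
        n_k = r[k].sum() + rng.laplace(scale=1.0 / (eps_it / 2))
        s_k = (r[k] * x).sum() + rng.laplace(scale=5.0 / (eps_it / 2))
        mu[k] = s_k / max(n_k, 1.0)

print(mu.round(2))
```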
no code implementations • NeurIPS 2015 • Mijung Park, Gergo Bohner, Jakob H. Macke
Neural population activity often exhibits rich variability.
no code implementations • 9 Feb 2015 • Mijung Park, Wittawat Jitkrittum, Dino Sejdinovic
Complicated generative models often result in a situation where computing the likelihood of observed data is intractable, while simulating from the conditional density given a parameter value is relatively easy.
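This is the standard approximate Bayesian computation (ABC) setting; a minimal rejection-ABC sketch (generic ABC, not the paper's kernel-embedding variant):

```python
import numpy as np

rng = np.random.default_rng(0)
y_obs = rng.normal(loc=1.5, size=100)            # observed data (toy)

def simulate(theta, n=100):
    return rng.normal(loc=theta, size=n)          # cheap to simulate

# Keep prior draws whose simulated summary lands near the observed one
accepted = [theta for theta in rng.uniform(-5, 5, size=5000)
            if abs(simulate(theta).mean() - y_obs.mean()) < 0.1]
print(len(accepted), np.mean(accepted))           # posterior mean near 1.5
```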
no code implementations • NeurIPS 2014 • Anqi Wu, Mijung Park, Oluwasanmi O. Koyejo, Jonathan W. Pillow
Classical sparse regression methods, such as the lasso and automatic relevance determination (ARD), model parameters as independent a priori, and therefore do not exploit such dependencies.
no code implementations • NeurIPS 2015 • Mijung Park, Wittawat Jitkrittum, Ahmad Qamar, Zoltan Szabo, Lars Buesing, Maneesh Sahani
We introduce the Locally Linear Latent Variable Model (LL-LVM), a probabilistic model for non-linear manifold discovery that describes a joint distribution over observations, their manifold coordinates and locally linear maps conditioned on a set of neighbourhood relationships.
no code implementations • 12 Oct 2014 • Mijung Park, Jakob H. Macke
Here, we introduce a hierarchical statistical model of neural population activity that models both neural population dynamics and inter-trial modulations in firing rates.
no code implementations • NeurIPS 2013 • Mijung Park, Jonathan W. Pillow
In typical experiments with naturalistic or flickering spatiotemporal stimuli, RFs are very high-dimensional, due to the large number of coefficients needed to specify an integration profile across time and space.
no code implementations • NeurIPS 2012 • Mijung Park, Jonathan W. Pillow
Active learning can substantially improve the yield of neurophysiology experiments by adaptively selecting stimuli to probe a neuron's receptive field (RF) in real time.
no code implementations • NeurIPS 2011 • Mijung Park, Greg Horwitz, Jonathan W. Pillow
With simulated experiments, we show that optimal design substantially reduces the amount of data required to estimate this nonlinear combination rule.