no code implementations • 8 Mar 2023 • Zhun Deng, Cynthia Dwork, Linjun Zhang
Fairness is captured by incorporating demographic subgroups into the class of functions $\mathcal{C}$.
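A minimal sketch of one way this plays out in practice (illustrative names and tolerance, not taken from the paper): auditing a predictor for multiaccuracy by checking that its residuals are nearly unbiased on each subgroup indicator in $\mathcal{C}$.

```python
import numpy as np

def multiaccuracy_violations(preds, labels, subgroups, alpha=0.05):
    """Audit a predictor over a class C of subgroup indicators:
    flag each subgroup whose mean residual exceeds the tolerance alpha."""
    residual = np.asarray(preds) - np.asarray(labels)
    violations = {}
    for name, mask in subgroups.items():
        mask = np.asarray(mask, dtype=bool)
        if mask.any():
            bias = residual[mask].mean()   # estimate of E[f(x) - y | c(x) = 1]
            if abs(bias) > alpha:
                violations[name] = bias
    return violations
```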
no code implementations • 21 Jan 2023 • Cynthia Dwork, Daniel Lee, Huijia Lin, Pranay Tankala
We identify and explore connections between the recent literature on multi-group fairness for prediction algorithms and the pseudorandomness notions of leakage-resilience and graph regularity.
1 code implementation • 6 Nov 2022 • Travis Dick, Cynthia Dwork, Michael Kearns, Terrance Liu, Aaron Roth, Giuseppe Vietri, Zhiwei Steven Wu
Our attacks significantly outperform those that are based only on access to a public distribution or population from which the private dataset $D$ was sampled, demonstrating that they are exploiting information in the aggregate statistics $Q(D)$, and not simply the overall structure of the distribution.
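A toy illustration of the underlying phenomenon, in the spirit of classical reconstruction attacks rather than the paper's method: given enough noisy subset-sum statistics over a secret bit vector, least-squares inversion recovers most of the bits.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 100, 400                      # n secret bits, m noisy subset-sum queries
secret = rng.integers(0, 2, n)       # the private dataset D, as a bit vector

Q = rng.integers(0, 2, (m, n))       # random subset-sum queries
answers = Q @ secret + rng.normal(0, 2.0, m)   # noisy aggregate statistics Q(D)

# Least-squares reconstruction, then round each coordinate to {0, 1}
estimate, *_ = np.linalg.lstsq(Q, answers, rcond=None)
recovered = (estimate > 0.5).astype(int)
print("fraction of bits recovered:", (recovered == secret).mean())
```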
no code implementations • 20 Jul 2022 • Elbert Du, Cynthia Dwork
Differential privacy is known to protect against threats to validity arising from adaptive, or exploratory, data analysis -- even when the analyst adversarially searches for a statistical estimate that diverges from the true value of the quantity of interest on the underlying population.
no code implementations • 4 Nov 2021 • Maya Burhanpurkar, Zhun Deng, Cynthia Dwork, Linjun Zhang
Predictors map individual instances in a population to the interval $[0, 1]$.
no code implementations • 26 Nov 2020 • Cynthia Dwork, Michael P. Kim, Omer Reingold, Guy N. Rothblum, Gal Yona
Prediction algorithms assign to individuals numbers that are popularly understood as individual "probabilities" -- what is the probability of 5-year survival after cancer diagnosis?
no code implementations • ICML 2020 • Zhun Deng, Cynthia Dwork, Jialiang Wang, Linjun Zhang
Robust optimization is widely used in modern data science, especially in adversarial training.
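For context, a common instance of robust optimization in adversarial training is minimizing the worst-case loss over an $\ell_\infty$ perturbation ball. The sketch below takes one FGSM-style inner-maximization step for logistic regression; it is illustrative, not the paper's construction.

```python
import numpy as np

def fgsm_robust_loss(w, X, y, eps):
    """Worst-case logistic loss under an l_inf perturbation of size eps,
    approximated by one FGSM step: move each x in the direction that
    increases the loss, i.e. eps * sign(d loss / d x)."""
    margins = y * (X @ w)                      # labels y in {-1, +1}
    grad_x = -(1.0 / (1.0 + np.exp(margins)))[:, None] * y[:, None] * w[None, :]
    X_adv = X + eps * np.sign(grad_x)
    return np.log1p(np.exp(-y * (X_adv @ w))).mean()
```

Adversarial training then minimizes this surrogate over $w$ in place of the clean loss.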
1 code implementation • ICLR 2021 • Marcel Neunhoeffer, Zhiwei Steven Wu, Cynthia Dwork
We also provide a non-private variant of PGB that improves the data quality of standard GAN training.
no code implementations • 20 Jun 2020 • Zhun Deng, Frances Ding, Cynthia Dwork, Rachel Hong, Giovanni Parmigiani, Prasad Patil, Pragya Sur
We study an adversarial loss function for $k$ domains and precisely characterize its limiting behavior as $k$ grows, formalizing and proving the intuition, backed by experiments, that observing data from a larger number of domains helps.
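Concretely, the adversarial loss here is the worst case over the $k$ per-domain risks, $\max_{j \in [k]} \mathbb{E}_{(x,y) \sim D_j}[\ell(f(x), y)]$; a minimal rendering with illustrative names:

```python
import numpy as np

def worst_case_risk(per_example_loss, domains):
    """Adversarial multi-domain loss: the maximum empirical risk over the
    k domains, where domains is a list of (X_k, y_k) sample arrays."""
    return max(per_example_loss(Xk, yk).mean() for Xk, yk in domains)
```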
no code implementations • 12 Apr 2020 • Cynthia Dwork, Christina Ilvento, Meena Jagadeesan
It is well understood that a system built from individually fair components may not itself be individually fair.
no code implementations • 4 Apr 2020 • Cynthia Dwork, Christina Ilvento, Guy N. Rothblum, Pragya Sur
Our principal conceptual result is an extraction procedure that learns the underlying truth; moreover, the procedure can learn an approximation to this truth given access to a weak form of the oracle.
no code implementations • 4 Jun 2019 • Zhun Deng, Cynthia Dwork, Jialiang Wang, Yao Zhao
We provide a general framework for characterizing the trade-off between accuracy and robustness in supervised learning.
no code implementations • 11 Jul 2018 • Cynthia Dwork, Weijie J. Su, Li Zhang
Differential privacy provides a rigorous framework for privacy-preserving data analysis.
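For reference, the framework's basic primitive is noise calibrated to query sensitivity; a minimal sketch of the Laplace mechanism for a counting query:

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Release true_value + Laplace(sensitivity / epsilon) noise, which
    satisfies epsilon-differential privacy.  For a counting query the
    sensitivity is 1: one record changes the count by at most 1."""
    rng = rng or np.random.default_rng()
    return true_value + rng.laplace(scale=sensitivity / epsilon)

private_count = laplace_mechanism(true_value=1234, sensitivity=1.0, epsilon=0.5)
```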
no code implementations • 15 Jun 2018 • Cynthia Dwork, Christina Ilvento
Algorithmic fairness, and in particular the fairness of scoring and classification algorithms, has become a topic of increasing social concern and has recently witnessed an explosion of research in theoretical computer science, machine learning, statistics, the social sciences, and law.
no code implementations • 27 Mar 2018 • Cynthia Dwork, Vitaly Feldman
We demonstrate that this overhead can be avoided for the well-studied class of thresholds on a line and for a number of standard settings of convex regression.
no code implementations • 20 Jul 2017 • Cynthia Dwork, Nicole Immorlica, Adam Tauman Kalai, Max Leiserson
When it is ethical and legal to use a sensitive attribute (such as gender or race) in machine learning systems, the question remains how to do so.
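One natural approach studied in this line of work is decoupling: train a separate classifier per group and route each instance to its group's model. A minimal sketch, with an illustrative fit_fn interface:

```python
import numpy as np

def fit_decoupled(X, y, groups, fit_fn):
    """Train one classifier per group; fit_fn(X_g, y_g) -> model with
    a .predict method (interface is illustrative)."""
    return {g: fit_fn(X[groups == g], y[groups == g]) for g in np.unique(groups)}

def predict_decoupled(models, X, groups):
    preds = np.empty(len(X))
    for g, model in models.items():
        mask = groups == g
        preds[mask] = model.predict(X[mask])   # route each instance to its group's model
    return preds
```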
no code implementations • 12 Nov 2015 • Cynthia Dwork, Weijie Su, Li Zhang
This destroys the classical proof of false discovery rate (FDR) control.
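For reference, the classical route to FDR control is the Benjamini-Hochberg (BHq) procedure; a standard, non-private sketch:

```python
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    """Classical BHq: sort the m p-values, find the largest k with
    p_(k) <= (k / m) * q, and reject the k smallest."""
    pvals = np.asarray(pvals)
    m = len(pvals)
    order = np.argsort(pvals)
    passed = np.nonzero(pvals[order] <= q * np.arange(1, m + 1) / m)[0]
    rejected = np.zeros(m, dtype=bool)
    if passed.size:
        rejected[order[:passed.max() + 1]] = True
    return rejected
```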
1 code implementation • NeurIPS 2015 • Cynthia Dwork, Vitaly Feldman, Moritz Hardt, Toniann Pitassi, Omer Reingold, Aaron Roth
We also formalize and address the general problem of data reuse in adaptive data analysis.
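A simplified sketch of the paper's reusable-holdout idea (Thresholdout): release the training estimate when it agrees with the holdout estimate up to a noisy threshold, and otherwise release a noise-perturbed holdout estimate. The parameter values below are illustrative.

```python
import numpy as np

def thresholdout(train_vals, holdout_vals, T=0.04, sigma=0.01, rng=None):
    """Answer one adaptive query phi from its per-record values on the
    training and holdout sets.  If the two empirical means agree up to a
    noisy threshold, the (overfittable) training mean is safe to release;
    otherwise release a noise-perturbed holdout mean."""
    rng = rng or np.random.default_rng()
    train_mean, holdout_mean = np.mean(train_vals), np.mean(holdout_vals)
    if abs(train_mean - holdout_mean) > T + rng.laplace(scale=2 * sigma):
        return holdout_mean + rng.laplace(scale=sigma)
    return train_mean
```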
no code implementations • 10 Nov 2014 • Cynthia Dwork, Vitaly Feldman, Moritz Hardt, Toniann Pitassi, Omer Reingold, Aaron Roth
We show that, surprisingly, there is a way to accurately estimate a number of expectations exponential in $n$, even when the functions are chosen adaptively.
1 code implementation • 1 May 2014 • Cynthia Dwork, Kunal Talwar, Abhradeep Thakurta, Li Zhang
We show that the well-known, but misnamed, randomized response algorithm, with properly tuned parameters, provides a nearly optimal additive quality gap compared to the best possible singular subspace of $A$.
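A sketch of the general input-perturbation recipe in this line of work (perturb the covariance, then eigendecompose); the noise calibration required for privacy is elided, and this is not the paper's tuned algorithm.

```python
import numpy as np

def noisy_top_k_subspace(A, k, noise_scale, rng=None):
    """Approximate the top-k singular subspace of A: perturb A^T A with a
    symmetric Gaussian matrix, then take the top-k eigenvectors.
    noise_scale must be calibrated to the privacy budget (elided here)."""
    rng = rng or np.random.default_rng()
    d = A.shape[1]
    E = rng.normal(0.0, noise_scale, (d, d))
    cov = A.T @ A + (E + E.T) / 2              # symmetrized noise
    _, eigvecs = np.linalg.eigh(cov)           # eigenvalues in ascending order
    return eigvecs[:, -k:]
```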
2 code implementations • ICML 2013 • Rich Zemel, Yu Wu, Kevin Swersky, Toni Pitassi, Cynthia Dwork
We propose a learning algorithm for fair classification that achieves both group fairness (the proportion of members in a protected group receiving positive classification is identical to the proportion in the population as a whole) and individual fairness (similar individuals are treated similarly).
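A minimal check for the group-fairness criterion stated above (demographic parity), with illustrative names:

```python
import numpy as np

def demographic_parity_gap(preds, protected):
    """Group fairness as stated above: the positive-classification rate in
    the protected group minus the rate in the population as a whole."""
    preds = np.asarray(preds, dtype=float)
    protected = np.asarray(protected, dtype=bool)
    return preds[protected].mean() - preds.mean()
```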