no code implementations • 12 Feb 2024 • Parikshit Gopalan, Lunjia Hu, Guy N. Rothblum
Projected smooth calibration gives strong guarantees for all downstream decision makers who want to use the predictor for binary classification problems of the form "does the label belong to a subset $T \subseteq [k]$?", e.g., is this an image of an animal?
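The induced binary problem can be illustrated with a small sketch (the subset $T$ and the probability vector below are illustrative, not from the paper): a $k$-class predictor yields, for any $T \subseteq [k]$, a binary predictor by summing the class probabilities in $T$.

```python
def subset_probability(p, T):
    """Probability that the label lies in subset T, given class distribution p."""
    return sum(p[j] for j in T)

# e.g. classes 0..3, where T = {0, 1} plays the role of "animal" labels
p = [0.5, 0.2, 0.2, 0.1]
print(subset_probability(p, {0, 1}))  # → 0.7
```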
no code implementations • 18 Mar 2022 • Guy N. Rothblum, Gal Yona
We formalize a natural (distribution-free) solution concept: given anticipated miscalibration of $\alpha$, we propose using the threshold $j$ that minimizes the worst-case regret over all $\alpha$-miscalibrated predictors, where the regret is the difference in clinical utility between using the threshold in question and using the optimal threshold in hindsight.
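The minimax idea can be sketched numerically under strong simplifying assumptions (this is a toy model, not the paper's exact setup): take a treat/don't-treat decision with illustrative utilities $b$ for treating a sick patient and $-c$ for treating a healthy one, let the true probability deviate from the predicted value by at most $\alpha$, and grid-search for the threshold whose worst-case regret is smallest.

```python
import numpy as np

def minimax_threshold(alpha, b=1.0, c=1.0, grid=101):
    """Toy sketch: threshold minimizing worst-case regret over
    alpha-miscalibrated predictors (b, c are illustrative utilities)."""
    vs = np.linspace(0, 1, grid)          # possible predicted values
    best_t, best_regret = None, float("inf")
    for t in vs:                          # candidate decision thresholds
        worst = 0.0
        for v in vs:
            # true probability q may deviate from the prediction v by up to alpha;
            # regret is convex in q, so it suffices to check the endpoints
            for q in (max(0.0, v - alpha), min(1.0, v + alpha)):
                u_treat = q * b - (1 - q) * c
                u_taken = u_treat if v >= t else 0.0   # utility under threshold t
                u_opt = max(u_treat, 0.0)              # optimal action in hindsight
                worst = max(worst, u_opt - u_taken)
        if worst < best_regret:
            best_t, best_regret = t, worst
    return best_t, best_regret
```

With $\alpha = 0$ (perfect calibration) this recovers the classical threshold $c/(b+c)$ with zero regret; as $\alpha$ grows, every threshold incurs positive worst-case regret.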
no code implementations • 2 Oct 2021 • Guy N. Rothblum, Gal Yona
The notion of "too much" is quantified via a parameter $\gamma$ that serves as a vehicle for specifying acceptable tradeoffs between accuracy and fairness, in a way that is independent from the specific metrics used to quantify fairness and accuracy in a given task.
no code implementations • 26 Nov 2020 • Cynthia Dwork, Michael P. Kim, Omer Reingold, Guy N. Rothblum, Gal Yona
Prediction algorithms assign numbers to individuals that are popularly understood as individual "probabilities" -- what is the probability of 5-year survival after cancer diagnosis?
no code implementations • 4 Apr 2020 • Cynthia Dwork, Christina Ilvento, Guy N. Rothblum, Pragya Sur
Our principal conceptual result is an extraction procedure that learns the underlying truth; moreover, the procedure can learn an approximation to this truth given access to a weak form of the oracle.
no code implementations • 3 Apr 2019 • Michael P. Kim, Aleksandra Korolova, Guy N. Rothblum, Gal Yona
We introduce and study a new notion of preference-informed individual fairness (PIIF) that is a relaxation of both individual fairness and envy-freeness.
no code implementations • NeurIPS 2018 • Michael P. Kim, Omer Reingold, Guy N. Rothblum
We study the problem of fair classification within the versatile framework of Dwork et al. [ITCS '12], which assumes the existence of a metric that measures similarity between pairs of individuals.
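The Dwork et al. condition is a Lipschitz constraint: the classifier's outputs on two individuals should differ by no more than their metric distance. A minimal sketch of checking that condition over a finite population (function and metric names here are illustrative):

```python
def is_individually_fair(f, metric, individuals):
    """Check |f(x) - f(y)| <= metric(x, y) for all pairs of individuals."""
    return all(
        abs(f(x) - f(y)) <= metric(x, y)
        for x in individuals for y in individuals
    )
```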
no code implementations • ICML 2018 • Guy N. Rothblum, Gal Yona
We show that approximate metric-fairness *does* generalize, and leverage these generalization guarantees to construct polynomial-time PACF learning algorithms for the classes of linear and logistic predictors.
1 code implementation • 22 Nov 2017 • Úrsula Hébert-Johnson, Michael P. Kim, Omer Reingold, Guy N. Rothblum
We develop and study multicalibration -- a new measure of algorithmic fairness that aims to mitigate concerns about discrimination introduced in the process of learning a predictor from data.
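The core "patching" idea can be sketched as follows (a simplified batch version for illustration, not the paper's exact algorithm): repeatedly find a subgroup and prediction bucket where the predictor's average error exceeds $\alpha$, and shift the predictions there until no violation remains.

```python
import numpy as np

def multicalibrate(p, y, groups, alpha=0.05, n_buckets=10, max_iter=1000):
    """Iteratively patch predictions p until, on every (group, bucket) cell,
    the mean label is within alpha of the mean prediction."""
    p = p.astype(float).copy()
    for _ in range(max_iter):
        updated = False
        for S in groups:                  # boolean masks for subgroups
            buckets = np.floor(p * n_buckets).clip(0, n_buckets - 1)
            for b in range(n_buckets):
                idx = S & (buckets == b)
                if idx.sum() == 0:
                    continue
                gap = y[idx].mean() - p[idx].mean()
                if abs(gap) > alpha:      # calibration violation on (S, b)
                    p[idx] = np.clip(p[idx] + gap, 0.0, 1.0)
                    updated = True
        if not updated:
            return p                      # alpha-multicalibrated w.r.t. groups
    return p
```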