no code implementations • 30 Jan 2024 • Venetia Pliatsika, Joao Fonseca, Tilun Wang, Julia Stoyanovich
Using ShaRP, we show that even when the scoring function used by an algorithmic ranker is known and linear, the weight of each feature does not correspond to its Shapley value contribution.
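To see the flavor of this result, here is a toy sketch (my own construction, not the ShaRP implementation): two features enter a linear scorer with equal weights, yet their exact Shapley contributions to a candidate's rank differ, because rank is a nonlinear function of score.

```python
# Toy sketch (not the ShaRP library): Shapley contributions of two features
# to a candidate's *rank* under a known linear scorer with equal weights.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(size=(1000, 2))           # background population, two features
w = np.array([0.5, 0.5])                  # equal weights in the linear scorer

def expected_rank(coalition, x, n_samples=500):
    """Expected rank (0 = best) when features in `coalition` are fixed to
    x's values and the remaining features are resampled from the data."""
    z = X[rng.integers(len(X), size=n_samples)].copy()
    if coalition:
        z[:, list(coalition)] = x[list(coalition)]
    return np.mean([(X @ w > zi @ w).sum() for zi in z])

x = np.array([0.9, 0.2])                  # candidate of interest
v = {c: expected_rank(c, x) for c in [(), (0,), (1,), (0, 1)]}
# Exact two-player Shapley values from the four coalition values.
phi0 = 0.5 * (v[(0,)] - v[()]) + 0.5 * (v[(0, 1)] - v[(1,)])
phi1 = 0.5 * (v[(1,)] - v[()]) + 0.5 * (v[(0, 1)] - v[(0,)])
print("weights:", w, "rank contributions:", phi0, phi1)
# Equal weights, yet unequal contributions to rank: rank depends on where
# the score falls in the distribution, so weights and Shapley values diverge.
```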
no code implementations • 29 Jan 2024 • Andrew Bell, Joao Fonseca, Carlo Abrate, Francesco Bonchi, Julia Stoyanovich
Building upon an agent-based framework for simulating recourse, this paper demonstrates how much effort is needed to overcome disparities in initial circumstances.
no code implementations • 25 Jan 2024 • Lucius E. J. Bynum, Joshua R. Loftus, Julia Stoyanovich
The traditional paradigm for counterfactual reasoning in this literature is the interventional counterfactual, where hypothetical interventions are imagined and simulated.
no code implementations • 18 Dec 2023 • Lucas Rosenblatt, Julia Stoyanovich, Christopher Musco
Our theoretical results center on the private mean estimation problem, while our empirical results demonstrate, through extensive experiments on private data synthesis, the effectiveness of stratification across a variety of private mechanisms.
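A minimal sketch of why stratification can help, under my own assumptions (Laplace mechanism, disjoint strata with publicly known sizes, hand-picked clipping bounds; this is not the paper's mechanism): homogeneous strata admit tight clipping bounds, and parallel composition lets each stratum spend the full budget.

```python
# Minimal sketch: eps-DP mean estimation with the Laplace mechanism,
# with and without stratification. Strata are disjoint, so each can spend
# the full budget (parallel composition); stratum weights 0.9/0.1 are
# assumed public.
import numpy as np

rng = np.random.default_rng(0)

def dp_mean(x, lo, hi, eps):
    """eps-DP mean of values clipped to [lo, hi] (Laplace mechanism)."""
    x = np.clip(x, lo, hi)
    sens = (hi - lo) / len(x)             # sensitivity of the clipped mean
    return x.mean() + rng.laplace(scale=sens / eps)

a = rng.normal(10, 1, size=9000)          # large, low-valued stratum
b = rng.normal(50, 1, size=1000)          # small, high-valued stratum
pop = np.concatenate([a, b])

eps = 0.1
flat = dp_mean(pop, 0, 60, eps)           # one wide clip range for everyone
strat = 0.9 * dp_mean(a, 5, 15, eps) + 0.1 * dp_mean(b, 45, 55, eps)
print(f"true {pop.mean():.2f}  flat {flat:.2f}  stratified {strat:.2f}")
```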
no code implementations • 13 Sep 2023 • Joao Fonseca, Andrew Bell, Carlo Abrate, Francesco Bonchi, Julia Stoyanovich
The literature on algorithmic recourse to date focuses primarily on how to provide recourse to a single individual, overlooking a critical element: the effects of a continuously changing context.
no code implementations • 17 Feb 2023 • Falaah Arif Khan, Julia Stoyanovich
In this paper we revisit the bias-variance decomposition of model error from the perspective of designing a fair classifier: we are motivated by the widely held socio-technical belief that noise variance in large datasets in social domains tracks demographic characteristics such as gender, race, disability, etc.
no code implementations • 13 Feb 2023 • Andrew Bell, Lucius Bynum, Nazarii Drushchak, Tetiana Herasymova, Lucas Rosenblatt, Julia Stoyanovich
The "impossibility theorem", considered foundational in the algorithmic fairness literature, asserts that there must be trade-offs between common notions of fairness and performance when fitting statistical models, except in two special cases: when the prevalence of the outcome being predicted is equal across groups, or when a perfectly accurate predictor is used.
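One version of this trade-off can be checked directly with Chouldechova's identity (my illustration; the identity is standard but not quoted from this paper): for a group with outcome prevalence p, FPR = p/(1-p) * (1-PPV)/PPV * (1-FNR).

```python
# If two groups have equal PPV and equal FNR but different prevalence p,
# the identity FPR = p/(1-p) * (1-PPV)/PPV * (1-FNR) forces unequal FPR.
ppv, fnr = 0.8, 0.2
for p in (0.3, 0.5):                      # prevalence of the outcome per group
    fpr = p / (1 - p) * (1 - ppv) / ppv * (1 - fnr)
    print(f"prevalence {p}: implied FPR {fpr:.3f}")
# Only equal prevalence across groups (or a perfect predictor, where
# FNR = FPR = 0) removes the conflict; these are exactly the two special
# cases noted above.
```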
no code implementations • 9 Feb 2023 • Falaah Arif Khan, Denys Herasymuk, Julia Stoyanovich
We demonstrate when group-wise statistical bias analysis gives an incomplete picture, and what group-wise variance analysis can tell us in settings that differ in the magnitude of statistical bias.
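A rough sketch of what group-wise variance analysis can look like (my construction, using scikit-learn; not the authors' tooling): bootstrap the training set, refit a model per replicate, and compare the variance of predictions across refits for each group.

```python
# Sketch: per-group prediction variance over bootstrap refits, in a setting
# where label noise tracks group membership.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
g = rng.integers(0, 2, n)                       # group membership, 0 or 1
X = np.column_stack([rng.normal(size=n), g.astype(float)])
y = (X[:, 0] + 0.5 * g + rng.normal(scale=1 + g, size=n) > 0).astype(int)
# noise scale is 1 for group 0 and 2 for group 1: noise tracks the group

preds = []
for _ in range(30):                             # bootstrap-and-refit loop
    idx = rng.integers(n, size=n)
    model = LogisticRegression().fit(X[idx], y[idx])
    preds.append(model.predict_proba(X)[:, 1])
var = np.array(preds).var(axis=0)               # per-point variance over refits
for grp in (0, 1):
    print(f"group {grp}: mean prediction variance {var[g == grp].mean():.4f}")
```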
no code implementations • 7 Dec 2022 • Lucius E. J. Bynum, Joshua R. Loftus, Julia Stoyanovich
Counterfactuals are often described as 'retrospective,' focusing on hypothetical alternatives to a realized past.
no code implementations • 6 Jul 2022 • Falaah Arif Khan, Eleni Manis, Julia Stoyanovich
In this work we use Equal Opportunity (EO) doctrines from political philosophy to make explicit the normative judgements embedded in different conceptions of algorithmic fairness.
no code implementations • 10 Jun 2022 • Andrew Bell, Oded Nov, Julia Stoyanovich
Governments around the world are increasingly proposing and passing laws to regulate Artificial Intelligence (AI) systems deployed in the public and private sectors.
no code implementations • 27 Apr 2022 • Lucas Rosenblatt, Joshua Allen, Julia Stoyanovich
Our methods are based on the insights that feature importance can inform how the privacy budget is allocated, and, further, that per-group feature importance and fairness-related performance objectives can be incorporated into the allocation.
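A hypothetical sketch of the budget-allocation insight (the feature names, importance scores, and histogram release below are my stand-ins): split a total budget epsilon across features in proportion to importance; by sequential composition over the same records, the per-feature budgets sum back to epsilon.

```python
# Hypothetical sketch: allocate privacy budget across features in proportion
# to externally computed feature importance, then release one noisy
# histogram per feature.
import numpy as np

rng = np.random.default_rng(0)

def dp_histogram(values, bins, eps):
    """eps-DP histogram: counts have L1 sensitivity 1, so Laplace(1/eps)."""
    counts, edges = np.histogram(values, bins=bins)
    return counts + rng.laplace(scale=1.0 / eps, size=counts.shape), edges

total_eps = 1.0
importance = {"income": 0.6, "age": 0.3, "zip_prefix": 0.1}  # assumed scores
data = {name: rng.normal(size=1000) for name in importance}

for name, share in importance.items():
    noisy_counts, _ = dp_histogram(data[name], bins=10, eps=total_eps * share)
    print(name, np.round(noisy_counts[:3], 1))
```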
no code implementations • 23 Jan 2022 • Alene K. Rhea, Kelsey Markey, Lauren D'Arinzo, Hilke Schellmann, Mona Sloane, Paul Squires, Falaah Arif Khan, Julia Stoyanovich
Our approach is to (a) develop a methodology for an external audit of stability of predictions made by algorithmic personality tests, and (b) instantiate this methodology in an audit of two systems, Humantic AI and Crystal.
1 code implementation • 1 Jul 2021 • Lucius E. J. Bynum, Joshua R. Loftus, Julia Stoyanovich
We develop a disaggregated approach to tackling pre-existing disparities that relaxes the typical set of assumptions required for the use of social categories in structural causal models.
no code implementations • 15 Jun 2021 • Falaah Arif Khan, Eleni Manis, Julia Stoyanovich
Through our EOP framework we hope to answer what it means for an ADS to be fair from a moral and political philosophy standpoint, and to pave the way for similar scholarship from ethics and legal experts.
no code implementations • 25 Mar 2021 • Meike Zehlike, Ke Yang, Julia Stoyanovich
In this survey, we describe four classification frameworks for fairness-enhancing interventions and situate the surveyed technical methods within them; we also discuss evaluation datasets and present technical work on fairness in score-based ranking.
no code implementations • ICLR Workshop Rethinking_ML_Papers 2021 • Falaah Arif Khan, Eleni Manis, Julia Stoyanovich
Recent interest in codifying fairness in Automated Decision Systems (ADS) has resulted in a wide range of formulations of what it means for an algorithm to be “fair.” Most of these propositions are inspired by, but inadequately grounded in, scholarship from political philosophy.
2 code implementations • 15 Jun 2020 • Ke Yang, Joshua R. Loftus, Julia Stoyanovich
In this paper we propose a causal modeling approach to intersectional fairness, and a flexible, task-specific method for computing intersectionally fair rankings.
no code implementations • 23 Dec 2019 • Julia Stoyanovich, Armanda Lewis
Recounting our own experience, and leveraging literature on pedagogical methods in data science and beyond, we propose the notion of an "object-to-interpret-with".
no code implementations • 28 Nov 2019 • Sebastian Schelter, Yuxuan He, Jatin Khilnani, Julia Stoyanovich
FairPrep is based on a developer-centered design, and helps data scientists follow best practices in software engineering and machine learning.
no code implementations • 4 Jun 2019 • Ke Yang, Vasilis Gkatzelis, Julia Stoyanovich
Many set selection and ranking algorithms have recently been enhanced with diversity constraints that aim to explicitly increase representation of historically disadvantaged populations, or to improve the overall representativeness of the selected set.
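For a concrete sense of what such a constraint can look like, here is a toy greedy sketch (mine, not the paper's algorithms): top-k selection by score, subject to a floor on how many selected items must come from each group.

```python
# Toy sketch: top-k selection by score, subject to per-group floors
# ("at least floors[g] of the selected items must come from group g").
def select_with_floors(items, k, floors):
    """items: list of (item_id, group, score) triples; floors: {group: min}."""
    ranked = sorted(items, key=lambda t: -t[2])   # best score first
    chosen = []
    for grp, floor in floors.items():             # satisfy each floor first
        chosen += [it for it in ranked if it[1] == grp][:floor]
    for it in ranked:                             # then fill the rest by score
        if len(chosen) >= k:
            break
        if it not in chosen:
            chosen.append(it)
    return chosen[:k]

items = [("u1", "A", 0.9), ("u2", "A", 0.8), ("u3", "A", 0.7),
         ("u4", "B", 0.5), ("u5", "B", 0.4)]
print(select_with_floors(items, k=3, floors={"B": 1}))
# -> u4 to meet group B's floor, then u1 and u2 by score.
```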
no code implementations • 10 May 2018 • Benny Kimelfeld, Phokion G. Kolaitis, Julia Stoyanovich
At the conceptual level, we give rigorous semantics to queries in this framework by introducing the notions of necessary answers and possible answers.
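A tiny brute-force illustration of these semantics (my toy example, not from the paper): under a partial preference order, an item is a necessary top-2 answer if it appears in the top 2 of every linear extension, and a possible answer if it appears in the top 2 of at least one.

```python
# Brute-force necessary/possible answers for a top-2 query over an
# uncertain (partial) preference order.
from itertools import permutations

items = ["a", "b", "c"]
partial = {("a", "b")}                    # all we know: a is preferred to b

def consistent(order):
    pos = {x: i for i, x in enumerate(order)}
    return all(pos[x] < pos[y] for x, y in partial)

extensions = [p for p in permutations(items) if consistent(p)]
top2 = [set(e[:2]) for e in extensions]
necessary = set.intersection(*top2)       # in the answer under every extension
possible = set.union(*top2)               # in the answer under some extension
print("necessary:", necessary, "possible:", possible)
# -> necessary: {'a'}  possible: {'a', 'b', 'c'}
```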