Search Results for author: Christoph Heitz

Found 8 papers, 4 papers with code

On Prediction-Modelers and Decision-Makers: Why Fairness Requires More Than a Fair Prediction Model

no code implementations • 9 Oct 2023 • Teresa Scantamburlo, Joachim Baumann, Christoph Heitz

We clarify the distinction between the concepts of prediction and decision and show the different ways in which these two elements influence the final fairness properties of a prediction-based decision system.

Decision Making · Fairness

Group Fairness in Prediction-Based Decision Making: From Moral Assessment to Implementation

1 code implementation • 19 Oct 2022 • Joachim Baumann, Christoph Heitz

In this paper, we present a step-by-step procedure integrating three elements: (a) a framework for the moral assessment of what fairness means in a given context, based on the recently proposed general principle of "Fair equality of chances" (FEC); (b) a mapping of the assessment's results to established statistical group fairness criteria; and (c) a method for integrating the thus-defined fairness into optimal decision making.

Decision Making · Fairness
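Step (b) of the procedure above maps a moral assessment onto statistical group fairness criteria. As a minimal illustrative sketch (not the authors' implementation; the function names and toy data are invented), two such criteria for a binary decision rule can be computed directly from decisions and labels per group:

```python
# Illustrative sketch of two standard statistical group fairness criteria
# for a binary decision rule; not the paper's code.

def selection_rate(decisions):
    """Fraction of individuals receiving the positive decision."""
    return sum(decisions) / len(decisions)

def statistical_parity_gap(decisions_a, decisions_b):
    """Independence: selection rates should match across groups."""
    return abs(selection_rate(decisions_a) - selection_rate(decisions_b))

def equal_opportunity_gap(decisions_a, labels_a, decisions_b, labels_b):
    """Separation on positives: true-positive rates should match."""
    def tpr(decisions, labels):
        decisions_on_positives = [d for d, y in zip(decisions, labels) if y == 1]
        return sum(decisions_on_positives) / len(decisions_on_positives)
    return abs(tpr(decisions_a, labels_a) - tpr(decisions_b, labels_b))

# Toy data: decisions and true labels for two demographic groups.
dec_a, lab_a = [1, 1, 0, 1], [1, 0, 0, 1]
dec_b, lab_b = [1, 0, 0, 0], [1, 1, 0, 0]
print(statistical_parity_gap(dec_a, dec_b))                  # 0.75 - 0.25 = 0.5
print(equal_opportunity_gap(dec_a, lab_a, dec_b, lab_b))     # 1.0 - 0.5 = 0.5
```

Which of these gaps should be driven to zero is exactly what the moral assessment in step (a) is meant to decide; the sketch only shows how the criteria are measured once chosen.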

Enforcing Group Fairness in Algorithmic Decision Making: Utility Maximization Under Sufficiency

1 code implementation • 5 Jun 2022 • Joachim Baumann, Anikó Hannák, Christoph Heitz

We show that group-specific threshold rules are optimal for PPV parity and FOR parity, similar to well-known results for other group fairness criteria.

Decision Making · Fairness
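The "group-specific threshold rule" idea can be sketched as follows (a hypothetical illustration, not the paper's released code; the helper names, target value, and toy scores are invented): each group receives its own score threshold, chosen so that the positive predictive value (PPV) among the selected individuals reaches a common target.

```python
# Hypothetical sketch of group-specific thresholds for PPV parity.

def ppv_at_threshold(scores, labels, t):
    """PPV = fraction of true positives among those selected (score >= t)."""
    selected_labels = [y for s, y in zip(scores, labels) if s >= t]
    return sum(selected_labels) / len(selected_labels) if selected_labels else None

def threshold_for_target_ppv(scores, labels, target, candidates):
    """Lowest candidate threshold whose PPV reaches the target (None if none does)."""
    for t in sorted(candidates):
        ppv = ppv_at_threshold(scores, labels, t)
        if ppv is not None and ppv >= target:
            return t
    return None

# Toy data: two groups with different score distributions.
scores_a, labels_a = [0.9, 0.8, 0.6, 0.4], [1, 1, 0, 0]
scores_b, labels_b = [0.7, 0.5, 0.3, 0.2], [1, 0, 1, 0]

t_a = threshold_for_target_ppv(scores_a, labels_a, 0.66, scores_a)  # 0.6
t_b = threshold_for_target_ppv(scores_b, labels_b, 0.66, scores_b)  # 0.3
print(t_a, t_b)  # different thresholds, both yielding PPV = 2/3
```

Note that the two groups end up with different thresholds; equalizing PPV (or FOR) across groups generally requires exactly this kind of group-dependent cutoff rather than a single shared one.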

Is calibration a fairness requirement? An argument from the point of view of moral philosophy and decision theory

no code implementations • 11 May 2022 • Michele Loi, Christoph Heitz

For our paper, we equate fairness with (non-)discrimination, which is a legitimate understanding in the discussion about group fairness.

Fairness · Philosophy

A Systematic Approach to Group Fairness in Automated Decision Making

no code implementations • 9 Sep 2021 • Corinna Hertweck, Christoph Heitz

While the field of algorithmic fairness has brought forth many ways to measure and improve the fairness of machine learning models, these findings are still not widely used in practice.

Decision Making · Fairness · +1

On the Moral Justification of Statistical Parity

no code implementations • 4 Nov 2020 • Corinna Hertweck, Christoph Heitz, Michele Loi

This means that the question of whether independence should be used or not cannot be satisfactorily answered by only considering the justness of differences in the predictive features.

Fairness
