Munich Center for Mathematical Philosophy (MCMP)

Talk (Work in Progress): Michele Loi (Zürich)

Location: Ludwigstr. 31, ground floor, Room 021.

25.07.2019, 12:00 – 14:00

Title:

An ethical theory of non-deterministic fairness for prediction-based decisions (with Hoda Heidari and Anders Herlitz)

Abstract:

In October 2017, an op-ed in the New York Times brought the public's attention to a risk assessment algorithm called COMPAS. COMPAS had been widely used in the criminal justice system to assess the risk that defendants would re-offend, assessments which provided grounds for parole decisions. The op-ed argued that COMPAS was racially biased, drawing on a study by investigative journalists at ProPublica which claimed that black defendants who did not recidivate were nearly twice as likely as their white counterparts to be classified by COMPAS as high-risk. Northpointe, the company that developed COMPAS, responded to the accusations and claimed that its product was not biased because it was “well-calibrated”: there were no disparities in how successful COMPAS was at correctly identifying black and white recidivists. More precisely, when COMPAS identified a black person as a future recidivist, it was as likely to have made a correct guess as it would have been if the person with the same prediction had been white. Northpointe claimed that this shows that COMPAS is not biased and that the disparities ProPublica identified are irrelevant. Computer scientists have shown that these two standards cannot be satisfied together except in a narrow and extremely rare range of circumstances, and they treat this as a genuine fairness dilemma. The dilemma is of course not confined to predictions of recidivism, but arises in every context where statistical methods are used to predict the distribution of some property.

Our goal in the paper is to gain a deeper understanding of the kind of “statistical bias” exemplified by cases like the COMPAS algorithm. The paper introduces different fairness criteria that might be proposed in the context of algorithmic predictions and uses these to present the fairness dilemma in more detail. We introduce a general framework for thinking about algorithmic fairness that is neutral with respect to moral theories of fairness/justice and can be used to express different views of what justifies unequal distributions of goods (e.g. responsibility, merit, need), and we illustrate how the proposed framework provides a way of mapping moral theories onto statistical fairness criteria.
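
To make the dilemma concrete, here is a minimal numeric sketch in Python (not part of the talk; the group names and numbers are hypothetical). It uses Chouldechova's (2017) identity relating a classifier's false-positive rate (FPR) to the group's base rate p, the positive predictive value (PPV), and the false-negative rate (FNR): FPR = p/(1-p) · (1-PPV)/PPV · (1-FNR).

    def fpr_from_calibration(base_rate, ppv, fnr):
        """False-positive rate implied by Chouldechova's identity:
        FPR = p/(1-p) * (1-PPV)/PPV * (1-FNR)."""
        return (base_rate / (1 - base_rate)) * ((1 - ppv) / ppv) * (1 - fnr)

    # Suppose the classifier is "well-calibrated" in Northpointe's sense:
    # the same PPV (and, in this toy case, the same FNR) for both groups.
    ppv, fnr = 0.6, 0.3

    # Hypothetical groups with different base rates of recidivism.
    for group, base_rate in [("group A", 0.5), ("group B", 0.3)]:
        print(group, "FPR =", round(fpr_from_calibration(base_rate, ppv, fnr), 3))

    # Prints: group A FPR = 0.467, group B FPR = 0.2.
    # Equal PPV and FNR combined with unequal base rates force unequal
    # false-positive rates, the kind of disparity ProPublica reported.

Since real groups rarely share base rates, calibration and error-rate parity can hold together only in degenerate cases; this is the narrow range of circumstances mentioned above.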