Zoom Talk (Work in Progress): Sander Beckers (MCMP)
Please contact firstname.lastname@example.org for the password.
Reflections on the causal approach to Fair AI
Automated decision-making in morally sensitive domains is spreading rapidly throughout different sectors of society. By using massive amounts of data, machine learning algorithms train a model to optimally predict a decision variable as a function of observed features. This outsourcing of our decision-making to an artificial agent opens up a significant challenge: how do we ensure that AI decisions are fair? In many cases the law prohibits discrimination on the basis of so-called sensitive features of an individual. However, it fails to offer a formal account of what a discriminatory effect might be, and thus we currently have no reliable way of assessing whether discrimination by AI systems occurs. Recently this challenge has been addressed by defining various notions of counterfactual fairness, which improve upon purely statistical measures of fairness by explicitly incorporating the causal structure of the prediction model.
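For reference, the best-known such notion is counterfactual fairness in the sense of Kusner et al. (2017). In the notation of structural causal models (with sensitive attribute $A$, observed features $X$, exogenous variables $U$, and predictor $\hat{Y}$), it requires that the predictor's distribution be unchanged under counterfactual interventions on the sensitive attribute:

```latex
P\bigl(\hat{Y}_{A \leftarrow a}(U) = y \mid X = x,\, A = a\bigr)
  \;=\;
P\bigl(\hat{Y}_{A \leftarrow a'}(U) = y \mid X = x,\, A = a\bigr)
\quad \text{for all } y \text{ and all attainable } a'.
```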
In this talk I will identify several flaws with these causal approaches to fairness, and offer suggestions on how to fix them. First, these notions of fairness are not grounded in a legal understanding of discrimination but are justified through flawed examples and intuitions. Second, no attention has been paid to a form of discrimination that is particularly pernicious in this context because the very design of AI systems creates an incentive for it: unintentional proxy discrimination. Third, the restriction to probabilistic measures overlooks cases of unfairness through actual causation.
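To make the proxy worry concrete, here is a minimal, purely illustrative sketch (all variable names and probabilities are invented for this example, not drawn from the talk): a decision rule that never reads the sensitive attribute can still produce sharply disparate outcomes when an observed feature is correlated with it.

```python
import random

random.seed(0)

# Synthetic population: the sensitive attribute `group` is never shown
# to the decision rule, but the observable feature `zip_code` is
# strongly correlated with it.
def make_person():
    group = random.choice(["A", "B"])
    p_high = 0.9 if group == "A" else 0.1  # group influences neighborhood
    zip_code = "high_income" if random.random() < p_high else "low_income"
    return {"group": group, "zip_code": zip_code}

population = [make_person() for _ in range(10_000)]

# A "group-blind" decision rule that only looks at the proxy feature.
def approve(person):
    return person["zip_code"] == "high_income"

def approval_rate(group):
    members = [p for p in population if p["group"] == group]
    return sum(approve(p) for p in members) / len(members)

rate_a = approval_rate("A")
rate_b = approval_rate("B")
# Despite never reading `group`, the rule discriminates through the proxy.
print(f"approval rate A: {rate_a:.2f}, B: {rate_b:.2f}")
```

No intent is needed here: the disparity arises purely from optimizing on whatever observed features predict the target, which is exactly the incentive the abstract points to.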