Munich Center for Mathematical Philosophy (MCMP)

Mini-Workshop on Policy-Relevant Philosophy of Science

Location: Ludwigstr. 31, ground floor, Room 021.

05.02.2025, 16:00–20:00

Idea & Motivation:

This mini-workshop is a first step toward discussing our work in progress on various policy-relevant issues in the philosophy of science and on philosophical analyses of policy-relevant sciences (including health and climate science, the social sciences and economics, and perhaps also computer science and AI). The purpose of the workshop is to bring together philosophers (of science) at the MCMP and LMU to explore the potential of such issues and to exchange ideas.

Speakers:

Ina Jäntgen (MCMP, LMU Munich)
Alexander Reutlinger (MCMP, LMU Munich)
Gabriel Tarziu (MCMP, LMU Munich)

Venue:

LMU Munich
Ludwigstr. 31
Room 021

Program:

16:00-16:50 Ina Jäntgen: “A Distributive Injustice Challenge of Effect Size Measurement in Applied Science (Or: How Should We Measure Effect Sizes in a Way That is Just?)”
17:00-17:50 Alexander Reutlinger: “Why Strategic Science Skeptics Are Not ‘Modern-Day Galileos’. Debunking the Argument from Scientific Revolutions”
18:00-18:50 Gabriel Tarziu: “Climate Services and Climate Policies: How Different Is (Should) Decision-Support Science (Be) from Regular Science?”

Organizers:

Ina Jäntgen (MCMP, LMU Munich)
Alexander Reutlinger (MCMP, LMU Munich)
Gabriel Tarziu (MCMP, LMU Munich)

Abstracts

Ina Jäntgen: “A Distributive Injustice Challenge of Effect Size Measurement in Applied Science (Or: How Should We Measure Effect Sizes in a Way that is Just?)”

To achieve our goals effectively, we often turn to science for advice. And scientists across many fields—from social science to medicine—test the effectiveness of interventions at achieving our goals, for example, the effectiveness of treatments at curing diseases. When doing so, scientists measure and report how effective these interventions are using effect size measures. The resulting effect sizes are often used in evidence-based decision-making, ranging from clinical decisions to policymaking. In this talk, I argue that the practice of relying on effect sizes to inform decision-making faces a challenge of distributive injustice. Scientists could either report the effect sizes of interventions to decision-makers, summarising only some information about the probability distributions observed in trials testing these interventions; or they could report all available information about these probability distributions. But both reporting practices plausibly often leave some decision-makers worse off and others better off. On the one hand, effect sizes omit available information about the observed probability distributions that can be vital for rational decision-makers. And, crucially, whether observed effect sizes inform a person’s rational decision-making well differs between people. Reporting all available information about the probability distributions would inform everyone’s rational decision-making equally well. But, on the other hand, reporting all available information rather than just effect sizes would often be more demanding for scientists and decision-makers for methodological (e.g., Colnet et al., 2024) and cognitive reasons (e.g., Spiegelhalter, 2017). As a result, those people who would be well informed by effect sizes would be left worse off when learning more than they care about learning. Both considerations suggest that when scientists report effect sizes to decision-makers, they often leave some people better off and some worse off – while reporting all available probabilistic information to people would plausibly improve the situation for some and worsen it for others. But how ought scientists then trade off some people's interests against others' interests in a just way when reporting how effective tested interventions are? I suggest tentative avenues for addressing this challenge, drawing on the emerging literature on distributive justice of epistemic goods in science (e.g., Irzik and Kurtulmus, forthcoming; Thoma, 2024).

Alexander Reutlinger: “Why Strategic Science Skeptics Are Not ‘Modern-Day Galileos’. Debunking the Argument from Scientific Revolutions”

Strategic science skeptics criticize scientific claims solely to promote non-epistemic (for instance, political and economic) goals. Strategic science skeptics present arguments to support their criticisms of scientific claims. In this paper, I will analyze and debunk a neglected argument exploited by strategic science skeptics: the argument from scientific revolutions (also known as the “Galileo gambit”). In making this argument, strategic science skeptics liken themselves to key figures in scientific revolutions throughout the history of science – in particular, Galileo Galilei (Mann 2012). I will suggest that providing information as to why skeptical arguments are flawed is an instance of policy-relevant philosophy of science.

Gabriel Tarziu: “Climate Services and Climate Policies: How Different Is (Should) Decision-Support Science (Be) from Regular Science?”

Climate action requires usable climate change information to support adaptation and mitigation efforts. However, the state-of-the-art tools used for research in climate science, such as Earth system and (coupled) general circulation models, are not well-suited to provide information about the regional or local impacts of climate change. This creates what is referred to in the literature as a 'usability gap' in climate science. This gap has led to the emergence of climate services, a type of decision-support science (Vezér et al., 2018) aimed at producing information specifically tailored to assist decision-makers. One of the biggest challenges associated with climate services is determining what counts as good usable information. This issue can be addressed from different perspectives. From a political philosophy perspective, usable information must be just. From an ethical perspective, it must align with a set of ethical responsibilities. From an economic perspective, it must help determine the costs and benefits of potential actions and their impact on societal welfare. Many have recently argued that these usability-related concerns should shape the way knowledge is produced in this context. Most importantly, it is argued that the value systems of science users (their ethical, political, economic, and cultural values) should become an integral part of scientific practice. So, from this perspective, it is incumbent upon climate services to take into account, in the process of producing knowledge about the impacts of anthropogenic climate change, "the knowledge, experience, and values of 'users,' 'stakeholders,' and indigenous communities" (Coen and Sobel, 2022). In this talk, I will critically examine this conclusion, exploring whether and to what extent climate services should diverge from the norms and practices of traditional science.