Munich Center for Mathematical Philosophy (MCMP)

Workshop: Inferentialism, Bayesianism, and Scientific Explanation (25 - 26 January 2017)

Idea and Motivation

What makes a given explanation successful? Many philosophers of science have tried to answer this question, but there is no consensus answer. In this workshop, we will assess the prospects of taking a novel approach to answering this question. Specifically, we will discuss whether and how an inferentialist account of explanation can be combined with Bayesian resources to deliver an adequate account of scientific explanation. This involves assessing not only whether the inferentialist can capture aspects of explanation that are often thought to resist Bayesian treatment (e.g., Inference to the Best Explanation and the asymmetry of explanation), but also whether inferentialism avoids problems that are thought to plague ontic accounts of explanation (e.g., an untenable insensitivity to contextual and pragmatic factors). Since it may not be entirely clear what the commitments of the inferentialist are in the context of scientific explanation, we likewise hope to consider what exactly it means to be an inferentialist about explanation.

Organization

The workshop is organized by Lorenzo Casini (Geneva/MCMP), Stephan Hartmann (LMU/MCMP), Reuben Stern (LMU/MCMP) and Marcel Weber (Geneva).

Speakers

  • Lorenzo Casini
  • Igor Douven
  • Ben Eva
  • Julian Reiss
  • Alexander Reutlinger
  • Jan Sprenger
  • Reuben Stern
  • Jon Williamson

Program

Day 1 (25 January 2017)

Time | Event
09:15 - 09:45 Registration
09:45 - 10:00 Welcome and Opening
10:00 - 11:00 Jon Williamson: "Inferentialism and Causal Explanation"
11:00 - 12:00 Julian Reiss: "The Goodness of a Causal Explanation"
12:00 - 13:30 Lunch
13:30 - 14:30 Reuben Stern: "Causation, Explanation, and Context"
14:30 - 15:30 Ben Eva and Reuben Stern: "Causal Explanatory Strength"
15:30 - 16:00 Coffee Break
16:00 - 17:00 Jan Sprenger: "Stalnaker's Hypothesis: A Causal Account"
17:00 - 18:00 Lorenzo Casini and Radin Dardashti: "Confirmation by Robustness Analysis: A Bayesian Account"
20:00 Conference Dinner

Day 2 (26 January 2017)

Time | Event
10:00 - 11:00 Alexander Reutlinger: "Is the Counterfactual Theory of Explanation an Epistemic Account?"
11:00 - 12:00 Carsten Held: "The Counterfactual Theory of Explanation and Explanatory Asymmetry"
12:00 - 13:30 Lunch
13:30 - 14:30 Jared Millson, Kareem Khalifa, and Mark Risjord: "Explanatory Asymmetry and Inferentialist Expressivism"
14:30 - 15:30 Borut Trpin and Max Pellert: "Inference to the Best Explanation in Cases of Uncertain Evidence"
15:30 - 16:00 Coffee Break
16:00 - 17:00 Igor Douven: "Inference to the Best Explanation and the Relevance of the Closest Competitor"

Abstracts

Lorenzo Casini and Radin Dardashti: Confirmation by Robustness Analysis: A Bayesian Account

In recent decades, philosophers have claimed time and again that theoretical explorations of models have limited epistemic value, insofar as they provide no empirical evidence but merely study the consequences of the models' assumptions (Hausman, 1992; Guala, 2002; Grüne-Yanoff, 2009; Fumagalli, 2015, 2016). In particular, the confirmatory role of robustness analysis (RA), which is an important form of theoretical exploration, has been the object of an intense debate, with some expressing positive views (Levins, 1966; Wimsatt, 1981, 1987; Weisberg, 2006; Kuorikoski et al., 2010, 2012) and others being more disenchanted (Orzack and Sober, 1993; Sugden, 2000, 2009; Odenbaugh and Alexandrova, 2011; Lisciandra, 2016). Here, we systematically reassess these views by rationalizing the confirmatory role of RA in a Bayesian framework. We illustrate our argument with a case of confirmation of an explanatory hypothesis in macroeconomics. By shedding light on the conditions under which RA is confirmatory, our argument helps clarify the potential of theoretical explorations for empirical research.
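
For orientation, here is the generic Bayesian notion of confirmation such an account presupposes (a standard definition, not the authors' specific model): a robustness report R confirms a hypothesis H just in case it raises H's probability,

\[ p(H \mid R) > p(H), \]

or equivalently (given a nonextreme prior for H), just in case R is more likely under H than under its negation, p(R \mid H) > p(R \mid \neg H).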

Igor Douven: Inference to the Best Explanation and the Relevance of the Closest Competitor

Many textbooks present IBE as the rule according to which we are warranted in believing whichever hypothesis explains the available evidence best. But, as various authors have pointed out, this formulation is too crude in that it fails to mention the explanatory quality of the best explanation: if the best explanation is a poor one, then the fact that all rival explanations are poorer still probably provides little warrant for believing the best. Here we want to draw attention to another problematic aspect of the textbook version of IBE, to wit, that it makes no reference to the explanatory goodness of the second best explanation; in particular, it ignores the question of how much better the best explanation is than its closest competitor. We argue that this question may matter to what we can infer from the evidence. We will also present experimental results suggesting that the question does in fact matter to what people are willing to infer.
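
As a minimal illustration of the talk's point (a hypothetical sketch, not the author's proposal), the following Python snippet implements a version of IBE that suspends judgment both when the best explanation is itself poor and when its margin over the closest competitor is too small; the explanatory scores, quality floor, and margin are made-up values.

    # Hypothetical sketch: IBE sensitive to the closest competitor.
    # Scores, quality floor, and margin are illustrative values only.
    def infer_best_explanation(scores, quality_floor=0.5, margin=0.2):
        """Return the best hypothesis only if it is good enough in itself
        and sufficiently better than its closest competitor."""
        ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
        (best, best_score), (_, second_score) = ranked[0], ranked[1]
        if best_score < quality_floor:
            return None  # the best explanation is itself poor
        if best_score - second_score < margin:
            return None  # too close to the runner-up to warrant belief
        return best

    # A clear winner may be inferred ...
    print(infer_best_explanation({"H1": 0.8, "H2": 0.4, "H3": 0.1}))   # H1
    # ... but a near-tie with the closest competitor warrants suspension.
    print(infer_best_explanation({"H1": 0.62, "H2": 0.58, "H3": 0.1}))  # None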

Ben Eva and Reuben Stern: Causal Explanatory Strength

Schupbach and Sprenger (2011) introduced a novel probabilistic approach to measuring the explanatory strength that a given explanans exerts over a corresponding explanandum. We show that the measure obtained by Schupbach and Sprenger gives incorrect results for distinctively causal explanations, and go on to define an alternative measure of explanatory strength that is better able to model the strength of causal explanations. This alternative approach relies crucially on Pearl's notion of an 'intervention' and suggests the existence of both an ontic and an epistemic component of explanatory power.
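
For reference, the Schupbach–Sprenger (2011) measure of explanatory power mentioned above is standardly stated as

\[ \mathcal{E}(e, h) = \frac{p(h \mid e) - p(h \mid \neg e)}{p(h \mid e) + p(h \mid \neg e)}, \]

where h is the explanans and e the explanandum; it takes values in [-1, 1], with positive values indicating that h makes e less surprising.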

Carsten Held: The Counterfactual Theory of Explanation and Explanatory Asymmetry

Any promising account of explanation must accommodate non-causal explanations. A recent candidate is the Counterfactual Theory of Explanation (Woodward 2003 and recently Reutlinger 2016). According to this theory, in a non-statistical explanation the explanans (consisting of generalizations G1, ..., Gm and auxiliary statements S1, ..., Sn) logically entails the explanandum E, and G1, ..., Gm support a counterfactual of the form: if S1, ..., Sn had been different, then E would have been different. This account is problematic, however, because it does not capture the explanatory asymmetry in non-causal explanations. E.g., in the famous flagpole example, the laws and auxiliary statements make the flagpole's height and the shadow's length interdeducible and support counterfactuals both ways – although we think that the flagpole's height explains the shadow's length, but not vice versa. Similarly, in Euler's explanation, the definitions of an Eulerian graph and an Euler path through it make a certain non-Eulerian graph (isomorphic to a certain map of Königsberg) and the non-existence of an Euler path through the graph interdeducible and support counterfactuals both ways – although we think that the graph explains the non-existence of the path, but not vice versa. In causal explanations, the symmetry can be broken by requiring that the relevant counterfactuals are not backtrackers. I discuss whether a parallel strategy is available for non-causal explanations, i.e. whether we can clarify what backtracking counterfactuals are in the non-causal context.

Jared Millson, Kareem Khalifa, and Mark Risjord: Explanatory Asymmetry and Inferentialist Expressivism

Whether inference to the best explanation (IBE) can be reduced to other forms of ampliative inference depends, at least in part, on an account of explanation. Unfortunately, none of the extant theories of explanation provides the basis for IBE. The views of explanation that link it closely to inference, such as Hempel's deductive-nomological model, face daunting and unmet challenges. Notoriously, they fail to capture the idea that, in general, if A explains B, then B does not explain A (the symmetry problem). Causal accounts of explanation break the link between explanation and inference, and the pragmatic views are skeptical about IBE. We offer an alternative view: the essential discursive role of explanatory vocabulary is to make explicit subjunctively robust, nonmonotonic material inferences. Our guiding insight is that explanatory inferences are those materially correct inferences which remain good under more suppositions (typically expressed in the subjunctive mood) than any other material inferences that share the same conclusion. Treating explanatory vocabulary as expressive of the material inferences meeting this condition permits us to analyze "best explains" in terms of the conditions under which the expression may be introduced and eliminated. IBE emerges as the inference rule whereby we may detach entitlement to the explanandum from entitlement to the explanation. In this paper, we will sketch the central formal results of our project and the account of explanation made possible by them. By way of showing the strength of the view, we will show how it resolves the symmetry problem.

Julian Reiss: The Goodness of a Causal Explanation

This paper addresses the question of what makes a causal explanation a good one. An explanation is a contextual material inference, that is, an inference whose validity is driven by facts about the content of the premisses from which it is made and the context within which it is made, rather than by the form in which it is made. Consequently, the material content and the context of the inference are the places to look for clues about the good-making features of the explanation. For concreteness, the paper will focus on causal explanations in the biomedical and social sciences.

Alexander Reutlinger: Is the Counterfactual Theory of Explanation an Epistemic Account?

In recent work, I defend a counterfactual theory of explanation (CTE) that is intended to capture both causal and non-causal explanations in science (Reutlinger forthcoming a, b, c). The core idea of the CTE is that causal and non-causal explanations are explanatory by virtue of revealing counterfactual dependencies of the explanandum on certain factors cited in the explanans. The central question of my talk is: what kind of an account of explanation is the CTE? Salmon introduced three ways of characterizing theories of explanation: ontic, modal and epistemic theories (also Craver 2013; Lange 2013, forthcoming; Woodward 2003, 2015; Reutlinger forthcoming d). I will argue that the CTE is neither ontic nor modal. Instead, I think that the CTE is best understood as an epistemic account. The CTE is epistemic because (a) it relies on deductive and probabilistic inferences, and because (b) explanation is a context-sensitive matter. Is the CTE also an inferentialist account? I suspect that this is not necessarily so, if inferentialism about explanation entails an inferentialist theory of meaning. In particular, inferentialists portray the law statements figuring in an explanation as inference-tickets without truth-values. I want to resist this move and maintain that law statements (and other assumptions in the explanans) are required to be approximately true – at least in the case of how-actually explanations.

Jan Sprenger: Stalnaker's Hypothesis: A Causal Account

Recent research in epistemology and the psychology of reasoning suggests that indicative conditionals express, in the first place, an inferential connection between antecedent and consequent (e.g., Douven, 2016; Skovgaard-Olsen et al., 2016). Taking this position seriously has consequences for Stalnaker's Hypothesis (SH), according to which the probability of a conditional is equal to the conditional probability: p(A → C) = p(C|A). While SH is often discarded on grounds of Lewis' (1976) triviality arguments, Douven and Verbrugge (2013) show that SH agrees well with the judgments of ordinary reasoners. It is rather the Generalized Stalnaker's Hypothesis (GSH), p(A → C|X) = p(C|A,X) for any proposition X, which licenses Lewis' triviality proof and fails to be supported empirically. In this paper, I investigate which conditions X has to satisfy in order to escape Lewis' triviality arguments. Those conditions are spelled out in terms of the causal structure between A, C, and X, and the commitments which knowledge of X imposes on our inferences with A and C. Finally, it will be tested experimentally whether causal structure and inferential commitments can specify the conditions under which GSH is valid.
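
To see why GSH, rather than SH alone, drives the triviality result, here is the standard textbook reconstruction of Lewis' (1976) argument (included for orientation; it is not taken from the talk). Assume GSH and that p(A ∧ C) and p(A ∧ ¬C) are both positive. Expanding by the law of total probability with X = C and X = ¬C:

\[
p(A \to C) = p(A \to C \mid C)\,p(C) + p(A \to C \mid \neg C)\,p(\neg C)
           = p(C \mid A, C)\,p(C) + p(C \mid A, \neg C)\,p(\neg C)
           = 1 \cdot p(C) + 0 \cdot p(\neg C)
           = p(C).
\]

Combined with SH, this yields p(C|A) = p(C) for all such A and C, i.e., probabilistic independence across the board, which is absurd.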

Reuben Stern: Causation, Explanation, and Context

In this talk, I hope to clarify the relation between causation and causal explanation. At first blush, it may seem reasonable to think that X causally explains Y if and only if X causes Y. But this view is not tenable if causal relevance is context-invariant (as realists think) while causal explanatory relevance varies with context (as some examples suggest). My aim in this talk is to use graphical causal models to develop a realist (context-invariant) analysis of causal relevance that is likewise a necessary but not sufficient condition for causal explanatory relevance. Then, I consider what other conditions must be satisfied in order for X to causally explain Y in some knowledge context K. The resulting account of causal explanation is ontic in the sense that X causally explains Y (in any context) only if X causes Y, but also inferentialist in the sense that whether X causally explains Y in K depends on whether, in K, one can justifiably infer Y from the fact that one intervenes to bring about X.

Borut Trpin and Max Pellert: Inference to the Best Explanation in Cases of Uncertain Evidence

A probabilistic version of inference to the best explanation (IBE) has recently been proposed as an alternative to Bayesian conditionalisation (Douven, 2013). While offering practical advantages in some contexts, IBE does not account for cases in which the evidence that agents are updating on is only partially certain. To address this shortcoming, we formulated IBE*, a generalisation of the IBE updating rule. We developed a computer simulation to test the performance of IBE* against a Bayesian updater (updating by Jeffrey's rule). In one scenario, the agents try to detect the bias of a coin in an environment in which the (observed) outcomes of the tosses are not fully certain. We found that both IBE* and the Bayesian updater often fail to detect the true bias, but the Bayesian updater much more frequently assigns high probability to a false bias. This suggests that IBE* updating offers advantages not only from a practical viewpoint (e.g. concerning speed) but also on general grounds: its adoption helps prevent an agent from assigning high subjective probability to false hypotheses (thereby limiting false positives).
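
The following is a minimal sketch of the kind of simulation the abstract describes, under explicitly stated assumptions: the candidate biases, the reliability parameter, and the bonus-based approximation of IBE* (in the spirit of Douven 2013) are all our own illustrative choices, not the authors' specification.

    import random

    # Candidate biases (probability of heads); agents start from a uniform prior.
    BIASES = [0.1, 0.3, 0.5, 0.7, 0.9]

    def normalise(dist):
        s = sum(dist)
        return [x / s for x in dist]

    def jeffrey_update(prior, q_heads):
        # Posteriors given heads / given tails, by Bayes' rule over the biases.
        post_h = normalise([p * b for p, b in zip(prior, BIASES)])
        post_t = normalise([p * (1 - b) for p, b in zip(prior, BIASES)])
        # Jeffrey's rule: mix the two posteriors by the credence in each outcome.
        return [q_heads * h + (1 - q_heads) * t for h, t in zip(post_h, post_t)]

    def ibe_star_update(prior, q_heads, bonus=0.1):
        # Assumed IBE*-style rule (our gloss, not the authors' definition):
        # do a Jeffrey update, then award a small bonus to the bias hypothesis
        # that best explains the more probable outcome, and renormalise.
        post = jeffrey_update(prior, q_heads)
        likelihoods = [b if q_heads >= 0.5 else 1 - b for b in BIASES]
        post[likelihoods.index(max(likelihoods))] += bonus
        return normalise(post)

    def run(true_bias=0.7, reliability=0.8, n_tosses=200, seed=1):
        random.seed(seed)
        n = len(BIASES)
        bayes, ibe = [1 / n] * n, [1 / n] * n
        for _ in range(n_tosses):
            heads = random.random() < true_bias
            # Each toss is reported correctly only with probability `reliability`.
            reported = heads if random.random() < reliability else not heads
            # Simplification: credence that the toss was heads equals the
            # reliability of the report (symmetric noise, no prior pooling).
            q = reliability if reported else 1 - reliability
            bayes = jeffrey_update(bayes, q)
            ibe = ibe_star_update(ibe, q)
        return bayes, ibe

    bayes, ibe = run()
    for name, dist in (("Jeffrey", bayes), ("IBE*", ibe)):
        print(name, [round(p, 3) for p in dist])

Comparing the two final distributions over many seeds and noise levels is then a matter of wrapping run() in a loop and counting how often each agent concentrates probability on a false bias.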

Jon Williamson: Inferentialism and Causal Explanation

In this talk I will discuss the question of how causal explanations can be genuinely explanatory. In part, this depends on how causality itself is understood: different accounts of causality can yield causal explanations that differ in how explanatory they are. I will focus on an inferentialist account of causality and causal explanation and will argue that there are important limitations to such an account. I will end by discussing an approach that has much in common with the inferentialist position but which yields a better account of causal explanation.

Venue

Internationales Begegnungszentrum der Wissenschaft München e.V.
Amalienstraße 38
80799 München