Munich Center for Mathematical Philosophy (MCMP)

Workshop: Recent Work in Formal Epistemology (28 June 2017)

Idea & Motivation

As the area of formal epistemology continues to grow, this half-day workshop brings together researchers in the field to present work on some recent trends. The workshop will explore a diverse set of issues, including concept learning, representations of ignorance, opinion aggregation, and more.

Speakers

  • Peter Brössel (Ruhr-Universität Bochum)
  • Ben Eva (MCMP/LMU)
  • Lee Elkin (MCMP/LMU)
  • Richard Pettigrew (University of Bristol)

Organizers

  • Lee Elkin (MCMP/LMU)
  • Stephan Hartmann (MCMP/LMU)

Registration

Attendance is free, but registration is required. To register, please email Lee Elkin (ljelkin3@gmail.com).

Acknowledgement

This workshop is generously supported by the Alexander von Humboldt Foundation and organized by the Munich Center for Mathematical Philosophy (MCMP, LMU Munich).

Contact

For information about the workshop, please contact Lee Elkin (ljelkin3@gmail.com).

Program

28 June 2017

Time          Event
09:00 - 09:05 Welcome
09:05 - 09:50 Peter Brössel: Bayesian Concept Learning
09:50 - 10:35 Lee Elkin: Transitioning from Ignorance to Informativeness
10:35 - 10:50 Coffee Break
10:50 - 11:35 Ben Eva: Multi-Level Explanation and Causation: An Axiomatic Approach
11:35 - 12:20 Richard Pettigrew: Accuracy and Aggregating Credences

Abstracts

Peter Brössel: Bayesian Concept Learning

In this paper, we introduce a new account of concept acquisition and word learning that combines the very popular account of Bayesian word learning by Xu and Tenenbaum (2007) with Gärdenfors' (2000, 2014) account of conceptual representation and conceptual thought. According to Xu and Tenenbaum's (2007) Bayesian account of word learning, children learn new terms as if they were perfect Bayesian agents. But how should we determine the prior probabilities of the evidence and the hypothesis, as well as the likelihood? Xu and Tenenbaum's rough idea is to tie these probabilities to the size of the concept that we refer to with the given term, which, according to them, roughly corresponds to the average dissimilarity of the objects falling under it. However, the number of objects falling under a concept, or the average dissimilarity of these objects, cannot play that role. It is not the extension of a concept that matters for determining the prior probabilities (after all, this extension might accidentally be very small) but its intension. We demonstrate that Xu and Tenenbaum's approach can be made more precise and fruitful by relying on Gärdenfors's Conceptual Spaces account of conceptual representation and conceptual thought.
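
The "size principle" driving Xu and Tenenbaum's likelihoods can be made concrete with a short sketch. The Python snippet below (hypothetical hypothesis names and toy numbers, not drawn from the paper) implements the extensional reading that the abstract criticizes: the likelihood of n examples under a candidate concept is (1/|extension|)^n, so smaller consistent concepts win out quickly.

```python
# Hypothetical sketch (toy names and numbers, not from the paper) of
# Xu & Tenenbaum-style Bayesian word learning. Hypotheses are candidate
# concept extensions; the "size principle" makes smaller consistent
# hypotheses more likely given the observed examples.

def posterior(hypotheses, priors, examples):
    """Normalized posterior over hypotheses, given examples of a new word."""
    scores = {}
    for name, extension in hypotheses.items():
        if all(x in extension for x in examples):
            # Size principle: each example is sampled uniformly from the
            # extension, so likelihood = (1 / |extension|) ** n.
            scores[name] = priors[name] * (1.0 / len(extension)) ** len(examples)
        else:
            scores[name] = 0.0  # hypothesis inconsistent with the data
    total = sum(scores.values())
    return {name: s / total for name, s in scores.items()}

hypotheses = {
    "dalmatians": {"spot", "pongo", "perdita"},
    "dogs":       {"spot", "pongo", "perdita", "rex", "fido"},
    "animals":    {"spot", "pongo", "perdita", "rex", "fido", "tom", "tweety"},
}
priors = {"dalmatians": 1 / 3, "dogs": 1 / 3, "animals": 1 / 3}

# Three dalmatian examples sharply favour the narrowest consistent concept.
print(posterior(hypotheses, priors, ["spot", "pongo", "perdita"]))
```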

Lee Elkin: Transitioning from Ignorance to Informativeness

Representing complete ignorance by vacuous priors in the language of imprecise probability seems quite natural, as the representation admits neither an ounce of evidential support nor a commitment to accepting risky wagers. However, the approach is dynamically challenged in that the agent must remain ignorant for all eternity. The problem stems from a restriction to the canonical belief updating method for imprecise probabilities, namely generalized conditioning. In this paper, I propose an alternative updating method called credal set replacement that allows an agent to transition from a state of ignorance to a more informative state. Furthermore, I point out that the method need not be restricted to this one task; it may be applied more widely.
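
To see why generalized conditioning traps a vacuous agent in ignorance, here is a minimal illustration of my own (a toy model, not from the paper): approximate a vacuous credal set for a binary hypothesis H by a fine grid of precise priors spanning (0, 1), update each member by Bayes' rule, and observe that the posterior interval still spans nearly the whole unit interval, however strong the evidence.

```python
# Toy illustration (my own construction, not from the paper): generalized
# conditioning updates every precise prior in a credal set by Bayes' rule,
# so a (near-)vacuous set stays (near-)vacuous.

def bayes(prior_h, lik_e_given_h, lik_e_given_not_h):
    """Posterior P(H | E) for a single precise prior P(H)."""
    num = prior_h * lik_e_given_h
    return num / (num + (1 - prior_h) * lik_e_given_not_h)

# Approximate a vacuous credal set for H by a fine grid of priors in (0, 1).
grid = [i / 1000 for i in range(1, 1000)]

# Evidence E strongly favours H (likelihood ratio 9 : 1).
posteriors = [bayes(p, 0.9, 0.1) for p in grid]

# The posterior interval still spans nearly all of (0, 1): the agent has
# learned essentially nothing about H, no matter how strong the evidence.
print(min(posteriors), max(posteriors))
```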

Ben Eva: Multi-Level Explanation and Causation: An Axiomatic Approach

In recent years, a number of writers have attempted to utilise Woodward's (2003) interventionist account of causation to provide novel analyses of the famous 'causal exclusion' arguments. However, this literature has paid little attention to the implications of the fundamental axioms of the graphical causal modelling framework in which the interventionist approach is generally formulated. Here we show that, properly applied, these axioms serve to greatly refine and clarify a number of controversies surrounding higher level causation and the soundness of the causal exclusion arguments. We conclude by demonstrating that the answer to the question of whether multiply realisable 'higher level' events can be causally efficacious depends crucially on which variable sets one deems to be 'causally sufficient'.

Richard Pettigrew: Accuracy and Aggregating Credences

We often ask for the opinion of a group of individuals. How strongly does the scientific community believe that the rate at which sea levels are rising increased over the last 200 years? How likely does the UK Treasury think it is that there will be a recession if the country leaves the European Union? What are these group credences that such questions request? And how do they relate to the individual credences assigned by the members of the particular group in question? According to the credal judgment aggregation principle, Linear Pooling, the credence function of a group should be a weighted average or linear pool of the credence functions of the individuals in the group. In this paper, I give an argument for Linear Pooling based on considerations of accuracy, and I respond to two standard objections to the aggregation principle.
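
Linear Pooling itself has a one-line statement: the group's credence in each proposition X is the sum over individuals i of w_i * c_i(X), for non-negative weights w_i summing to one. A minimal sketch with hypothetical experts and toy numbers:

```python
# Minimal sketch of Linear Pooling with hypothetical experts and toy
# numbers: the group credence in each proposition is a weighted average
# of the individual credences.

def linear_pool(credence_functions, weights):
    """Pool a list of credence functions (dicts: proposition -> credence)."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to one"
    propositions = credence_functions[0].keys()
    return {
        p: sum(w * c[p] for w, c in zip(weights, credence_functions))
        for p in propositions
    }

alice = {"recession": 0.7, "no_recession": 0.3}
bob   = {"recession": 0.4, "no_recession": 0.6}

group = linear_pool([alice, bob], [0.5, 0.5])
print(group)  # {'recession': 0.55, 'no_recession': 0.45}
# Note that the pooled credences still sum to one: linear pooling
# preserves probabilistic coherence.
```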

Venue

Luisenstraße 37
80333 Munich
Room A032

LMU Roomfinder

If you need help finding the venue or a particular room, you might consider using the LMU's Roomfinder, a mobile web app that lets you display all of the 22,000 rooms at the 83 locations of the LMU in Munich.