Munich Center for Mathematical Philosophy (MCMP)
Mini-Workshop: Conditioning (13 July 2019)

Idea & Motivation

Conditioning and conditional probability are fundamental concepts in probability theory. The mini-workshop discusses recent results that analyse features of conditioning and conditional probability that are relevant from the perspective of foundations of probability theory, belief revision and Bayesianism.

Attendance

Attendance is free, but please register your intention to attend by sending an email to m.redei@lse.ac.uk by June 30.

Program

Time | Event
09:30 - 10:30 | Franz Dietrich: Belief Revision Generalized: A Joint Characterization of Bayes's and Jeffrey's Rules
10:30 - 10:45 | Coffee Break
10:45 - 11:45 | Z. Gyenis and M. Redei: Bayesian Learning and Modal Logics
11:45 - 12:45 | Stephan Hartmann: The Distance-Based Approach to Bayesianism
12:45 - 14:00 | Lunch Break
14:00 - 15:00 | Tom Sterkenburg: The Truth-Convergence of Open-Minded Bayesianism
15:00 - 15:15 | Coffee Break
15:15 - 16:15 | Rush Stewart: Persistent Disagreement and Polarization in a Bayesian Setting

Abstracts

Franz Dietrich: Belief Revision Generalized: A Joint Characterization of Bayes's and Jeffrey's Rules

We present a general framework for representing belief-revision rules and use it to characterize Bayes's rule as a classical example and Jeffrey's rule as a non-classical one. In Jeffrey's rule, the input to a belief revision is not simply the information that some event has occurred, as in Bayes's rule, but a new assignment of probabilities to some events. Despite their differences, Bayes's and Jeffrey's rules can be characterized in terms of the same axioms: responsiveness, which requires that revised beliefs incorporate what has been learnt, and conservativeness, which requires that beliefs on which the learnt input is 'silent' do not change. To illustrate the use of non-Bayesian belief revision in economic theory, we sketch a simple decision-theoretic application. (Joint work with R. Bradley and C. List)
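The contrast between the two rules can be made concrete on a finite probability space. The following sketch is purely illustrative (the outcomes, the partition {E, not-E}, and the numbers are ours, not from the talk): Bayes's rule conditions on the occurrence of an event, while Jeffrey's rule takes as input a new probability for each cell of a partition, and Bayes's rule falls out as the extreme case where the learnt event gets probability 1.

```python
# Illustrative sketch of Bayes vs. Jeffrey updating on a finite space.
# The outcomes and numbers below are invented for illustration.

def bayes_update(prior, event):
    """Condition a prior (dict outcome -> prob) on an event (set of outcomes)."""
    p_event = sum(p for w, p in prior.items() if w in event)
    return {w: (p / p_event if w in event else 0.0) for w, p in prior.items()}

def jeffrey_update(prior, partition_probs):
    """Jeffrey's rule: the input is a new probability for each cell of a
    partition. partition_probs: list of (cell, new_prob) pairs with disjoint
    cells whose new probabilities sum to one."""
    posterior = {w: 0.0 for w in prior}
    for cell, q in partition_probs:
        p_cell = sum(p for w, p in prior.items() if w in cell)
        for w in cell:
            posterior[w] = prior[w] * q / p_cell
    return posterior

# Four outcomes; the prior gives E = {a, b} probability 0.5.
prior = {"a": 0.2, "b": 0.3, "c": 0.4, "d": 0.1}
E, notE = {"a", "b"}, {"c", "d"}

# Bayes: learn that E occurred.
post_bayes = bayes_update(prior, E)

# Jeffrey: the experience merely shifts P(E) from 0.5 to 0.8.
post_jeffrey = jeffrey_update(prior, [(E, 0.8), (notE, 0.2)])

# Bayes's rule is the special case of Jeffrey's rule with P_new(E) = 1.
post_extreme = jeffrey_update(prior, [(E, 1.0), (notE, 0.0)])
assert all(abs(post_bayes[w] - post_extreme[w]) < 1e-12 for w in prior)
```

Note how conservativeness shows up in the code: within each cell, the relative odds of the outcomes are untouched; only the cell's total probability is rescaled.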

Z. Gyenis and M. Redei: Bayesian Learning and Modal Logics

In this talk we present probabilistic inference as a logical inference: we define a hierarchy of modal logics that are capable of capturing general principles that probabilistic learning satisfies. We put the emphasis on probabilistic inference based on Bayes or Jeffrey updating, and determine many of the features of the corresponding logics. The talk is based on a series of recent joint papers with William Brown and Miklos Redei.


Stephan Hartmann: The Distance-Based Approach to Bayesianism

Bayesianism is currently our best theory of uncertain reasoning with many applications in philosophy and related fields. In this talk, I will first present the standard approach to Bayesianism which recommends (Jeffrey) conditionalization as the canonical updating method. I will then point out several limitations of the standard approach and propose the distance-based approach to Bayesianism as a remedy. The distance-based approach generalizes the standard approach and has many important applications, such as modeling the learning of conditional information. It also allows us to address “the problem of the algebra”, i.e. the problem of how to specify the new probability distribution after adding a new variable to the algebra. Finally, I apply the distance-based approach to the problem of probability aggregation and show how the proposed methodology can be used to provide a rational justification of the probability weighting functions used in Prospect Theory.
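One standard instance of a distance-based update uses the Kullback-Leibler divergence: among all distributions satisfying the learnt constraint, pick the one closest to the prior. The brute-force check below (our own illustration, not the talk's construction; the prior and constraint are invented) recovers the well-known fact that for a constraint of the form "the new probability of E is q", the KL-closest distribution is exactly the Jeffrey update.

```python
import math

# Invented prior over four outcomes; learnt constraint: P_new({a, b}) = 0.8.
prior = {"a": 0.2, "b": 0.3, "c": 0.4, "d": 0.1}

def kl(q, p):
    """Kullback-Leibler divergence D(q || p) on a shared finite support."""
    return sum(qi * math.log(qi / p[w]) for w, qi in q.items() if qi > 0)

# Jeffrey's update of the prior for P_new(E) = 0.8 with E = {a, b}.
jeffrey = {"a": 0.32, "b": 0.48, "c": 0.16, "d": 0.04}

# Grid-search all distributions satisfying the constraint; t splits the 0.8
# inside E between a and b, s splits the 0.2 outside E between c and d.
best, best_div = None, float("inf")
n = 200
for i in range(1, n):
    for j in range(1, n):
        t, s = i / n, j / n
        q = {"a": 0.8 * t, "b": 0.8 * (1 - t), "c": 0.2 * s, "d": 0.2 * (1 - s)}
        d = kl(q, prior)
        if d < best_div:
            best, best_div = q, d

# The KL-minimizer matches Jeffrey's rule (to grid precision).
assert all(abs(best[w] - jeffrey[w]) < 0.01 for w in jeffrey)
```

The point of the distance-based perspective is that the same recipe still applies when the constraint is not of this simple form, e.g. when one learns conditional information, where (Jeffrey) conditionalization alone gives no answer.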

Tom Sterkenburg: The Truth-Convergence of Open-Minded Bayesianism

Following suggestions by Shimony (1970) and Earman (1992), Wenmackers and Romeijn (2015) work out an extension of Bayesian confirmation theory that can deal with newly proposed hypotheses. I will discuss how their "open-minded Bayesianism" does not preserve the classic guarantee of almost-sure merger with the true hypothesis, and propose a "forward-looking" open-minded Bayesian that does retain this guarantee.
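The classic guarantee at issue is easy to see in the simplest closed-minded setting: with a fixed, finite set of hypotheses containing the truth, the posterior almost surely concentrates on the true hypothesis. The toy simulation below (our own illustration; it does not model the open-minded setting, where new hypotheses are introduced mid-stream) shows this concentration for coin-bias hypotheses.

```python
import random

# Toy illustration of Bayesian truth-convergence: three candidate biases for
# a coin, one of which is true; i.i.d. evidence drives the posterior onto it.
# (This is the classical closed-minded case, not the talk's open-minded one.)

random.seed(0)
hypotheses = [0.2, 0.5, 0.8]          # candidate probabilities of heads
true_bias = 0.8
posterior = {h: 1 / 3 for h in hypotheses}

for _ in range(500):
    heads = random.random() < true_bias          # sample a toss from the truth
    likelihood = {h: (h if heads else 1 - h) for h in hypotheses}
    z = sum(posterior[h] * likelihood[h] for h in hypotheses)
    posterior = {h: posterior[h] * likelihood[h] / z for h in hypotheses}

assert max(posterior, key=posterior.get) == true_bias
```

The open-minded extension is delicate precisely because reserving probability for hypotheses not yet formulated can interfere with this kind of convergence argument.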

Rush Stewart: Persistent Disagreement and Polarization in a Bayesian Setting

For two ideally rational agents, does learning a finite amount of shared evidence necessitate agreement? No. But does it at least guard against belief polarization, the case in which their opinions get further apart? No. OK, but are rational agents guaranteed to avoid polarization if they have access to an infinite, increasing stream of shared evidence? No.
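One simple mechanism behind polarization on shared evidence is that agents can agree on the evidence yet interpret it through different likelihood functions. The sketch below is our own illustrative toy, not the model from the talk: two Bayesian agents start from the same prior, observe the very same evidence stream, and their credences in a hypothesis H move strictly further apart at every step.

```python
# Toy polarization on shared evidence: identical priors, identical evidence,
# different likelihoods. (Invented numbers; not the talk's model.)

def update(credence, lik_if_h, lik_if_not_h):
    """One Bayes update of P(H) on an observation with the given likelihoods."""
    num = credence * lik_if_h
    return num / (num + (1 - credence) * lik_if_not_h)

p_alice = p_bob = 0.5                 # shared prior in hypothesis H
gap = [abs(p_alice - p_bob)]

for _ in range(20):                   # the same piece of evidence, 20 times
    p_alice = update(p_alice, 0.7, 0.3)   # Alice: the evidence favors H
    p_bob = update(p_bob, 0.3, 0.7)       # Bob: the same evidence favors not-H
    gap.append(abs(p_alice - p_bob))

# Their opinions get strictly further apart at every step.
assert all(gap[i] < gap[i + 1] for i in range(len(gap) - 1))
```

Both agents are perfectly coherent throughout; the divergence comes entirely from the likelihoods, which shared evidence does nothing to reconcile.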

Venue

Professor-Huber-Platz 2
80539 München
Room W 401

LMU Roomfinder

If you need help finding the venue or a particular room, consider using the LMU Roomfinder, a mobile web app that can display all 22,000 rooms at the LMU's 83 locations in Munich.