FFIUM Project Closing Workshop (15-16 September 2022)
The German closing workshop for the project FFIUM (Formalism, Formalization, Intuition and Understanding in Mathematics) will take place on September 15th and 16th at the Munich Center for Mathematical Philosophy.
The project aims to investigate the epistemic aspects of formalisation, and in particular how the process of formalisation contributes to a deeper understanding of a domain of reasoning. The investigation of mathematical practice and what makes it rigorous is the main, but not the exclusive, domain of interest for the project.
A more detailed description of the project's scope can be found on the project website.
FFIUM is a binational French-German project jointly funded by the ANR and the DFG. The partner institutions include the Archives Henri Poincaré (AHP) at the Université de Lorraine, the Institut d'Histoire et de Philosophie des Sciences et des Techniques (IHPST) in Paris, and the Munich Center for Mathematical Philosophy (MCMP) at Ludwig-Maximilians-Universität München.
Speakers
- Volker Halbach (University of Oxford)
- Gerhard Heinzmann (Université de Lorraine)
- Toby Meadows (University of California, Irvine)
- Perceval Pillon (Université Paris 1 Panthéon-Sorbonne)
- Jeffrey Schatz (University of Bergen)
- Will Stafford (University of Bristol)
- David Waszek (Archives Henri Poincaré and Université Paris 1 Panthéon-Sorbonne)
Program (updated 16 September)
Thursday, 15 September
|09:30 - 09:45||Welcome and opening|
|09:45 - 11:00||Talk: Gerhard Heinzmann|
|11:00 - 11:30||Coffee break|
|11:30 - 12:45||Talk: Perceval Pillon|
|12:45 - 14:30||Lunch|
|14:30 - 15:45||Talk: Jeffrey Schatz|
|15:45 - 16:15||Coffee break|
|16:15 - 17:30||Talk: Toby Meadows (via Zoom)|
Friday, 16 September
|09:45 - 11:00||Talk: Volker Halbach|
|11:00 - 11:30||Coffee break|
|11:30 - 12:45||Talk: Will Stafford|
|12:45 - 14:30||Lunch|
|14:30 - 15:45||Talk: David Waszek|
Volker Halbach (University of Oxford): Logical consequence and formal rigour
I outline an analysis of logical consequence. Kreisel's informal rigour is not needed to argue for the adequacy of the account. I present a criterion for logical constants that overcomes at least some of the problems of the usual invariance accounts. In the theory of logical constants, and in the theory of logical consequence in general, I eliminate domains, which are the source of many problems discussed in the literature. My foundational theory, in which the notions of logical constant and validity are analyzed, differs from the usual set-theoretic setting in the addition of a primitive predicate for satisfaction.
Gerhard Heinzmann (Université de Lorraine): The vague demarcation line of the formalization of predicativity
Predicativity is a central concept in mathematics, one that illustrates the unresolved tension between justification, creativity and discoverability in mathematics. The foundational problem that occupied Bertrand Russell and Henri Poincaré in the wake of the discovery of the famous Russell paradox was the question of which propositional functions define sets in a non-circular way, i.e. predicatively. However, the discussion about an explicit characterization of a predicative definition, and of what constitutes a vicious circle, lasted more than 50 years and has not definitively ended. The creativity of the search centered on the effort to show as much of classical mathematics as possible to be predicative. In a first section, we discuss predicative definability from Russell and Poincaré to Weyl, Wang and Lorenzen. In a second section, we move on to relative predicativity, predicative provability, and a denotation system for the limit ordinal Gamma_0 of predicativity. From this we conclude that predicativity does not provide a standard account of the conditions to be fulfilled in order to have evidence for the general concept of mathematical definitions or of proofs proceeding by inferences. The demarcation line between predicative and non-predicative reasoning remains vague.
Toby Meadows (University of California, Irvine): When are two models of computation equivalent?
There is a growing literature focused on understanding when two theories are equivalent for some intents and purposes. The main engine of this work is the theory of relative interpretation. Results from this field can be put to a variety of - sometimes controversial - philosophical purposes. However, the most famous and philosophically significant equivalence results are those used as evidence for the Church-Turing thesis. For example, it is a piece of folk knowledge that Turing machines and register machines can be used to execute the same functions. Nonetheless, this equivalence seems quite weak when compared to those obtained between theories. My goal in this paper is to talk about an ongoing project to better understand the nature of the equivalences that obtain between models of computation.
Perceval Pillon (Université Paris 1 Panthéon-Sorbonne): Formalising obligation? Deontic logic as a limit case of proper formalisation
The aim of this presentation is to give an account of how deontic logic presents a specific challenge to formalisation. Deontic logic tries to formalise a concept, obligation, and a form of reasoning, normative reasoning, that are socially rooted. This family of logics has proven challenging both technically and conceptually, developing from a byproduct of modal logic into an inflation of systems, each designed to treat a specific aspect of the target phenomenon. We will give a brief overview of these difficulties and propose a reflection on the different ways formalisation is managed, especially regarding the aim and potential epistemic gain of such an enterprise. Special attention will be given to the relationship between the intuition of the targeted concept and the system-rooted problems that arise when one formalises.
Jeffrey Schatz (University of Bergen): An Arealist's Guide to the Mathematical Multiverse: Theory Choice, Formalization, and Axiom Selection
Since Cohen’s discovery of the forcing method in 1963, set-theoretic practice has been characterized by the independence of numerous natural mathematical questions from the axioms ZFC with large cardinals. Much of the work in the philosophy of set theory has centered on the question of how, and indeed whether, to determine the truth or falsity of stronger axioms which could settle these independent questions. In this talk, I will survey one recent approach which suggests that the search for a uniquely well-justified collection of stronger axioms is entirely wrong-headed: the set-theoretic multiverse. I will present some recent prominent approaches to the multiverse, noting important differences between them. I will then suggest that the overtly metaphysical nature of the current debate about the multiverse has led to a quagmire, with little hope of resolving the relevant philosophical issues. I instead propose an alternate, arealist reframing of the debate which offers the promise of making progress on these issues: replacing the metaphysical question of “how many set-theoretic universes exist” with the more tractable question of which language set theory should be formalized in. I conclude by examining how this understanding of the multiverse debate should alter our understanding of the virtues of multiversism.
Will Stafford (University of Bristol): Is the Proof-Theoretically Valid Logic Intuitionistic?
Several recent results bring into focus the superintuitionistic nature of most notions of proof-theoretic validity, but little work has been done evaluating the consequences of these results. Proof-theoretic validity claims to offer a formal explication of how inferences follow from the definitions of logical connectives (which are defined by their introduction rules). This paper explores whether the new results undermine this claim. It is argued that, while the formal results are worrying, superintuitionistic inferences come out as valid only because the treatments of atomic formulas are insufficiently general, and a resolution to this issue is proposed.
David Waszek (Archives Henri Poincaré and Université Paris 1 Panthéon-Sorbonne): Boole against Frege, or how to think about symbolic calculi
Part of a broader project exploring how to think about notations and symbolic calculi in logic and mathematics, this talk (based on joint work with Dirk Schlimm) highlights a little-known aspect of Boole's work, namely his original conception of what a (logical or mathematical) calculus is good for, which stands in stark contrast with Frege's more familiar position. Boole has a clear notion of a logical problem; for him, the whole point of a logical calculus is to enable systematic and goal-directed solution methods for such problems. Frege's Begriffsschrift, on the other hand, is a visual tool to scrutinize concepts and inferences, and is a calculus only in the thin sense that every possible transition between sentences is fully and unambiguously specified in advance. While Frege's outlook has dominated much of philosophical thinking about logical symbolism, we argue there is value in reviving Boole's idea of an intrinsic link between, as he put it, a ‘calculus’ and a ‘directive method’ to solve problems.
Contact and Registration
If you would like to attend, either in person or via zoom, please contact Benedict Eastaugh at firstname.lastname@example.org.
Munich Center for Mathematical Philosophy
Ground floor, Room 021
This workshop is funded by Deutsche Forschungsgemeinschaft (DFG) grant number 390218268.