Formal Epistemology Meets Philosophy of AI
July 5–6, 2025
Idea & Motivation
The workshop aims to explore novel topics in Formal Epistemology that bear on philosophical or foundational questions about AI, and novel topics in the foundations or philosophy of AI that bear on questions in Formal Epistemology.
Speakers
- Javier Belastegui Lazcano (ILCLI)
- Cordelia Berz (MCMP)
- Tina Eliassi-Rad (Northeastern University)
- Branden Fitelson (Northeastern University)
- Levin Hornischer (MCMP)
- Simon Huttegger (UC Irvine)
- Hannes Leitgeb (MCMP)
- Alessandra Marra (MCMP)
- Silvia Milano (Exeter)
- Jan-Willem Romeijn (Groningen)
- Tom Sterkenburg (MCMP)
Program
July 5th

Time | Event |
---|---|
08:30 - 09:00 | Welcome & Coffee |
09:00 - 10:30 | Branden Fitelson (Northeastern University) |
10:45 - 12:15 | Simon Huttegger (UC Irvine) |
12:15 - 13:15 | Lunch Break |
13:15 - 14:45 | Hannes Leitgeb (MCMP) |
15:00 - 16:30 | Alessandra Marra (MCMP) & Javier Belastegui Lazcano (ILCLI) |
16:30 - 17:00 | Coffee Break |
17:00 - 18:30 | Jan-Willem Romeijn (Groningen) |
19:00 | Dinner at Arabesk (https://www.arabesk.de/en/restaurant/), Kaulbachstraße 86, 80802 Munich |

July 6th

Time | Event |
---|---|
08:30 - 09:00 | Welcome & Coffee |
09:00 - 10:30 | Tina Eliassi-Rad (Northeastern University) |
10:45 - 12:15 | Cordelia Berz (MCMP) |
12:15 - 13:15 | Lunch Break |
13:15 - 14:45 | Levin Hornischer (MCMP) |
15:00 - 16:30 | Tom Sterkenburg (MCMP) |
16:30 - 17:00 | Coffee Break |
17:00 - 18:30 | Silvia Milano (Exeter) |
Abstracts
Branden Fitelson: Automated Reasoning Tools for Pedagogy & Research in Logic & Formal Epistemology
In this talk, I will discuss various computational tools that I have been using in both my teaching and my research in Logic and Formal Epistemology. Notable applications include the logic and meaning of sentential connectives (especially conditionals), probability, and inductive logic.
Tina Eliassi-Rad: The Trilemma of Truth: Assessing Truthfulness of Responses by Large Language Models
I will present sAwMIL, an approach to separating true, false, and unverifiable statements in large language models (LLMs). sAwMIL addresses flawed assumptions made in previous work. I will also discuss criteria for validating probes like sAwMIL that track the truthfulness of statements in LLMs. I will conclude with findings from a comprehensive study involving multiple LLMs and probe datasets. If time permits, I will outline my larger project on measuring epistemic instability in human-AI societies. This is joint work with Germans Savcisens.
Organizer
Hannes Leitgeb
Venue
IBZ, Amalienstr. 38, 80799 Munich.
Registration
Please register by sending an email to Office.Leitgeb@lrz.uni-muenchen.de.