Munich Center for Mathematical Philosophy (MCMP)

Zoom Talk: Tim Räz (Bern) and Tom Sterkenburg (MCMP)

Meeting-ID: 950 1039 5841

08.12.2021, 16:00 – 18:00

Please contact office.leitgeb@lrz.uni-muenchen.de for the password.


Tim Räz (Bern): On Model Interpretability

Some successful machine learning models, and deep neural networks in particular, are black boxes and lack interpretability. As a consequence, there has been increasing interest in the concept of interpretability of ML models. However, the concept of interpretability is less than clear, as has been noted by both computer scientists and philosophers. So far, most philosophers have tried to clarify the nature of interpretability by focusing on non-interpretable models and on methods, such as explainable AI, that aim to make these models more transparent. In this talk, I will try to clarify the nature of interpretability by focusing on the other end of the "interpretability spectrum". I will tackle the following questions: Why are some models, such as linear models and decision trees, considered to be highly interpretable? What can we learn about interpretability from models such as generalized additive models (GAMs) and multivariate adaptive regression splines (MARS), which are more general but still retain some degree of interpretability? And what does this teach us about the concept of interpretability?
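As a minimal sketch of the "highly interpretable" end of the spectrum (assuming Python with scikit-learn and its bundled Iris data; the model and dataset choices are illustrative only, not taken from the talk), a shallow decision tree can be printed in full as a short list of if-then rules, which is precisely the kind of transparency that deep neural networks lack:

# Minimal sketch: fit a shallow decision tree and print it as readable rules.
# Assumes scikit-learn is installed; the Iris data serves only as an example.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
X, y = data.data, data.target

# A depth-2 tree: the entire fitted model consists of a few if-then splits.
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# export_text renders the whole model as human-readable rules,
# e.g. "petal width (cm) <= 0.80" leading to class 0.
print(export_text(tree, feature_names=data.feature_names))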

Tom Sterkenburg (MCMP): The theoretical justification for machine learning algorithms

What reasons can we offer for thinking that our standard machine learning algorithms are reliable? A natural place to look for such reasons is the theory of machine learning, which offers theoretical guarantees to the effect that some (standard) learning procedures are better than others. At the same time, however, there exist mathematical results (in particular, the infamous no-free-lunch theorems) to the effect that every possible learning algorithm is as good (or as bad) as the next one. In my talk, I will first dissolve this apparent paradox by presenting an account of the model-relative justification that learning theory gives for standard learning algorithms. Second, I will discuss two challenges for this account, coming from the opposite directions of theory and practice.
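For orientation, a schematic statement of one standard form of the no-free-lunch result (in the spirit of Wolpert's supervised-learning version; the notation is an assumption here, not the precise formulation discussed in the talk): when performance is averaged uniformly over all possible target functions, any two learning algorithms do equally well outside the training sample,

\[ \sum_{f} \mathbb{E}\big[\mathrm{err}_{\mathrm{OTS}}(A_1, f, m)\big] \;=\; \sum_{f} \mathbb{E}\big[\mathrm{err}_{\mathrm{OTS}}(A_2, f, m)\big], \]

where f ranges over all target functions on a finite domain, m is the training-sample size, err_OTS denotes off-training-set error, and A_1, A_2 are any two learning algorithms. The apparent paradox is that such a uniform-average result seems to conflict with theoretical guarantees that single out some learning procedures as better than others.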