Zoom Talk: Tim Räz (Bern) and Tom Sterkenburg (MCMP)
Please contact email@example.com for the password.
Tim Räz (Bern): On Model Interpretability
Some successful machine learning models, and deep neural networks in particular, are black boxes and lack interpretability. As a consequence, there has been increasing interest in the interpretability of ML models. However, the concept of interpretability itself is less than clear, as both computer scientists and philosophers have noted. So far, most philosophers have tried to clarify the nature of interpretability by focussing on non-interpretable models and on methods, such as explainable AI, that aim to make these models more transparent. In this talk, I will try to clarify the nature of interpretability by focussing on the other end of the "interpretability spectrum". I will tackle the following questions: Why are some models, such as linear models and decision trees, considered to be highly interpretable? What can we learn about interpretability from models such as generalized additive models (GAMs) and multivariate adaptive regression splines (MARS), which are more general but still retain some degree of interpretability? And what does this teach us about the concept of interpretability?
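To make the contrast concrete, here is a minimal sketch (with hypothetical features, coefficients, and thresholds, not drawn from the talk) of why linear models and shallow decision trees are usually placed at the interpretable end of the spectrum: every prediction can be decomposed into human-readable per-feature contributions, or traced along an explicit rule path from root to leaf.

```python
# Hypothetical illustration of two classically "interpretable" model families.

# 1. A linear model: the prediction is a sum of per-feature contributions,
#    so each coefficient has a direct reading
#    ("one extra unit of x2 adds 3.0 to the prediction").
COEFFS = {"x1": 2.0, "x2": 3.0}
INTERCEPT = 1.0

def linear_predict(features):
    """Return the prediction plus the contribution of each feature."""
    contributions = {name: COEFFS[name] * value
                     for name, value in features.items()}
    return INTERCEPT + sum(contributions.values()), contributions

# 2. A shallow decision tree: the prediction comes with the explicit
#    sequence of rules that produced it.
def tree_predict(features):
    """Return the predicted label plus the rule path that led to it."""
    path = []
    if features["x1"] <= 2.0:
        path.append("x1 <= 2.0")
        label = "low"
    else:
        path.append("x1 > 2.0")
        if features["x2"] <= 1.0:
            path.append("x2 <= 1.0")
            label = "medium"
        else:
            path.append("x2 > 1.0")
            label = "high"
    return label, path

pred, contribs = linear_predict({"x1": 1.0, "x2": 2.0})
label, path = tree_predict({"x1": 3.0, "x2": 0.5})
print(pred, contribs)  # 9.0 {'x1': 2.0, 'x2': 6.0}
print(label, path)     # medium ['x1 > 2.0', 'x2 <= 1.0']
```

The point of the sketch is that both models answer "why this prediction?" in their own terms, without any post-hoc explanation method; models like GAMs relax the linear form while keeping per-feature readability.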