Munich Center for Mathematical Philosophy (MCMP)

Machine Learning meets Mathematical Philosophy (16 June 2023)

The 1st MCML-MCMP Workshop

Idea & Motivation

The ever-increasing impact of methods from machine learning and data science urgently calls for the study of their foundations. Important foundational questions include their mathematical justification, their interpretability, and their consequences for the nature of scientific inference; these questions concern theorists, methodologists, and philosophers alike. This workshop brings together researchers from the Munich Center for Machine Learning (MCML) and the Munich Center for Mathematical Philosophy (MCMP) to exchange ideas on these topics and to initiate possible collaborations.

Program

09:00-09:05 Quick welcome address by the organizers
09:05-09:45 Stefan Kolek Martínez Azagra (relAI/MCML): Mask-Based Explanations of Image Classifiers: A Computational Harmonic Analysis Approach
09:45-10:25 Dr. Tom Sterkenburg (MCMP): Epistemology of learning theory
10:25-10:35 Coffee Break: Open Discussion
10:35-11:05 Dr. Ignacio Ojea (MCMP): Evaluating social generics on Twitter with NLP tools
11:05-11:45 Dr. Giuseppe Casalicchio (MCML): Interpretable Subgroups and Interaction Detection: Bridging the Gap between Local and Global Explanations
11:45-13:15 Lunch Break
13:15-13:55 Dr. Anita Keshmirian (MCMP): Causal Reasoning in Humans vs. Large Language Models: chains vs. common causes
13:55-14:35 Dr. Christoph Jansen (LMU Statistics): Machine learning under weakly structured information: a decision-theoretic perspective
14:35-15:15 Dr. Aras Bacho (MCML): Limitations of Digital Hardware
15:15-15:30 Coffee Break: Open Discussion
15:30-16:10 Dr. Levin Hornischer (MCMP): Semantics for Sub-symbolic Computation
16:10-16:50 Dr. Gunnar König (MCML): Improvement-Focused Causal Recourse
16:50-17:30 Prof. Hannes Leitgeb (MCMP): Vector Epistemology
18:30 Dinner at the Chinesischer Turm, on a Dutch-treat (self-pay) basis

Abstracts

Stefan Kolek: Mask-Based Explanations of Image Classifiers: A Computational Harmonic Analysis Approach

Image classification is a vital task in machine learning, and recent progress has enabled modern deep image classifiers to achieve human-level accuracy on challenging tasks. However, the lack of interpretability of their decisions may hinder their adoption in sensitive applications. In this talk, I will present a novel approach to mask explanations of image classifiers based on computational harmonic analysis. Our method optimizes a deletion mask in the wavelet or shearlet image representation to remove information that is unnecessary for the classification decision. We demonstrate the theoretical and experimental benefits over common pixel-space methods. Additionally, we provide a new theory to quantify and analyze the quality of mask explanations. Finally, I will share my perspective on the future of explainable artificial intelligence and how we can close the gap between human-level explanations and current explanation tools in machine learning.
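To make the general shape of such mask-based explanations concrete, here is a minimal pixel-space sketch in PyTorch, assuming a generic classifier `model` and input tensor `image` (both hypothetical); the method presented in the talk instead optimizes the mask over wavelet or shearlet coefficients, which is precisely its advantage over pixel-space approaches like this one.

```python
# Minimal pixel-space sketch of a mask-based explanation.  `model` and
# `image` (a C x H x W tensor) are hypothetical; the talk's method works
# in a wavelet/shearlet representation instead of pixel space.
import torch

def explain_with_mask(model, image, target_class, steps=300, lam=1e-2, lr=0.05):
    """Optimize a soft deletion mask: keep as little of the input as possible
    while preserving the classifier's score for target_class."""
    model.eval()
    mask_logits = torch.zeros_like(image, requires_grad=True)  # one value per pixel
    baseline = torch.zeros_like(image)                          # deleted content -> zeros
    opt = torch.optim.Adam([mask_logits], lr=lr)
    for _ in range(steps):
        mask = torch.sigmoid(mask_logits)           # mask values in (0, 1)
        masked = mask * image + (1 - mask) * baseline
        score = model(masked.unsqueeze(0))[0, target_class]
        loss = -score + lam * mask.mean()           # keep score high, mask sparse
        opt.zero_grad()
        loss.backward()
        opt.step()
    return torch.sigmoid(mask_logits).detach()
```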

Tom Sterkenburg: Epistemology of learning theory

Generalization bounds in learning theory provide reason for claiming that certain standard learning algorithms are, in fact, good algorithms. That is, they provide a justification for certain learning algorithms, analogous to the project in formal philosophy of science of providing a justification for inductive inference. But in both cases, there arises a question of how such a purely formal justification could be consistent with skeptical impossibility results and arguments. In this talk, I discuss how classical learning theory yields an analytic yet model-relative justification for learning algorithms that fits naturally into a broader epistemological perspective on the nature of inquiry. I will further sketch how this is the basis for a means-ends justification of Occam's razor, the principle that a simplicity preference is conducive to good inductive reasoning. Finally, I will say a little about the recent debate over the failure of classical learning theory to explain the generalization of modern algorithms like deep neural networks, and the epistemological contours of a new theory.
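For readers outside learning theory, the kind of bound at issue can be illustrated by the standard uniform-convergence result for a finite hypothesis class \(\mathcal{H}\) (a textbook example, not a result from the talk): with probability at least \(1-\delta\) over \(n\) i.i.d. samples,

\[
  \sup_{h \in \mathcal{H}} \big| R(h) - \hat{R}_n(h) \big| \;\le\; \sqrt{\frac{\ln |\mathcal{H}| + \ln(2/\delta)}{2n}},
\]

where \(R(h)\) is the true risk and \(\hat{R}_n(h)\) the empirical risk of hypothesis \(h\).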

Ignacio Ojea: Evaluating social generics on Twitter with NLP tools

In philosophy and linguistics, generics are generalizations (about things, people, etc.) that do not involve an explicit quantifier (e.g., ‘many’, ‘most’), and that refer to a whole category of things, people, etc. as such. They are people’s default way of expressing generalizations, and they are pervasive even in scientific papers, implying general, timeless conclusions (DeJesus et al., 2019; Peters & Carman, 2023). Social generics can be resistant to counterevidence, express and facilitate stereotyping (Leslie, 2017), and they may feed into ‘us–versus–them’ polarization (Roberts, 2022). Despite their importance, social generics are under-researched in the NLP community. In this paper we (a) develop a training data set for social generics, (b) perform a classification task using standard and state-of-the-art models, both pre-trained and not, and (c) evaluate the use of generics on Twitter, both in terms of their sentiment and their attention power.
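As a point of reference for step (b), a simple "standard, non-pre-trained" baseline for classifying sentences as generic or non-generic could look like the sketch below; the sentences and labels are illustrative placeholders, not the data set developed in the paper.

```python
# Toy baseline for classifying sentences as generic vs. non-generic.
# The sentences and labels are illustrative placeholders, not the paper's
# training data; stronger results would come from pre-trained language models.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

sentences = [
    "Politicians lie.",                        # generic (bare plural, whole category)
    "Most politicians in this survey lied.",   # explicitly quantified, not generic
    "Ducks lay eggs.",                         # generic
    "These three ducks laid eggs yesterday.",  # specific episode, not generic
]
labels = [1, 0, 1, 0]  # 1 = generic, 0 = non-generic

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(sentences, labels)
print(clf.predict(["Immigrants take jobs."]))  # e.g. array([1])
```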

Giuseppe Casalicchio: Interpretable Subgroups and Interaction Detection: Bridging the Gap between Local and Global Explanations

Model-agnostic interpretation methods in machine learning produce explanations for non-linear, non-parametric prediction models. Explanations are often represented in the form of summary statistics or visualizations, e.g. feature importance scores or marginal feature effect plots. Many interpretation methods either describe the behavior of a black-box model locally for a specific observation or globally for the entire model and input space. Methods that produce regional explanations and lie between local and global explanations are rare and not well studied, but can offer a flexible way to combine advantages of both types of explanations. Partial dependence plots visualize global feature effects but may lead to misleading interpretations when feature interactions are present. Here, we introduce regional effect plots with implicit interaction detection, a novel framework to detect interactions between a feature of interest and other features. The framework also quantifies the strength of interactions and provides interpretable and distinct regions in which feature effects can be interpreted more reliably, as they are less confounded by interactions.
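A minimal sketch of the ingredients involved, assuming nothing beyond scikit-learn and NumPy: ordinary partial dependence is computed globally and then within regions defined by a second, interacting feature. This is a generic illustration of regional effects, not the interaction-detection framework introduced in the talk, which finds such regions automatically.

```python
# Partial dependence of a fitted model on one feature, computed globally and
# within regions defined by a second (interacting) feature.  Generic
# illustration only; the talk's framework detects such regions automatically.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(2000, 2))
# Strong interaction: the effect of x0 flips sign depending on x1.
y = X[:, 0] * (X[:, 1] > 0) - X[:, 0] * (X[:, 1] <= 0) + 0.1 * rng.normal(size=2000)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

def partial_dependence(model, X, feature, grid):
    """Average prediction when `feature` is forced to each grid value."""
    values = []
    for v in grid:
        X_mod = X.copy()
        X_mod[:, feature] = v
        values.append(model.predict(X_mod).mean())
    return np.array(values)

grid = np.linspace(-1, 1, 11)
print("global PD:", partial_dependence(model, X, 0, grid).round(2))   # roughly flat, misleading
for name, region in [("x1 > 0", X[X[:, 1] > 0]), ("x1 <= 0", X[X[:, 1] <= 0])]:
    print(name, partial_dependence(model, region, 0, grid).round(2))  # opposite slopes
```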

Anita Keshmirian: Causal Reasoning in Humans vs. Large Language Models: chains vs. common causes

Bayesian Belief Networks (BBNs) provide a common approach to causal structure representation (Pearl, 1988). BBNs are graphs that depict interdependencies among the probabilities of different variables. The variables are interconnected by directed arrows into acyclic structures that indicate their probabilistic dependencies. Bayes' Theorem provides a prescriptive framework for evaluating outcomes in such networks, meaning that deviation from any of its axioms would lead to demonstrably suboptimal reasoning (see Hartmann, 2021). We focus on two canonical BBN structures: Chain (A→C→B) and Common Cause (A←C→B) networks, in which the probability of B should not depend on A if we know C. But humans systematically violate independence assumptions in their causal judgments (cf. Rehder, 2014; Park & Sloman, 2013).

Recently, the scope of a cause, i.e., the number of distinct effects it generates, has been studied as a source of perceived causal strength not prescribed by Bayesian theory (Sussman & Oppenheimer, 2020; Stephan et al., 2023). Sussman and Oppenheimer (2020) argued that the effect of scope depends on the valence of the target effect (B). For positive effects ("boons"), greater scope increases perceived causal strength; when effects are harmful ("banes"), greater scope leads to lower perceived strength instead. Stephan et al. (2023) questioned the validity of this "Bane-Boon effect": when scenarios were abstract enough to eliminate participants' prior domain beliefs, the effect of scope was uniform across positive and negative target effects. As it stands, the results of the two studies are incompatible. We directly compare the mutually exclusive accounts by controlling for valence and varying prior domain knowledge across different scenarios (N=300; https://osf.io/qaydt).

As many violations of normative criteria in BBNs may have foundations in language, we also investigate the effect of causal structure on large language models (LLMs). Prior studies show suboptimal reasoning in causal contexts in LLMs (Binz & Schulz, 2023) across different models (Willig et al., 2022), but not for the particular deviations from normativity we have studied. We query GPT-3.5-Turbo (OpenAI, 2022), GPT-4 (OpenAI, 2023), and Luminous Supreme Control (Aleph Alpha, 2023) using queries identical to those in the human experiments. To advance the debate about structural influences on perceived causal strength, we also examine whether differing perceptions of mechanisms in given structures affect causal strength judgments (Park & Sloman, 2013). If subjects accept the provided networks as the ground truth but base their causal judgments on structural features other than scope, such as the presence of a mechanism, we would expect the Chain condition to receive higher causal power ratings in our particular design. Using hierarchical (Bayesian) mixed-effects models, we show that, of the possibilities we set out to compare, the subjects' intuition is only consistent with a preference for mechanistic chains. This suggests that the perceived boost in causal power in Chains may be due to the perception of the intermediate cause as a reliable mechanism for the network (Ahn et al., 1995). Our result was replicated in the LLMs. We discuss the implications of our findings for causal representation theories in humans and LLMs.
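The normative baseline at issue, that B is independent of A once C is known, holds in both the Chain and the Common Cause factorization and can be verified by direct enumeration; the conditional probability tables in the sketch below are arbitrary illustrative numbers.

```python
# Screening-off in a Chain A -> C -> B: once C is known, A is irrelevant to B.
# The same holds for a Common Cause A <- C -> B, since P(C)P(A|C)P(B|C) also
# factorizes B away from A given C.  All numbers are arbitrary illustrations.
import itertools

p_a = {1: 0.3, 0: 0.7}            # P(A)
p_c_given_a = {1: 0.9, 0: 0.2}    # P(C=1 | A)
p_b_given_c = {1: 0.8, 0: 0.1}    # P(B=1 | C)

def joint(a, c, b):
    pc = p_c_given_a[a] if c == 1 else 1 - p_c_given_a[a]
    pb = p_b_given_c[c] if b == 1 else 1 - p_b_given_c[c]
    return p_a[a] * pc * pb

def p_b_given(a, c):
    return joint(a, c, 1) / sum(joint(a, c, b) for b in (0, 1))

for a, c in itertools.product((0, 1), repeat=2):
    print(f"P(B=1 | A={a}, C={c}) = {p_b_given(a, c):.2f}")
# Prints 0.10 whenever C=0 and 0.80 whenever C=1, regardless of A --
# exactly the independence that human (and LLM) judgments tend to violate.
```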

Christoph Jansen: Machine learning under weakly structured information: a decision-theoretic perspective

Based on a number of joint projects of our group, the talk shows how recent developments in abstract decision theory can be utilized to tackle machine learning problems under weakly structured information. More specifically, we address the problem of constructing information-exhaustive stochastic orderings among random variables that take values in complexly ordered spaces. We additionally show how to statistically test for the derived orderings and discuss ways to robustify the proposed test by implementing ideas from the theory of imprecise probabilities. All presented concepts are exemplified by the (intuitively accessible) special case of multi-dimensional spaces with differently scaled dimensions. These structures allow, on the one hand, a completely natural analysis of many types of non-standard data and, on the other hand, a meta-framework for comparing machine learning algorithms with respect to multidimensional performance measures.
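As a very simple, one-dimensional instance of the kind of comparison at stake (and not the generalized orderings or statistical tests developed by the group), one can check empirical first-order stochastic dominance between two algorithms' performance scores:

```python
# First-order stochastic dominance between two samples of performance scores
# (higher is better): algorithm A dominates B if A's empirical CDF lies
# everywhere at or below B's.  Toy, one-dimensional illustration only.
import numpy as np

def dominates(scores_a, scores_b):
    grid = np.union1d(scores_a, scores_b)
    cdf_a = np.searchsorted(np.sort(scores_a), grid, side="right") / len(scores_a)
    cdf_b = np.searchsorted(np.sort(scores_b), grid, side="right") / len(scores_b)
    return bool(np.all(cdf_a <= cdf_b))

acc_a = np.array([0.81, 0.84, 0.86, 0.90])   # accuracies of algorithm A on 4 data sets
acc_b = np.array([0.78, 0.80, 0.85, 0.88])   # accuracies of algorithm B
print(dominates(acc_a, acc_b))               # True: A first-order dominates B
```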

Aras Bacho: Limitations of Digital Hardware

Digital computers have become an integral part of modern society. The development of the modern computer can be traced back to Alan Turing's introduction of the Turing machine in 1936. In his seminal work "On Computable Numbers," Turing also identified the limitations of the Turing machine, which are indicative of the constraints faced by digital computers in general. In this presentation, we aim to highlight some of these limitations by example. Specifically, we will present results that demonstrate the non-computability or computational hardness of certain classes of problems on digital hardware, regardless of the numerical methods or artificial intelligence techniques applied to solve them. In particular, we will illustrate the existence of polynomial-time computable input data for which the solution of the Poisson equation and the heat equation cannot be computed in polynomial time, unless FP = #P. This inherent complexity persists regardless of the numerical method employed to solve the differential equation.
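For reference, the two PDEs mentioned are, in standard form,

\[
  -\Delta u = f \quad \text{(Poisson equation)}, \qquad \partial_t u - \Delta u = 0 \quad \text{(heat equation)},
\]

posed on a suitable domain with boundary (and, for the heat equation, initial) data.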

Levin Hornischer: Semantics for Sub-symbolic Computation

Despite the success of AI, we still lack a foundational theory of it. We want to interpret and explain the ‘sub-symbolic’ computation performed by the neural networks that drive this success. For classical ‘symbolic’ computation, this problem was solved by semantics: it mathematically describes the meaning of program code. In this talk, we work towards an analogous semantics for sub-symbolic computation.
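For the classical ‘symbolic’ case referred to here, a semantics is just a function from program syntax to mathematical objects; the toy sketch below for arithmetic expressions is purely illustrative and is not the construction for neural networks developed in the talk.

```python
# Toy denotational semantics: map the syntax of arithmetic expressions to the
# numbers they denote.  Illustrates only the classical 'symbolic' case.
from dataclasses import dataclass

@dataclass
class Num:
    value: float

@dataclass
class Add:
    left: object
    right: object

@dataclass
class Mul:
    left: object
    right: object

def denote(expr):
    """The semantic function [[.]] : syntax -> mathematical value."""
    if isinstance(expr, Num):
        return expr.value
    if isinstance(expr, Add):
        return denote(expr.left) + denote(expr.right)
    if isinstance(expr, Mul):
        return denote(expr.left) * denote(expr.right)
    raise ValueError("unknown expression")

print(denote(Mul(Add(Num(1), Num(2)), Num(4))))  # (1 + 2) * 4 = 12.0
```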

Gunnar König: Improvement-Focused Causal Recourse

Algorithmic recourse recommendations, such as Karimi et al.'s (2021) causal recourse (CR), inform stakeholders of how to act to revert unfavourable decisions. However, some actions lead to acceptance (i.e., revert the model's decision) but do not lead to improvement (i.e., may not revert the underlying real-world state). To recommend such actions is to recommend fooling the predictor. We introduce a novel method, Improvement-Focused Causal Recourse (ICR), which involves a conceptual shift: Firstly, we require ICR recommendations to guide towards improvement. Secondly, we do not tailor the recommendations to be accepted by a specific predictor. Instead, we leverage causal knowledge to design decision systems that predict accurately pre- and post-recourse. As a result, improvement guarantees translate into acceptance guarantees. We demonstrate that given correct causal knowledge, ICR, in contrast to existing approaches, guides towards both acceptance and improvement.
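The acceptance-versus-improvement distinction can be made concrete with a toy structural model (illustrative only, not the ICR algorithm of the talk): one feature causes the real-world outcome, another is merely a correlated proxy used by the predictor, and acting on the proxy flips the prediction without changing the outcome.

```python
# Toy illustration of acceptance vs. improvement (not the ICR algorithm).
# X1 causes the outcome Y; X2 is only a non-causal proxy that the predictor
# also uses.  Acting on X2 flips the prediction ("acceptance") but leaves Y
# unchanged ("no improvement"); acting on X1 changes both.
import numpy as np

def outcome(x1):                 # the real-world state depends on X1 only
    return (x1 > 0.5).astype(int)

def predictor(x1, x2):           # the decision system uses both features
    return ((0.5 * x1 + 0.5 * x2) > 0.5).astype(int)

x1, x2 = np.array([0.2]), np.array([0.3])           # rejected individual
print(predictor(x1, x2), outcome(x1))                # [0] [0]

# Recourse via the proxy: prediction flips, underlying outcome does not.
print(predictor(x1, x2 + 0.8), outcome(x1))          # [1] [0]  (acceptance only)

# Recourse via the cause: prediction and outcome both change.
print(predictor(x1 + 0.8, x2), outcome(x1 + 0.8))    # [1] [1]  (improvement)
```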

Hannes Leitgeb: Vector Epistemology

This talk will introduce the basic ingredients of a formal epistemology in which belief, evidence, and reason are represented by vectors, and where learning and induction are represented by vector operations. The resulting system may be interpreted in Bayesian terms, and it can be used to rationally reconstruct some well-known topics and problems from cognitive science.
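One elementary way in which Bayesian learning already looks like a vector operation, offered here only as a toy illustration and not as the system presented in the talk: represent the belief state as a probability vector over hypotheses, and conditioning on evidence as componentwise multiplication by a likelihood vector followed by normalization.

```python
# Toy illustration: belief as a probability vector, Bayesian conditioning as a
# vector operation (componentwise product + normalization).  This is not the
# system presented in the talk.
import numpy as np

prior = np.array([0.5, 0.3, 0.2])        # belief over three hypotheses
likelihood = np.array([0.9, 0.4, 0.1])   # P(evidence | hypothesis)

posterior = prior * likelihood
posterior /= posterior.sum()
print(posterior.round(3))                # [0.763 0.203 0.034]
```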

Organizers

  • Tom Sterkenburg (MCMP)
  • Thomas Meier (MCML)

Venue

Ludwigstr. 33 (Department of Statistics), Seminar Room 144