Munich Center for Mathematical Philosophy (MCMP)
AI in Science: Foundations and Applications (9 - 10 June 2022)

Idea & Motivation

Artificial intelligence (AI) is all the rage these days, promising innovations that will make our lives easier. It is also significantly changing the way we do science, raising fundamental and methodological questions about, for example, the role of bias, the explainability of models, and the limits of empirical methods. Addressing these questions requires an interdisciplinary effort to which various sciences, from computer science to social science to philosophy, can contribute. This workshop brings together researchers from Cambridge and LMU Munich to engage in these discussions. It is part of the project "Decision Theory and the Future of AI", funded by the Cambridge-LMU Strategic Partnership Initiative. The workshop is also part of the Research Focus Next Generation AI at LMU’s Center for Advanced Studies (CAS).

Speakers include Timo Freiesleben (MCMP/LMU), Frauke Kreuter (Statistics/LMU), Jacob Stegenga (HPS/Cambridge), Tom Sterkenburg (MCMP/LMU), Apolline Taillandier (LFI/Cambridge), Daniel Gruen (Physics/LMU), Alexander Fraser (CIS/LMU) and Audrey Borowski (LMU/Oxford).

The workshop is organized by Timo Freiesleben (MCMP/LMU), Stephan Hartmann (MCMP/LMU), Huw Price (Cambridge/Bonn), Tom Sterkenburg (MCMP/LMU) and Rush Stewart (KCL).

The conference will be preceded by a Special Evening Lecture on 8 June 2022:

Huw Price (Cambridge): "Time For Pragmatism"

Time: 4:15 - 5:45 PM

Location: LMU Main Building, Geschwister-Scholl-Platz 1, Room F007, 80539 München

Registration

To register, please send a message to Timo Freiesleben (timo.freiesleben@campus.lmu.de) by 31 May 2022.

Day 1 (9 June 2022) - Location: Edmund-Rumpler-Straße 9, 80939 München, A 121

Time | Event
09:00 Welcome
09:15 - 10:00 Frauke Kreuter: "Robust AI and Diversity in Data"
10:00 - 10:45 Jacob Stegenga and Hamed Tabatabaei Ghomi: "No Experience Necessary"
10:45 - 11:00 Coffee Break
11:00 - 11:45 Daniel Gruen: "Cosmological challenges for Artificial Intelligence"
11:45 - 12:30 Atoosa Kasirzadeh: "In Conversation with AI: Large Language Models and Value Alignment"
12:30 - 14:00 Lunch Break
14:00 - 14:45 Timo Freiesleben: "Scientific Inference With Interpretable Machine Learning"
14:45 - 15:30 Uwe Peters: "Regulative Reasons: On the Difference in Opacity between Algorithmic and Human Decision-Making"
15:30 - 16:00 Coffee Break
16:00 - 16:45 Tom Sterkenburg: "Machine Learning and the Philosophical Problem of Induction"
18:00 Conference Dinner @ Brunnwart, Biedersteiner Str. 78, 80805 München

Day 2 (10 June 2022) - Location: Edmund-Rumpler-Straße 13, 80939 München, B 257

Time | Event
09:30 - 10:15 Alexander Fraser: "Some Open Problems in Multilingual Natural Language Processing"
10:15 - 11:00 Henry Shevlin: "Uncanny Believers: Chatbots, Beliefs, and Folk Psychology"
11:00 - 11:15 Coffee Break
11:15 - 12:00 Apolline Taillandier: "Feminist psychology and computer programming at MIT"
12:00 - 12:45 Audrey Borowski: "Searching after Openness"
12:45 - 14:00 Lunch Break
14:00 Discussion

Abstracts

Audrey Borowski (LMU/Oxford): Searching after Openness

AI systems have been with us for several decades now, from the birth of cybernetic systems to the advent of machine learning and neural nets. In this talk I explore how AI systems have come to manipulate and cage us in, and to influence our decision-making processes, not without political and social consequences and biases. While we cannot realistically give these systems up, how can we, within this 'automatic society', break through calculation? I draw on various sources from computing but also from philosophy to explore this question. Examining the writings of Alan Turing, Gilles Deleuze and Bernard Stiegler, I set out to shed light on some models that have sought to restore openness, indeterminacy and creativity within AI and computing.

Alexander Fraser (LMU): Some Open Problems in Multilingual Natural Language Processing

Data-driven Machine Translation is an interesting application of machine-learning-based natural language processing techniques to multilingual data. Particularly with the recent advent of powerful neural network models, it has become possible to incorporate many types of information directly into the model and to robustly model long-distance dependencies in the sequence of words being generated.

I will discuss three areas of work addressing important weaknesses of data-driven machine translation approaches. First, I will discuss the problem of data sparsity in translation caused by rich morphology, and the extensive work we have carried out to overcome it. Second, I will discuss progress towards breaking the strong domain dependency between the data used to train supervised neural machine translation systems and the data that will be translated. Finally, time allowing, I will briefly present new research into building strong unsupervised machine translation systems, enabling high-quality translation between pairs of languages for which no known source of parallel training data exists.

Timo Freiesleben (LMU): Scientific Inference With Interpretable Machine Learning

Interpretable machine learning (IML) is concerned with the behavior and the properties of machine learning models. Scientists, however, are only interested in the model as a gateway to understanding the modeled phenomenon. We show how IML methods must be developed so that they give insight into relevant properties of the phenomenon. We argue that for current IML methods it remains unclear whether model interpretations have a corresponding phenomenon interpretation, because two goals of model analysis are conflated: model audit and scientific inference. Building on statistical decision theory, we show that ML model analysis allows us to describe relevant aspects of the joint data probability distribution. We provide a five-step framework for constructing IML descriptors that can help in addressing scientific questions, including a natural way to quantify epistemic uncertainty.

Daniel Gruen (LMU): Cosmological challenges for Artificial Intelligence

Falsification of the cosmological 'standard model' of a dark energy and dark matter dominated, general relativity governed Universe likely requires high experimental precision and/or a look into features that have so far escaped analytical modeling. To this end, artificial intelligence can be used in multiple ways. With suitably designed architectures and cost functions, it can be a tool for accurate calibration of data and control of systematic errors. Trained on numerical simulations, it can serve as a model for complex astrophysical systems that exceeds the abilities of human theorists in analytically expressing the expected distribution of their features. By replacing costly calculations with emulation, it can enable analyses that would otherwise greatly exceed what is computationally feasible.

Atoosa Kasirzadeh (LMU): In Conversation with AI: Large Language Models and Value Alignment

Large-scale language technologies are increasingly used in various forms of communication with humans across different contexts. One particular use case of these technologies is conversational agents, which output natural language text in response to prompts and queries. This mode of engagement raises a number of social and ethical questions, including: what does it mean to align conversational agents with human values? Which values should they be aligned with? And how can this be done? In this paper, we propose a number of steps that help answer these questions. We start by developing a philosophical analysis of the building blocks of linguistic communication between conversational agents and human interlocutors. We then use this analysis to identify and formulate ideal norms of conversation that can serve as mechanisms governing linguistic communication between humans and conversational agents. Furthermore, we explore how these norms can be used to align conversational agents with human values across a range of different discursive domains. We conclude by examining some practical implications of our proposal for future research into the creation of aligned conversational agents.

Frauke Kreuter (LMU): Robust AI and Diversity in Data

Artificial intelligence (AI) and Big Data offer enormous potential to explore and solve complex societal challenges. In the labor market context, for instance, AI is used to optimize bureaucratic procedures and minimize potential errors in human decisions. AI is also applied to identify patterns in digital data trails, which are created when people use smartphones and IoT devices or browse the internet, for example. Unfortunately, the fact that all of this depends on social and economic contexts is often ignored when AI is used, and the importance of high-quality data is frequently overlooked. There is growing concern about the lack of fairness, an essential criterion for making good use of AI, and about the lack of diversity in data. Fairness in this context means the adequate consideration of different social groups in the data and in pattern recognition. Diversity in data addresses the challenge of capturing all the appropriate pieces of information from diverse social groups.
This talk outlines the latest developments in the use of AI and Big Data, with applications in economics, social research, and policy making. Frauke Kreuter explains the pitfalls around their application and how the scientific community can get a grip on matters of ethics and privacy without giving up the possibility of reproducing and re-using the data. She also outlines the essential prerequisites for structuring the use of AI in a good way.

Uwe Peters (Cambridge/Bonn): Regulative Reasons: On the Difference in Opacity between Algorithmic and Human Decision-Making

Many artificial intelligence (AI) systems used for decision-making are opaque in that the internal factors that determine their decisions are not fully known to people due to the systems’ computational complexity. In response to this problem, several researchers have argued that human decision-making is equally opaque and that, since simplifying, reason-giving explanations (rather than exhaustive causal accounts) of a decision are typically viewed as sufficient in the human case, the same should hold for algorithmic decision-making. Here I contend that this argument overlooks that human decision-making is often significantly more transparent than algorithmic decision-making. This is because when people report the reasons for their decisions, their reports have a regulative function that prompts them to conform to these ascriptions. AI explanation systems lack this feature. Overlooking it when comparing algorithmic and human decision-making can result in underestimations of the transparency of human decision-making and in the development of explainable AI that may mislead people by activating generally warranted beliefs about the regulative dimension of reason giving.

Henry Shevlin (Cambridge): Uncanny Believers: Chatbots, Beliefs, and Folk Psychology

Recent developments in artificial intelligence research have revealed the power of large language models like GPT-3 to generate believable and flexible responses to user prompts. These systems are not generally intelligent in any robust sense, and would struggle to pass demanding formulations of the Turing Test. However, they can serve as believable if imperfect interlocutors in many contexts, and such systems are already being deployed as chatbots in a variety of roles ranging from customer service to social, entertainment, and therapeutic uses. In this paper, I examine a cluster of issues relating to these developments, and in particular to our tendency to anthropomorphise such systems and attribute beliefs to them. I argue that people likely will attribute (and indeed already are attributing) mental states like beliefs to such systems, and that this is likely to have significant ramifications for both cognitive science and society at large. Such attributions, I go on to argue, find little support in contemporary theories of belief: even those theories that are in principle more open to ascribing beliefs to current AI systems would not straightforwardly attribute humanlike beliefs to near-future AI systems. I conclude with what I take to be an interesting dilemma for cognitive science and philosophy: to the extent that the general public uses psychological state terms more liberally than academic consensus supports, should it be our role to ‘re-educate’ the public, or should we instead be open to the idea that dominant models of belief and other mental states in academia no longer match ordinary usage of these terms?

Jacob Stegenga and Hamed Tabatabaei Ghomi (Cambridge): No Experience Necessary

How reliable are first-person reports about type-level causal relations? For example, a physician prescribes a drug to a patient, and the patient subsequently experiences changes to their symptoms. The physician makes an inference that the drug caused those changes. Are such inferences reliable guides to the general causal relation in question? The evidence-based medicine movement says no, while some physicians and philosophers support such appeals to first-person experience. We develop a formal model and simulate such scenarios. We conclude that for typical clinical scenarios, first-person experience is not a reliable guide to causal inference. Our next step is to ask whether AI systems can be more reliable for such inferences.

Tom Sterkenburg (LMU): Machine Learning and the Philosophical Problem of Induction

Hume's classical argument says that we cannot justify inductive inferences. Impossibility results like the no-free-lunch theorems underwrite Hume's skeptical conclusion for machine learning algorithms. At the same time, the mathematical theory of machine learning gives us positive results that do appear to provide justification for standard learning algorithms. I show that there is no conflict here, but rather two different conceptions of formal learning methods that lead to two different demands on their justification. I further discuss how these perspectives support broader epistemological outlooks on automated inquiry, and how they relate to contemporary proposals in the philosophy of inductive inference.

Apolline Taillandier (Cambridge/Bonn): Feminist psychology and computer programming at MIT

This paper studies how the computer language Logo, first developed at MIT in the late 1960s as an educational programme for teaching mathematics, came to be understood as a feminist tool. Logo was initially described as an "applied artificial intelligence" project (McCorduck, 2004) that would contribute to popularising a pluralist, democratic approach to programming. School experiments with Logo brought evidence that different kinds of children practised programming in different ways: while boys often developed a traditional 'hard' style of programming, girls often programmed in a style that Logo advocates called 'tinkering' or bricolage. Drawing from Seymour Papert and Sherry Turkle's writings and archival material, I trace how Logo was recast over the 1980s as a tool for undermining sexist norms within computer science. This sheds light on how feminist debates about epistemology and morality contributed to reshaping the terms of gender politics in US academia.

Location

Edmund-Rumpler-Straße 9, 80939 München, A 121 on June 9th

Edmund-Rumpler-Straße 13, 80939 München, B 257 on June 10th