Munich Center for Mathematical Philosophy (MCMP)

Abstracts


Dan Christensen: The Interaction Between Homotopy Type Theory and Mathematics

I will begin with a brief introduction to type theory and will discuss some of its unique features: proofs in type theory can be formally verified by a computer; proofs in type theory have computational content; and theorems in type theory can be interpreted as statements about topological spaces and, more generally, as facts about any infinity topos. These features have influenced mathematical practice, leading to formal proofs of major theorems, collaborations with computer scientists and logicians, and renewed interest in constructive reasoning. Moreover, the topological applications have influenced type theory by leading to the creation of homotopy type theory, which augments type theory with Voevodsky's univalence axiom and higher inductive types, tools that make type theory more convenient for formalizing mathematics.
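As a minimal illustration of the first two features (added for concreteness, not taken from the talk), here is a sketch in Lean 4: the kernel checks the proof term against the stated type, and definitions are programs that can be run.

```lean
-- Minimal illustration (not from the talk): a machine-checked proof in Lean 4.
-- The kernel verifies that the proof term has the stated type.
theorem add_comm_example (m n : Nat) : m + n = n + m :=
  Nat.add_comm m n

-- Computational content: definitions in type theory can be executed,
-- not merely checked.
def double (n : Nat) : Nat := n + n

#eval double 21  -- 42
```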

Robert Corless: Structured Backward Error and Condition Number: Tools from Numerical Analysis Interpreted in a Modelling Context

Since the pioneering work of J.H. Wilkinson, and before that of Alan Turing, and before that of von Neumann and Goldstine, backward error and condition number have been used in the numerical analysis of problems of linear algebra. As was known to Peter Henrici, these tools are actually of very wide utility. I discuss these tools in some fixed modelling contexts and show some of their strengths and weaknesses.
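To fix ideas, here is a minimal sketch of both tools for a linear system Ax = b (an illustration, not from the talk): the normwise relative backward error measures how small a perturbation of the data would make the computed solution exact, and the condition number bounds how much that perturbation can be amplified in the answer.

```python
# Minimal illustration (not from the talk): backward error and condition
# number for a linear system A x = b, using the 2-norm throughout.
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
b = np.array([5.0, 6.0])

x_hat = np.linalg.solve(A, b)    # computed (approximate) solution
r = b - A @ x_hat                # residual of the computed solution

# Normwise relative backward error for perturbations of A alone:
# the smallest ||dA|| / ||A|| such that (A + dA) x_hat = b exactly.
backward_error = np.linalg.norm(r) / (np.linalg.norm(A, 2) * np.linalg.norm(x_hat))

kappa = np.linalg.cond(A, 2)     # condition number of the matrix

# Rule of thumb: relative forward error <= kappa * backward error.
print(f"backward error ~ {backward_error:.2e}, kappa ~ {kappa:.1f}")
```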

Walter Dean: Computational Complexity, Gandy's Thesis, and Measurement Theory

Gandy (1980) proposed a mathematical analysis of mechanical computability with the intention of extending Turing's (1936) original argument for Church's Thesis. Gandy showed that as long as the basic operations of a model of computation satisfy very general physically motivated constraints, then the class of functions computed by the model will be recursive. The goal of this talk will be to assess whether the generalized model which arises from this analysis -- i.e. what has come to be called a Gandy machine -- can serve as an adequate foundation for computational complexity theory. I will begin by formulating this problem relative to a distinction between the so-called first- and second-machine classes, which is related to the properness of the polynomial hierarchy. After presenting some results which suggest that Gandy machines are too powerful to be an accurate gauge of computational complexity, I will outline a proposal for refining this model which is grounded in the framework of measurement theory (in the sense of Krantz et al. 1971).

Thomas De Saegher: On Equivalent Modificatory Solutions to the Measurement Problem

The space of possible modificatory solutions to the measurement problem is often structured by whether the solutions change the state space of quantum mechanics, the dynamics through it, or both. Instead, I consider a way of framing the modification options that is natural when quantum mechanics is reconstructed within the diagrammatic representational framework of Coecke and Kissinger (2017). I elaborate on the notion of a representational framework like this one, designed to make computation-like inferences more perspicuous. I then discuss what it means to regard two modificatory strategies as equivalent when one is expressed in a kinematic-dynamic framework and the other in the more computationally oriented framework.

Benedict Eastaugh: Recursive Counterexamples and the Foundational Standpoint

Recursive counterexamples are classical mathematical theorems that are made false by restricting their quantifiers to computable objects. Historically, they have been important for analysing the mathematical limitations of foundational programs such as constructivism or predicativism. For example, the least upper bound principle is recursively false, and thus unprovable by constructivists. In this talk I will argue that while recursive counterexamples are valuable for analysing foundational positions from an external, set-theoretic point of view, the approach is limited in its applicability because the results themselves are not accessible to the foundational standpoints under consideration. This limitation can be overcome, to some extent, by internalising the recursive counterexamples within a theory acceptable to a proponent of a given foundation; this is, essentially, the method of reverse mathematics. I will examine to what extent the full import of reverse mathematical results can be appreciated from a given foundational standpoint, and propose an analysis based on an analogy with Brouwer’s weak and strong counterexamples. Finally, I will argue that, at least where the reverse mathematical analysis of foundations is concerned, the epistemic considerations above show that reverse mathematics should be carried out within a weak base theory such as RCA₀, rather than by studying ω-models from a set-theoretic perspective.
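The standard witness for the recursive failure of the least upper bound principle (added here for illustration, not part of the abstract) is a Specker sequence:

```latex
% Illustration: a Specker sequence.
% Let $K \subseteq \mathbb{N}$ be computably enumerable but not computable,
% and let $k_0, k_1, k_2, \dots$ be a computable enumeration of $K$ without
% repetitions. Define
\[
  x_n \;=\; \sum_{i \le n} 2^{-k_i - 1} .
\]
% Each $x_n$ is a rational computable from $n$, and the sequence is
% increasing and bounded above by $1$. Its least upper bound is
% $\sum_{k \in K} 2^{-k-1}$, whose binary expansion decides membership
% in $K$; hence the supremum is not a computable real.
```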

Laura Felline: The Measurement Problem in Interpretations of Quantum Theory as a Theory About Information

According to a time-honoured approach to the foundations of Quantum Theory, the oddities of this theory originate from the improper interpretation of its subject matter as concerning the properties and behaviour of material objects, be they corpuscles or waves, represented by the quantum state. Following this tradition, the advocates of information-based approaches claim that if quantum theory is interpreted as a theory about information, the conundrums of Quantum Theory are straightforwardly explained away.

In this talk I object to this claim, focusing in particular on the measurement problem. In order to expose the oversimplification at the basis of the above claim, I analyse some leading information-based interpretations of Quantum Theory and show that they fail precisely when confronted with scenarios where the measurement problem is relevant, and hence that they do not provide a genuine solution to the problem.

I conclude that interpreting Quantum Theory as a theory about information is not sufficient to explain away the measurement problem.

Samuel Fletcher: Agential Abstraction/Representation Theory

Recently, Horsman et al. (Proc R Soc Lond A 470:20140182, 2014) have proposed a new framework for understanding and evaluating claims about unconventional or non-standard computation. Among its attractive features, the theory implies a novel account of what it means for a physical system to implement a computation. In this paper, I build on and modify their work to outline Agential Abstraction/Representation Theory (AART), an account of physical computation that makes the concepts of intelligent agency and epistemic community central: what counts as a computation is relative to the cognitive faculties of an agent -- especially the capacity to represent an abstract problem, i.e., the values of certain mathematical functions -- and to the epistemic community to which the agent belongs. After explaining the role of representation, agents, and epistemic community for computation in AART, I also show how AART explains the successful modeling of non-computers by computers as useful fictions.

Stephan Hartmann: Reasoning and Argumentation in Science: A Computational Perspective

Reasoning and argumentation play an important role in the practice of science. In this talk, I will identify a number of new types of reasoning and argumentation (such as the No Alternatives Argument) that are used in science and show how they can be assessed in a normative framework. This will help us to better understand which types of reasoning and argumentation are successful, and which need to be improved or discarded. As scientific reasoning and argumentation crucially involve uncertainties, a Bayesian (or probabilistic) approach suggests itself. I will present the Bayesian framework and focus on its normative foundations and computational aspects.
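As a minimal illustration of the underlying machinery (not from the talk), the normative core of such assessments is Bayesian updating: evidence E confirms a hypothesis H exactly when E is more likely under H than under its negation.

```python
# Minimal illustration (not from the talk): Bayesian updating, the normative
# core of probabilistic models of scientific reasoning and argumentation.
def posterior(p_h: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """P(H | E) by Bayes' theorem, from the prior and the two likelihoods."""
    p_e = p_e_given_h * p_h + p_e_given_not_h * (1.0 - p_h)
    return p_e_given_h * p_h / p_e

# E confirms H iff P(E|H) > P(E|~H): the posterior then exceeds the prior.
print(posterior(p_h=0.3, p_e_given_h=0.8, p_e_given_not_h=0.2))  # ~0.63 > 0.3
```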

Corey Maley: Analog Computation and Representation

The received view of analog computation--that it is computation involving continuous quantities--is false, as the history of analog computers shows. By drawing on the account of analog representation defended in Maley (2011), I show that analog computers are simply those computing mechanisms that operate on analog representations, where "analog" means that the property of the representation that does the representing covaries monotonically with what it represents, regardless of whether that variation is continuous or discrete. For example, the second hand of a clock is an analog representation of time, because the angle increases monotonically as the amount of time increases; this is true whether the hand sweeps continuously or ticks in discrete steps. This account has implications for other issues, including whether natural systems (such as the brain) literally compute, and how to create an account of computation in general that includes analog, digital, and perhaps other types of computation.
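A minimal sketch of the clock example (an illustration, not from the abstract): both hands below count as analog representations in this sense, since in each case the angle covaries monotonically with elapsed time.

```python
# Minimal illustration (not from the abstract): a clock's second hand as an
# analog representation of time. What matters on this account is monotonic
# covariation of angle with time, not continuity.
def sweeping_angle(seconds: float) -> float:
    """Continuously sweeping hand: angle in degrees within one minute."""
    return (seconds % 60.0) * 6.0

def ticking_angle(seconds: float) -> float:
    """Discretely ticking hand: same monotonic covariation, in whole-second steps."""
    return float(int(seconds) % 60) * 6.0

# Both are 'analog' in Maley's sense: more elapsed time, larger angle
# (within a cycle), whether the variation is continuous or discrete.
print(sweeping_angle(12.5), ticking_angle(12.5))  # 75.0 72.0
```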

Robert Moir: Modeling Scientific Inference with Effective Logic

This paper considers the following problem: what is the best way to accurately model the structure of scientific inference in practice? Although formal logic might seem to provide the answer to this problem, I argue otherwise. Among other reasons, scientists are not provided with a set of inference rules to solve their problems, and restricting to deductive inference would make much of science impossible in practice. Using a range of illustrative examples, I describe an inferential modeling approach, presented as an informal generalized logic called effective logic, based on how scientists actually reason in mathematics, physics and computer science. I show how effective logic can accurately and directly represent the structure of reliable and efficient inference, accounting for methods used across a wide range of science. I conclude by showing how this approach can contribute to our understanding of explanation by adapting the Deductive-Nomological model of explanation to effective logic.

Markus Mueller: Law Without Law — From Observer States to Physics Via Algorithmic Information Theory

According to our current conception of physics, any valid physical theory is assumed to describe the objective evolution of a unique external world. However, this assumption is challenged by quantum theory, which suggests that physical systems should not always be understood as having objective properties which are simply revealed by measurement. Furthermore, as I will argue, several other conceptual puzzles in the foundations of physics and related fields point to possible limitations of our current perspective and motivate the exploration of alternatives.

In this talk, I will describe a blueprint of such an alternative approach, formulated in terms of algorithmic information theory. The approach starts with a (suitably formalized) concept of the “state of an observer” as its primary notion, and leads to an emergent notion of external world that is in many respects consistent with our observations. I will argue that such approaches have the potential to resolve several of the aforementioned puzzles and paradoxes, and they predict the appearance of a simple, probabilistic external world that satisfies some physical version of the Church-Turing thesis. I will also describe some more surprising and fun consequences of such views, including a notion of “probabilistic zombies” and unexpected consequences for the computer simulation of agents. Finally, if time permits, I will mention work with Michael Cuffaro which relates such views to philosophical positions like e.g. Carnap’s “Aufbau”.

Wayne Myrvold: What, if Anything, Could it Mean to Say That the Laws of Physics are not Computable?

Do problems whose solutions are not effectively calculable arise naturally in physics? In this talk, I will not answer this question but address how to pose it. Taking my cue from the work of Philippos Papayannopoulos, who argues that representation is fundamental to computation, I will argue that addressing the computability of a problem depends crucially on how the problem is represented. I will survey some options, and argue that representations of physical problems that are most relevant to physics lead naturally to the notion of Type II computability. I will discuss the physical relevance of this.
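A minimal sketch of the Type II setting (an illustration, not from the talk): a real number is represented by a function producing arbitrarily good rational approximations, and a computable operation must meet any requested output precision after querying its inputs at finitely many precisions.

```python
# Minimal illustration (not from the talk) of Type II computability: a real
# is represented as an approximation function, x(n) within 2^-n of the real.
from fractions import Fraction
from typing import Callable

Real = Callable[[int], Fraction]  # representation: n -> rational within 2^-n

def sqrt2() -> Real:
    """The real sqrt(2), given by bisection to any requested precision."""
    def approx(n: int) -> Fraction:
        lo, hi = Fraction(1), Fraction(2)
        while hi - lo > Fraction(1, 2 ** n):
            mid = (lo + hi) / 2
            lo, hi = (mid, hi) if mid * mid < 2 else (lo, mid)
        return lo
    return approx

def add(x: Real, y: Real) -> Real:
    """Addition is Type II computable: query each input one bit more precisely."""
    return lambda n: x(n + 1) + y(n + 1)

print(float(add(sqrt2(), sqrt2())(20)))  # ~2.8284271..., i.e. 2*sqrt(2)
```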

Allan Olley: Wallace J. Eckert on Scientific Machine Computation and the Machinery of Science

Wallace Eckert was an early pioneer in the use of punch card computing machinery to carry out scientific computation in the 1930s and 40s. Eckert was also at the forefront of applying large scale electronic calculating machines to scientific problems, being hired by IBM to head their new department of Pure Science in 1945. Like many others, Eckert saw analogies between the new large calculating machines and existing devices like telescopes. However, he made analogies not just between the telescope and computer as ways of increasing what human beings can do, but also in terms of the need to organize access to a limited resource, as with observation time on a large telescope. I will relate Eckert's ideas to other ideas he had about the organization of science, broader concepts of pure and applied science then current, and continuing debates about the role of values in science.

Philippos Papayannopoulos: Computing by Representing in Science and Mathematics: Where Shannon Meets Turing

We examine computing and its role in science and mathematics from both a historical and theoretical perspective. After reviewing some paradigmatic cases from history and practice, we provide a unified framework within which we are able to account for what makes them all instances of “computing”, as well as what makes some of them “analog” and others “digital”. We argue that, viewed in a historical context, computation is better conceptualized as an epistemic process relative to agents, wherein (scientific) representation plays a key role, and we propose an account of the distinction between analog and digital computation, based on the semantic and syntactic properties of the involved representations. We finally discuss how the developed framework is able to account for properties of different models of computation (from mechanical and electronic differential analyzers to modern PCs). We conclude by conceptually comparing analog computational modelling to analogue (scale) modelling.

Gualtiero Piccinini: Computational Mind in a Computational World

Understanding the place of computation in the physical universe requires understanding the relation between computation and the mind. Does the nervous system perform computations? Does cognition involve computation? Does consciousness have a computational nature? These are difficult and controversial topics. We will build on recent work in this area to push the debate forward. We will argue that the nervous system performs computations and cognition involves computation. However, consciousness is unlikely to have a wholly computational nature.

Paula Quinon: Deviant Encodings and What "Computing" Means

The core of the problem discussed in this paper is the following: the Church-Turing Thesis states that Turing Machines formally explicate the intuitive concept of computability. The description of Turing Machines requires a description of the notation used for the input and for the output. The notation used by Turing in the original account, as well as the notations used in contemporary handbooks of computability, all belong to the best-known and most widespread notations, such as standard Arabic notation for natural numbers, binary encoding of natural numbers, or stroke notation. The choice is arbitrary and left unjustified. In fact, providing such a justification, and providing a general definition of the notations that are acceptable for the process of computation, causes problems. This is so because the comprehensive definition states that such a notation or encoding has to be computable. Yet using the concept of computability in the definition of a notation, which is then used in the definition of the concept of computability, yields an obvious vicious circle.
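A minimal sketch of a deviant encoding (an illustration, not from the paper): a notation for the natural numbers whose numerals smuggle in non-computable information, making the halting problem trivially "decidable" relative to that notation. The oracle below is hypothetical and named only for this sketch.

```python
# Minimal illustration (not from the paper): a deviant encoding. The oracle
# below is hypothetical -- no algorithm computes it, which is precisely what
# makes the resulting notation deviant rather than admissible.

def halts(n: int) -> bool:
    """Hypothetical oracle: does machine n halt on input n? Not computable."""
    raise NotImplementedError("no algorithm can implement this")

def deviant_numeral(n: int) -> str:
    """The numeral for n carries an extra bit answering the halting problem."""
    return f"{n}:{1 if halts(n) else 0}"

def halting_decider(numeral: str) -> bool:
    # Relative to the deviant notation, 'deciding' halting is trivial: all
    # the non-computable work was done by the encoding, not by this algorithm.
    return numeral.endswith(":1")
```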

This argument appears in discussions about what constitutes an adequate or correct conceptual analysis of the concept of computability. Its exact form depends on the underlying picture of mathematics that an author is working with. After presenting several contexts in which deviant encodings are problematized explicitly, I focus on philosophical examples where the phenomenon appears implicitly, in some “disguised” version, for instance in the analysis of the concept of natural number. In parallel, I develop the idea that Carnapian explications provide a much more adequate framework for understanding the concept of computation than classical philosophical analysis.

Tomasz Steifer and Dariusz Kalociński: What Computable Prediction Can(not) Do?

We consider an effective notion of prediction for deterministic (individual sequences) and indeterministic (stochastic processes) chains of binary events. Motivated by practical and cognitive aspects of prediction, we define a predictor as a total computable (rather than arbitrary) function from binary words to the set of bits. A predictor observes an initial realization of the process and tries to guess the next bit. A natural measure of prediction error is given by the ratio of incorrect guesses made throughout the realization of the process. We study the asymptotic behavior of prediction errors. In particular, we discuss sequences for which (a) all prediction errors are undefined, (b) all prediction errors converge to 1/2, and (c) no predictor is optimal. Finally, we discuss the connection between predictors and measure estimators. Some open problems are formulated.
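A minimal sketch of the setup (an illustration, not from the abstract): a total computable predictor together with the running ratio of its incorrect guesses along a realization.

```python
# Minimal illustration (not from the abstract): a total computable predictor
# maps each finite binary word to a guess for the next bit.
def majority_predictor(word: str) -> int:
    """Guess the bit seen more often so far (guess 1 on ties)."""
    return 1 if 2 * word.count("1") >= len(word) else 0

def error_rate(predictor, realization: str) -> float:
    """Ratio of incorrect guesses along an initial realization of the process."""
    errors = sum(predictor(realization[:i]) != int(realization[i])
                 for i in range(len(realization)))
    return errors / len(realization)

# Low error on a heavily biased sequence; the asymptotic behavior of this
# ratio is what the prediction errors above track.
print(error_rate(majority_predictor, "1111011101"))  # 0.2
```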

Tom Sterkenburg: Putnam’s Diagonal Argument

I explore some underappreciated offshoots of Putnam's famous 1963 argument against Carnap's inductive logic. The diagonal construction that is central to Putnam's argument can be understood as an algorithm that, given any learning algorithm (including any effectively computable Carnapian confirmation function), generates an "adversarial" outcome sequence that the given learning algorithm cannot learn.
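A minimal sketch of this diagonal construction (an illustration, not from the talk): given any predictor, output at each step the opposite of what the predictor guesses from the history so far, so that every single guess is wrong.

```python
# Minimal illustration (not from the talk): Putnam-style diagonalization
# against a fixed learning algorithm, here any computable next-bit predictor.
def adversarial_sequence(predictor, length: int) -> str:
    """Emit, at each step, the opposite of the predictor's guess."""
    seq = ""
    for _ in range(length):
        seq += "0" if predictor(seq) == 1 else "1"
    return seq

def majority_predictor(word: str) -> int:
    """An example learner: guess the bit seen more often so far (1 on ties)."""
    return 1 if 2 * word.count("1") >= len(word) else 0

# By construction, the predictor errs on every bit of this sequence.
print(adversarial_sequence(majority_predictor, 8))  # "01010101"
```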
First, I show how Putnam's argument thwarts the proposed definition of a universal prediction method from algorithmic information theory. Second, I discuss how Putnam's argument shows the limitations, not just of Carnapian or Bayesian confirmation functions, but of any learning algorithm that is fully specified from the start. This points to the need to dynamically incorporate newly formulated hypotheses during the learning process: specifically, the Bayesian problem of new theory.

Gašper Štukelj: On Analog Neural Computation

Responding to Piccinini and Bahar's (2013) distinction between neural, analog, and digital computation, some authors have argued that neural computation could be characterized as analog computation qua manipulation of analog representations. I focus on a recent attempt by Maley (2018) to argue more generally against any attempt at grounding the difference between analog (or neural) and digital computation in the respective kinds of representations. However, I show how a concept of analog computers can inform the discussion on neural computation. By combining the discussion with some common premises from the literature on neuromorphic engineering, computational neuroscience, and complexity analyses of artificial neural networks, I present a novel architectural, or mechanistic, characterization of the distinction between digital, analog, and neural computation.

Máté Szabo: Kreisel on Transfinite Progressions as Theories of Humanly-effective Computation

Church's Thesis can be understood as proposing a mechanical theory of humanly-effective computation, and thus as a hypothesis concerning the limitations of human mathematical reasoning. Kreisel argued against such interpretations, and considered transfinite progressions as alternative theories of humanly-effective reasoning that are possibly non-mechanistic.
As Kreisel held a quite peculiar view, one aim of this talk is to provide a careful reconstruction of his position, from his general philosophical approach to mathematics and his notion of so-called “genetic theories” to his understanding of Church’s Thesis and its philosophical consequences. His ahistorical misinterpretation of Church and Turing’s work will be pointed out as well. This will serve as a background for a critical analysis of Kreisel’s philosophical position on these issues.
