> The view that machines cannot give rise to surprises is due, I believe, to a fallacy to which philosophers and mathematicians are particularly subject. This is the assumption that as soon as a fact is presented to a mind all consequences of that fact spring into the mind simultaneously with it. It is a very useful assumption under many circumstances, but one too easily forgets that it is false.
>
> Alan Turing

## Introduction:

It recently occurred to me that Turing’s analysis of the halting problem may be formulated as a fixed-point theorem. This formulation is motivated in large part by discussions with computational biologists concerning the question of whether Turing machines may be used to model dynamical systems. A common objection among members of this community is that dynamical systems with a finite energy budget always halt, unlike Turing machines.

However, a fixed-point formulation of Turing’s analysis of the halting problem demonstrates that the source of this asymmetry is not that dynamical systems cannot be simulated by computers; it is that there are fundamental limits to what Turing machines (and therefore humans) can know about dynamical systems. Moreover, this formulation has another interesting feature: it clearly shows that there is an intrinsic directionality in the information-processing behaviour of Turing machines.

## Turing’s theorem:

Suppose an infinite loop is defined by a sequence of composed functions $$f_{n+1} = f_n \circ f_{n-1}$$ which eventually stabilises, i.e.:

$$\forall n, f_{n+1} \equiv f_n \implies f = f \circ f$$
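The stabilisation condition above says that the limiting function is idempotent under composition, i.e. a fixed point of the composition operator. A quick Python sanity check, using the absolute-value function as a concrete idempotent example of my own choosing:

```python
def compose(f, g):
    """Return the composition f ∘ g."""
    return lambda x: f(g(x))

# abs is idempotent: applying it twice is the same as applying it once,
# so pointwise it satisfies f = f ∘ f.
f = abs
ff = compose(f, f)

assert all(ff(x) == f(x) for x in range(-5, 6))
```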

Let $$H$$ be a computable function which takes as input a function $$f$$ from the space of computable functions $$F$$ and always returns 0 (i.e. $$f$$ halts) or 1 (i.e. $$f$$ enters an infinite loop):

$$H: F \rightarrow \{0,1\}$$

Then there exists a function $$P \in F$$ such that $$P \circ P$$ halts if and only if $$H(P) = 1$$, i.e. if and only if $$H$$ reports that $$P$$ enters an infinite loop. In consequence, $$P$$ has itself as a fixed point if and only if $$P$$ defines an infinite loop.

## Proof:

Let’s consider the function:

$$\forall x \in F, G \circ x = \text{Bool} \circ \{H(x) = 0\}$$

Given that $$H(P) \in \{0,1\}$$, we have two cases:

$$H(P) = 0 \implies G \circ P = 1$$

$$H(P) = 1 \implies G \circ P = 0$$

However, in boolean arithmetic both cases reduce to:

$$H(P) + G \circ P = 1$$

so $$P$$ enters an infinite loop either way; or, to be mathematically precise, $$P$$ is a fixed point of the computable function $$P$$.
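The contradiction in the proof is the classic diagonal argument, and it can be sketched in a few lines of Python. The helper names (`make_contrarian`, the `"halts"`/`"loops"` markers) are mine, and the actual infinite loop is replaced by a marker value so that the sketch stays runnable:

```python
def make_contrarian(halts):
    """Build the diagonal program P from a claimed total halting oracle.

    halts(prog, arg) is supposed to return True iff prog(arg) halts.
    P does the opposite of whatever the oracle predicts about P itself.
    """
    def P(p):
        if halts(p, p):
            return "loops"  # stands in for: while True: pass
        return "halts"
    return P

# Whatever a claimed oracle answers about P run on itself, it is wrong:
for claimed in (True, False):
    oracle = lambda prog, arg, c=claimed: c
    P = make_contrarian(oracle)
    actual_behaviour = P(P)
    assert (actual_behaviour == "halts") != claimed
```

Since every candidate oracle is refuted by the program built from it, no total halting function exists.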

## Turing’s fixed point theorem from a dynamical systems perspective:

From a dynamical systems perspective, the stopping criterion $$H$$ is equivalent to stating that for any deterministic transformation $$f$$ operating on a state-space $$X$$:

$$\forall x_n \in X, x_{n+1} = f \circ x_n$$

we may decompose the state-space into $$X = \Lambda \cup (X \setminus \Lambda)$$ such that the question of whether $$\Lambda$$ contains all the attractors of $$f$$ is a decidable problem.
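To make the decomposition concrete, here is a toy sketch (map, names, and constants are all mine): a contraction on the reals whose only attractor is the origin, together with a bounded-step check of whether an orbit enters a candidate region $$\Lambda$$. The step budget is exactly where decidability sneaks in; without it the check might never return:

```python
def f(x):
    """A contraction on the reals; its only attractor is {0}."""
    return 0.5 * x

def orbit_enters(x, in_lambda, max_steps=100):
    """Return True if the orbit of x enters the region within max_steps.

    Without the step bound this would only be a semi-decision procedure.
    """
    for _ in range(max_steps):
        if in_lambda(x):
            return True
        x = f(x)
    return False

# Candidate region Lambda: a small neighbourhood of the attractor {0}.
near_zero = lambda x: abs(x) < 1e-6

assert orbit_enters(1.0, near_zero)
```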

Given the space of computable functions $$F$$, this decidable problem may be formulated in terms of the halting function $$H$$ so we have:

$$G \circ x = \text{Bool} \circ \{H(x)=0\}$$

such that $$H$$ is defined $$\forall f \in F$$, i.e. $$H$$ is a total function. However, as in the proof above, we have:

$$H(f) + G \circ f = 1$$

By showing that such a function $$H$$ does not exist, Turing showed that Turing machines are constrained by a principle of limited omniscience. Turing’s fixed-point theorem essentially points to the necessity of computer simulation for predicting the future behaviour of a dynamical system, whereas the Butterfly Effect places limits on what can be known from these simulations due to their high sensitivity to initial conditions.
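The sensitivity half of that statement is easy to exhibit numerically. The minimal sketch below (my choice of map and constants) iterates the logistic map at $$r = 4$$, a standard chaotic regime, from two initial conditions that differ by $$10^{-10}$$, and records how far the orbits drift apart:

```python
def logistic(x, r=4.0):
    """One step of the logistic map; r = 4 is a chaotic regime."""
    return r * x * (1.0 - x)

a, b = 0.2, 0.2 + 1e-10  # two nearly identical initial conditions
max_gap = 0.0
for _ in range(60):
    a, b = logistic(a), logistic(b)
    max_gap = max(max_gap, abs(a - b))

# The initial 1e-10 discrepancy is amplified by many orders of magnitude,
# so the simulation's long-run forecast is only as good as its input data.
assert max_gap > 1e-6
```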

## Discussion:

Turing’s fixed-point theorem demonstrates that the scientific method is fundamentally necessary. The reason experiments are essential in the natural sciences is that, even if all of science concerned deterministic systems whose evolution was defined by computable functions, Turing’s fixed-point theorem shows that the behaviour of many of these systems would escape the analytical process of the most talented team of mathematicians, regardless of the computational resources at their disposal.

Finally, I would like to add that Universal Turing machines are strictly more powerful than dynamical systems (which may be simulated by Turing machines), because they possess unbounded memory as well as a combinatorial language that is sufficiently expressive to simulate any dynamical system.
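As a minimal illustration of that claim, a Turing machine can be simulated in a few lines: the tape below is a Python dict, so memory is unbounded in principle, and the step budget in the runner is an honest acknowledgement of the halting problem. The machine, its transition table, and all names are my own toy example:

```python
def run_tm(table, tape_input, max_steps=1000):
    """Run a one-tape Turing machine; '_' is the blank symbol."""
    tape = {i: s for i, s in enumerate(tape_input)}
    state, head = "start", 0
    for _ in range(max_steps):
        if state == "halt":
            cells = range(min(tape), max(tape) + 1)
            return "".join(tape.get(i, "_") for i in cells).strip("_")
        symbol = tape.get(head, "_")
        write, move, state = table[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return None  # step budget exhausted: behaviour undetermined

# A machine that flips every bit, halting at the first blank cell:
flipper = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

assert run_tm(flipper, "0110") == "1001"
```

Note that when the step budget runs out the runner returns `None` rather than an answer; deciding whether more steps would have helped is precisely the undecidable question discussed above.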

## References:

1. Church, Alonzo (1936). “An Unsolvable Problem of Elementary Number Theory”. American Journal of Mathematics 58 (2): 345–363. doi:10.2307/2371045. JSTOR 2371045.
2. Turing, Alan (1938). “On Computable Numbers, with an Application to the Entscheidungsproblem. A Correction”. Proceedings of the London Mathematical Society, Series 2, 43: 544–546.
3. Strogatz, Steven H. (2015). Nonlinear Dynamics and Chaos. 2nd edition. CRC Press.