The view that machines cannot give rise to surprises is due, I believe, to a fallacy to which philosophers and mathematicians are particularly subject. This is the assumption that as soon as a fact is presented to a mind all consequences of that fact spring into the mind simultaneously with it. It is a very useful assumption under many circumstances, but one too easily forgets that it is false.

(Alan Turing)


It recently occurred to me that Turing’s analysis of the halting problem may be formulated as a fixed-point theorem. This formulation is motivated in large part by discussions with computational biologists concerning the question of whether Turing machines may be used to model dynamical systems. A common objection among members of this community is that dynamical systems with a finite energy budget always halt, unlike Turing machines.

However, a fixed-point formulation of Turing’s analysis of the halting problem demonstrates that the source of this asymmetry is not that dynamical systems cannot be simulated by computers; rather, there are fundamental limits to what Turing machines (and therefore humans) can know about dynamical systems. Moreover, this formulation has another interesting feature: it clearly shows that there is an intrinsic directionality in the information-processing behaviour of Turing machines.

Turing’s theorem:

Given \(f\) from the space of computable functions \(F\) and the metrisable space \(X\), if we define the dynamical system:

\begin{equation} \forall x_n \in X, x_{n+1} = f \circ x_n \end{equation}

then the existence of a general method \(H\) for deciding whether \(\Lambda \subset X\) contains all attractors of \(f\) implies the existence of the fixed-point:

\begin{equation} f \circ f^* = f^* \end{equation}

where \(f^* \in F\).

Unfortunately, it may be shown that there is no such general method \(H\) which is guaranteed to halt within a finite number of operations.


Let’s suppose we have a dynamical system given by:

\begin{equation} \forall x_n \in X, x_{n+1} = f \circ x_n \end{equation}

where \(X\) is assumed to be a metrisable space and the nth term \(x_n \in X\) is given by the iterated composition of functions:

\begin{equation} x_n = f^n \circ x_0 \end{equation}

\begin{equation} \forall n \in \mathbb{N}, f^{n+1} = f \circ f^n \end{equation}

then (4) defines a finite loop provided that \(n < \infty\).
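As a concrete illustration, the iterated composition in (4) is just a finite loop. A minimal Python sketch, taking \(f = \cos\) as a hypothetical example of a computable map (the function name `iterate` is my own choice):

```python
import math

def iterate(f, x0, n):
    """Return x_n = f^n(x_0) by explicit iteration: a finite loop for n < infinity."""
    x = x0
    for _ in range(n):
        x = f(x)
    return x

iterate(math.cos, 1.0, 10)  # x_10 for f = cos, x_0 = 1.0
```

For any finite \(n\) this terminates after exactly \(n\) function calls; the difficulty only appears in the limit \(n \to \infty\).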

Now, if the space of attractors \(\Lambda \subset X\) is given by:

\begin{equation} \Lambda = \{x_0 \in X: \forall \epsilon > 0 \exists N \in \mathbb{N} \forall n \geq N, \lVert f^n \circ x_0 - f^{n+1} \circ x_0 \rVert < \epsilon \} \end{equation}
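In practice, membership of this set can only be probed up to a finite horizon, since the defining condition quantifies over all sufficiently large \(n\). A minimal Python sketch (the function name `settled` and the cut-off `max_n` are my own choices, not part of the theorem):

```python
import math

def settled(f, x0, eps, max_n):
    """Finite proxy for the defining condition of Lambda: iterate max_n times
    and check whether |f^n(x0) - f^(n+1)(x0)| < eps at the horizon.
    This inspects finitely many iterates, so it is evidence, not proof."""
    x = x0
    for _ in range(max_n):
        x = f(x)
    return abs(f(x) - x) < eps

settled(math.cos, 1.0, 1e-6, 1000)           # True: the orbit of cos settles down
settled(lambda x: x + 1.0, 0.0, 1e-6, 1000)  # False: successive terms stay 1 apart
```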

we may distinguish finite loops from infinite loops provided there exists a general stopping criterion \(H\) that returns the truth value of the convergence statement:

\begin{equation} H \circ f = \text{Bool} \circ \{\forall \epsilon > 0 \forall x_0 \in \Lambda \exists n \in \mathbb{N}, \lVert f^{\infty} \circ x_0 - f^n \circ x_0 \rVert < \epsilon \} \end{equation}

Assuming the existence of \(H\), the question of whether \(\Lambda\) contains all the attractors of \(f\) is a decidable problem. Moreover, given \(H\) we may define:

\begin{equation} f^* = \lim_{n \to \infty} f^n \end{equation}

and using (5) this results in the fixed-point:

\begin{equation} f \circ f^* = f^* \end{equation}

\begin{equation} f^*: \Lambda \rightarrow \Lambda \end{equation}

where \(f,f^* \in F\).
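For a concrete instance of this fixed point, the iterates of \(f = \cos\) converge to the Dottie number \(d \approx 0.739085\), which satisfies \(\cos(d) = d\). A sketch that approximates the limit with a large but finite \(n\) (the true \(f^*\) needs infinitely many calls, so this is only an approximation):

```python
import math

def f_star(f, x0, n=1000):
    """Approximate f*(x0) = lim_{n -> inf} f^n(x0) by truncating at a large finite n."""
    x = x0
    for _ in range(n):
        x = f(x)
    return x

d = f_star(math.cos, 1.0)
abs(math.cos(d) - d)  # essentially zero: the fixed-point equation f o f* = f*
```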

However, the definition of \(H\) quantifies over every \(n \in \mathbb{N}\) and therefore requires an infinite number of function calls, so whichever truth value it computes, its evaluation never terminates:

\begin{equation} H \circ f = 0 \implies \text{infinite loop} \end{equation}

\begin{equation} H \circ f = 1 \implies \text{infinite loop} \end{equation}

and so we may conclude that there is no general method \(H\) that is guaranteed to halt within a finite number of operations.
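The same conclusion also follows from Turing's original diagonal argument, which can be sketched in a few lines of Python (the oracle `halts` is hypothetical; the sketch shows that no implementation of it can be correct on every input):

```python
def paradox(halts):
    """Given a claimed total decider halts(f, x) for 'does f(x) terminate?',
    build the diagonal program g and run g(g)."""
    def g(f):
        if halts(f, f):   # if f(f) would halt...
            while True:   # ...loop forever
                pass
        return None       # otherwise halt immediately
    return g(g)           # g(g) halts iff it does not halt: contradiction

# Any claimed implementation of halts gives the wrong answer on the pair (g, g);
# e.g. an oracle that always answers False makes g(g) halt, refuting its answer.
paradox(lambda f, x: False)
```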

By showing that such a function \(H\) does not exist, Turing showed that Turing machines are constrained by a principle of limited omniscience. Turing’s fixed-point theorem essentially points to the necessity of computer simulation for predicting the future behaviour of a dynamical system, whereas the Butterfly Effect places limits on what can be known from these simulations due to their high sensitivity to initial conditions.
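The logistic map \(x_{n+1} = 4 x_n (1 - x_n)\), a standard chaotic system discussed by Strogatz, makes this sensitivity concrete: a perturbation of the initial condition of order \(10^{-12}\) grows to order one within a hundred iterations. A minimal sketch (function names are my own):

```python
def logistic(x):
    return 4.0 * x * (1.0 - x)

def gaps(x0, delta, n):
    """Distances |a_k - b_k| between two trajectories started delta apart."""
    a, b = x0, x0 + delta
    out = []
    for _ in range(n):
        out.append(abs(a - b))
        a, b = logistic(a), logistic(b)
    return out

g = gaps(0.2, 1e-12, 100)
# g[0] is about 1e-12, while max(g) is of order one: the initial uncertainty
# is amplified until it is as large as the attractor itself.
```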


Turing’s fixed-point theorem demonstrates that the scientific method is fundamentally necessary. Experiments are essential in the natural sciences because, even if all of science concerned deterministic systems whose evolution was defined by computable functions, the behaviour of many of these systems would escape the analytical process of the most talented team of mathematicians, independently of the computational resources at their disposal.

Finally, I would like to add that Universal Turing machines are strictly more powerful than dynamical systems (which may be simulated by Turing machines) because they possess unbounded memory as well as a combinatorial language that is sufficiently expressive to simulate any dynamical system.


  1. Church, Alonzo (1936). “An Unsolvable Problem of Elementary Number Theory”. American Journal of Mathematics 58 (2): 345–363. doi:10.2307/2371045.
  2. Turing, Alan (1937). “On Computable Numbers, with an Application to the Entscheidungsproblem”. Proceedings of the London Mathematical Society, Series 2, 42: 230–265; with “A Correction”, ibid. 43 (1938): 544–546.
  3. Strogatz, Steven H. (2015). Nonlinear Dynamics and Chaos. 2nd edition. CRC Press.