Motivation:

According to Feynman, the double-slit experiment contains the central mystery of Quantum Mechanics [1]. However, in spite of numerous attempts, the quantum randomness observed in the double-slit experiment has remained a mystery to this day. That said, a potentially effective approach, originally proposed by John Wheeler, is to use information theory to reformulate Quantum Mechanics [2].

An information-theoretic perspective requires interpreting light as a signal that carries information, not as a material particle or wave. Given the cosmological properties of light, this signal would operate within an event horizon. This interpretation is consistent with other clues, outlined below.

We do not live in an observer-independent universe: any cosmological model must account for the existence of billions of observers. Moreover, the scientific method itself is not observer-independent.

Reflections on the nature of light:

If we are to interpret light as a signal, I believe cosmological natural selection is a useful meta-framework.

In particular, if we assume that conscious life ultimately allows an exponential increase in cosmic progeny via technological singularities, we may make two remarks:

  1. A constant, finite velocity of signal propagation allows decentralised coordination of cosmological morphogenesis.

  2. Unbounded signal propagation, on the other hand, would significantly diminish the potential diversity of cosmic forms, as it would allow global coordination of cosmic events.

The implicit assumption here is that electromagnetic waves are a crucial means of communication for technologically advanced civilisations.

While the theory of cosmological natural selection is not inconsistent with the anthropic principle, it allows cosmologists to derive the principle from familiar Darwinian arguments. But how do we account for the quantum randomness observed in the double-slit experiment?

The observer-dependence theorems, or observer-dependence in the scientific method:

In order to make sense of the quantum randomness observed in the double-slit experiment, it is important to note that the scientific method is an algorithmic method that may be implemented by a Turing Machine, and that scientists ultimately discover the relations between things and not the things themselves. Moreover, these relations are exactly those which lie within the Turing limit. These insights motivate the use of algorithmic information theory to understand the epistemic limits of the scientific method.

Given the family of probabilistic models \(P_M\), the Minimum Description Length of a dataset \(X\) of \(N\) samples from a discrete probability distribution \(P_X\) relative to the optimal model \(\Omega \in P_M\) is given by the First Observer Dependence Theorem:

\begin{equation} \mathbb{E}[K(X)] = H(X|\Omega) + H(\Omega) \end{equation}

where \(H(\Omega)\) is the inextricable epistemic uncertainty of \(\Omega\) concerning its own operation and \(H(X|\Omega)\) is the inextricable epistemic uncertainty of \(\Omega\) relative to \(P_X\).

In (1) I used the fact that \(\Omega\) is a probabilistic program, so it makes sense to compute the expected Kolmogorov Complexity, as well as the fact that the expected Kolmogorov Complexity of a random variable equals the Minimum Description Length of that variable [1]. I also implicitly assumed that ergodic assumptions are satisfied, which is the case in the regime of repeatable scientific experiments.
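
To make the two-part decomposition in (1) concrete, here is a minimal numerical sketch in Python, assuming a Bernoulli source and the standard two-part MDL convention in which the model parameter is encoded to precision \(1/\sqrt{N}\); the function names and the example bias are illustrative assumptions rather than part of the theorem.

```python
import math
import random

def data_code_length(x, p):
    """The H(X|Omega) term: bits needed to encode the sample x
    given the fitted model, i.e. -sum_i log2 P(x_i | p)."""
    return sum(-math.log2(p if xi == 1 else 1.0 - p) for xi in x)

def model_code_length(n):
    """The H(Omega) term: bits needed to encode the model itself.
    Encoding the parameter to precision 1/sqrt(n) costs ~0.5 * log2(n) bits."""
    return 0.5 * math.log2(n)

def two_part_mdl(x):
    """Total description length of x: model cost plus data cost given the model."""
    n = len(x)
    p_hat = min(max(sum(x) / n, 1e-9), 1.0 - 1e-9)  # clamped maximum-likelihood estimate
    return model_code_length(n) + data_code_length(x, p_hat)

random.seed(0)
sample = [1 if random.random() < 0.7 else 0 for _ in range(1000)]
# Roughly 1000 * H(0.7) + 0.5 * log2(1000) ~ 886 bits
print(two_part_mdl(sample))
```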

If \(\Omega\) is identified with what physicists call an observer, i.e. a system that collects data \(X\) and tests hypotheses concerning the probabilistic structure of \(X\) in order to discover \(P_X\), then we find that, relative to this observer, incompressibility (in terms of memory requirements), algorithmic randomness and incompleteness (as defined) are all equivalent to \(\mathbb{E}[K(X)]\). We also find that the observer's inextricable epistemic uncertainty concerning its own operation bounds the Minimum Description Length from below:

\begin{equation} \mathbb{E}[K(X)] \geq H(\Omega) \end{equation}
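
For completeness, (2) follows from (1) in a single step, since the conditional entropy \(H(X|\Omega)\) is non-negative:

\[ \mathbb{E}[K(X)] = H(X|\Omega) + H(\Omega) \geq H(\Omega) \]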

This suggests that perceived randomness is not observer-independent. More generally, it suggests that the scientific method itself is not observer-independent, and it provides an information-theoretic formulation of Planck's claim:

"Science can't solve the ultimate mystery of nature. And that is because, in the last analysis, we ourselves are a part of the mystery that we are trying to solve." (Planck)

Given the observer-dependence theorems and the fact that the most fundamental physical observations are expressions of fundamental relations between humans and their environment, we are now ready to analyse the double-slit experiment.

Observer-dependence in the double-slit experiment:

Assuming that light is a signal that transmits information to a detector, we may interpret the double-slit experiment as a communication channel where the detector corresponds to an observer. This is consistent with John Wheeler's proposal that every quantum observation corresponds to a bit of information [2]. Furthermore, if we consider that each bit of information corresponds to a yes/no question, what are the precise questions asked in the double-slit experiment?

  1. With a photo-sensitive screen, the question is ‘Where did the quantum object land?’

  2. On the other hand, if you have a detector behind each slit, the question is ‘Which slit did the quantum object go through?’
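
To compare the information content of these two questions, here is a minimal sketch that computes the Shannon entropy of each answer; the idealised fringe pattern and the 64-bin discretisation of the screen are illustrative assumptions.

```python
import math

def shannon_entropy(p):
    """Shannon entropy, in bits, of a discrete probability distribution."""
    return -sum(q * math.log2(q) for q in p if q > 0)

# Question 2: 'Which slit did the quantum object go through?'
# Two equally likely outcomes, hence exactly one bit of information.
which_slit = [0.5, 0.5]

# Question 1: 'Where did the quantum object land?'
# An idealised two-slit fringe pattern, I(x) ~ cos^2, discretised into 64 bins.
bins = 64
intensity = [math.cos(4.0 * math.pi * (k / bins - 0.5)) ** 2 for k in range(bins)]
total = sum(intensity)
screen = [v / total for v in intensity]

print(shannon_entropy(which_slit))  # 1.0 bit
print(shannon_entropy(screen))      # several bits: a far richer multimodal distribution
```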

It is worth noting that the second question yields a bimodal distribution, as it admits only two possibilities, whereas the first question implies a much more complex multimodal distribution, as the sketch above illustrates. However, one more mystery remains. If the double-slit experiment is observer-dependent and there is an ensemble of observers, how is it that all these observations are consistent?

This must be because the entire universe is identifiable with a single wave function. As for how a particular branch of the multiverse is chosen at any instant, that is a mystery that leaves something to the imagination. In fact, this epistemological limit is precisely the origin of quantum randomness in the double-slit experiment.

Given the observer-dependence theorems, there is good reason to believe that we shall never know, unless we figure out how to engineer computers that go beyond the Turing limit.

Discussion:

While this article argues that Cosmological Natural Selection and the observer-dependence theorems address the central mystery of quantum mechanics, I would like to add that these two theories are related via the Physical Church-Turing thesis [3]. One important motivation for exploring this connection is that it provides the scientific basis for speculations that we may be living in a computer simulation run by a civilisation that mastered the principles of black-hole engineering. On this front, there are two complementary approaches.

One approach involves analysing the physical constraints on black-hole computers. Another interesting approach involves considering limit-computable mathematical objects, such as prime formulas that are not computable by Turing Machines.

References:

  1. R. P. Feynman. The Feynman Lectures on Physics. 1963.
  2. J. A. Wheeler. Information, Physics, Quantum: The Search for Links. 1989.
  3. Aidan Rocke. Understanding the Physical Church-Turing thesis and its implications. CSTheory StackExchange (version: 2021-02-22): https://cstheory.stackexchange.com/q/48450
  4. L. Smolin. Did the Universe Evolve? Class. Quantum Grav. 9 (1992) 173–191.
  5. J. M. Shainline. Does cosmological natural selection select for technology? Institute of Physics, 2020.