Observer-dependence in the double-slit experiment
Motivation:
According to Feynman, the double-slit experiment contains the central mystery of Quantum Mechanics. However, in spite of numerous attempts, the quantum randomness observed in the double-slit experiment has remained a mystery to this day. That said, a potentially effective approach, originally proposed by John Wheeler, consists in using information theory to reformulate Quantum Mechanics.
An information-theoretic perspective requires interpreting light as a signal that carries information, and not as a material particle or wave. Given the cosmological properties of light, this signal would operate within an event horizon. This interpretation is consistent with other clues.
We do not live in an observer-independent universe, as any cosmological model must account for the existence of billions of observers. Moreover, the scientific method itself is not observer-independent.
Reflections on the nature of light:
If we are to interpret light as a signal, I believe cosmological natural selection is a useful meta-framework.
In particular, if we assume that conscious life ultimately allows an exponential increase in cosmic progeny via technological singularities, we may make two remarks. A constant velocity of signal propagation allows decentralised coordination of cosmological morphogenesis. Unbounded signal propagation, on the other hand, would significantly diminish the potential diversity of cosmic forms, as it would allow global coordination of cosmic events. The implicit assumption here is that electromagnetic waves are a crucial means of communication for technologically advanced civilisations.
While the theory of cosmological natural selection is not inconsistent with the anthropic principle, it allows cosmologists to derive the principle from familiar Darwinian arguments. But how do we account for the quantum randomness observed in the double-slit experiment?
The observer-dependence theorems, or observer-dependence in the scientific method:
In order to make sense of the quantum randomness observed in the double-slit experiment, it is important to note that the scientific method is an algorithmic method that may be implemented by a Turing Machine, and that scientists ultimately discover the relations between things and not the things themselves. Moreover, these relations are exactly those which lie within the Turing limit. These insights motivate the use of algorithmic information theory to understand the epistemic limits of the scientific method.
Given the family of probabilistic models \(P_M\), the Minimum Description Length of a dataset \(X\) of \(N\) samples from a discrete probability distribution \(P_X\), relative to the optimal model \(\Omega \in P_M\), is given by the First Observer-Dependence Theorem:
\begin{equation} \mathbb{E}[K(X)] = H(X \mid \Omega) + H(\Omega) \end{equation}
where \(H(\Omega)\) is the inextricable epistemic uncertainty of \(\Omega\) concerning its own operation and \(H(X \mid \Omega)\) is the inextricable epistemic uncertainty of \(\Omega\) relative to \(P_X\).
In (1) I used the fact that \(\Omega\) is a probabilistic program so it makes sense to compute the expected Kolmogorov Complexity, as well as the fact that the expected Kolmogorov Complexity of a random variable equals the Minimum Description Length of that variable [1]. I also implicitly assumed that ergodic assumptions are satisfied, which is the case in the regime of repeatable scientific experiments.
If \(\Omega\) is identified with what physicists call an observer, i.e. a system that collects data \(X\) and tests hypotheses concerning the probabilistic structure of \(X\) in order to discover \(P_X\), then we find that, relative to this observer, incompressibility (in terms of memory requirements), algorithmic randomness and incompleteness (as defined) are all equivalent to \(\mathbb{E}[K(X)]\). We also find that \(\mathbb{E}[K(X)]\) is bounded below by the observer's epistemic uncertainty concerning its own operation:
\begin{equation} \mathbb{E}[K(X)] \geq H(\Omega) \end{equation}
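The decomposition in (1) and the bound in (2) can be checked numerically in a toy setting. The sketch below is my own illustrative construction, not part of the theorems: it treats \(\Omega\) as a model drawn from an assumed two-model prior (a biased coin and a fair coin) and verifies the chain rule \(H(X, \Omega) = H(X \mid \Omega) + H(\Omega) \geq H(\Omega)\), which is the entropy identity underlying (1) and (2).

```python
import numpy as np

# Toy check (my own illustrative construction) of the chain rule behind
# equation (1): H(X, Omega) = H(X | Omega) + H(Omega), and of the bound
# in equation (2): the total uncertainty is at least H(Omega).
# Omega: one of two assumed models; X: a single coin flip from that model.

p_omega = np.array([0.5, 0.5])            # prior over models Omega (assumed)
p_x_given_omega = np.array([[0.9, 0.1],   # model 0: biased coin (assumed)
                            [0.5, 0.5]])  # model 1: fair coin

def entropy(p):
    """Shannon entropy in bits of a probability vector."""
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

H_omega = entropy(p_omega)  # H(Omega) = 1 bit for a uniform prior over 2 models
H_x_given_omega = float(np.sum(
    p_omega * np.array([entropy(row) for row in p_x_given_omega])))

# Joint distribution over (Omega, X) and its entropy
p_joint = p_omega[:, None] * p_x_given_omega
H_joint = entropy(p_joint.ravel())

# Chain rule: H(X, Omega) = H(X | Omega) + H(Omega)
assert np.isclose(H_joint, H_x_given_omega + H_omega)
# Bound (2): the total uncertainty is at least the observer's self-uncertainty
assert H_joint >= H_omega
print(H_omega, H_x_given_omega, H_joint)
```

The key point the numbers make concrete is that \(H(\Omega)\) is a floor: no matter how simple \(P_X\) is, the observer's uncertainty about its own operation contributes to the total description length.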
This suggests that perceived randomness is not observer-independent. More generally, it suggests that the scientific method is not observer-independent, and it provides an information-theoretic formulation of Planck's claim that:
"Science cannot solve the ultimate mystery of nature. And that is because, in the last analysis, we ourselves are a part of the mystery that we are trying to solve." (Planck)
Given the observer-dependence theorems, and the fact that the most fundamental physical observations are expressions of fundamental relations between humans and their environment, we are now ready to analyse the double-slit experiment.
Observer-dependence in the double-slit experiment:
Assuming that light is a signal that transmits information to a detector, we may interpret the double-slit experiment as a communication channel, where the detector corresponds to an observer. This is consistent with John Wheeler's proposal that every quantum observation corresponds to a bit of information. Furthermore, if we consider that each bit of information corresponds to a yes/no question, what are the precise questions asked in the double-slit experiment?

With a photosensitive screen, the question is ‘Where did the quantum object land?’

On the other hand, if you have a detector behind each slit, the question is 'Which slit did the quantum object go through?'
It is worth noting that the second question yields a bimodal distribution, as it admits only two possible answers, whereas the first question implies a much more complex multimodal distribution. However, there remains one more mystery. If the double-slit experiment is observer-dependent and there is an ensemble of observers, how is it that all these observations are consistent?
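The contrast between the two questions can be made concrete with a minimal numerical sketch. All parameters below (slit separation, wavelength, screen distance, detector widths) are my own illustrative assumptions, and the envelope is schematic: the screen question yields a multimodal fringe distribution, while the which-path question yields a bimodal one.

```python
import numpy as np

# Toy numerical sketch (my own construction; all parameters are assumed).
# It contrasts the distributions behind the two questions:
#   screen question     -> multimodal interference fringes
#   which-path question -> bimodal distribution over the two slits

d = 1e-3        # slit separation (m), assumed
lam = 500e-9    # wavelength (m), assumed
L = 1.0         # slit-to-screen distance (m), assumed
x = np.linspace(-5e-3, 5e-3, 2001)  # detector positions on the screen

# Screen question: two-slit interference with a schematic Gaussian envelope
envelope = np.exp(-(x / 2e-3) ** 2)
p_screen = envelope * np.cos(np.pi * d * x / (lam * L)) ** 2
p_screen /= p_screen.sum()

# Which-path question: a detector behind each slit sees one of two peaks
p_path = (np.exp(-((x - d / 2) / 3e-4) ** 2)
          + np.exp(-((x + d / 2) / 3e-4) ** 2))
p_path /= p_path.sum()

def n_modes(p):
    """Count strict interior local maxima of a sampled distribution."""
    return int(np.sum((p[1:-1] > p[:-2]) & (p[1:-1] > p[2:])))

print("screen modes:", n_modes(p_screen))  # many fringes
print("path modes:", n_modes(p_path))      # exactly two
```

In information-theoretic terms, the which-path question extracts at most one bit per detection, whereas the screen question distributes probability mass over many fringes and so carries more structure per observation.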
This must be because the entire universe is identifiable with a single wave function. As for how a particular branch of the multiverse is chosen at any instant, that is a mystery that leaves something to the imagination. In fact, this epistemological limit is precisely the origin of quantum randomness in the double-slit experiment.
Given the observerdependence theorems, there is good reason to believe that we shall never know. Not unless we figure out how to engineer computers that go beyond the Turing limit.
Discussion:
While this article argues that Cosmological Natural Selection and the observer-dependence theorems address the central mystery of quantum mechanics, I would like to add that these two theories are related via the Physical Church-Turing thesis. One important motivation for exploring this connection is that it is the scientific basis for speculations that we may be living in a computer simulation of a civilisation that mastered the principles of black-hole engineering. On this front there are two complementary approaches.
One approach involves the analysis of physical constraints on black-hole computers. Another interesting approach involves the consideration of limit-computable mathematical objects, such as prime formulas that are not computable by Turing Machines.
References:
Feynman, R. The Feynman Lectures on Physics. 1963.
Wheeler, J. A. Information, Physics, Quantum: The Search for Links. 1989.
Aidan Rocke (https://cstheory.stackexchange.com/users/47594/aidanrocke), Understanding the Physical Church-Turing thesis and its implications, URL (version: 2021-02-22): https://cstheory.stackexchange.com/q/48450
Smolin, L. Did the Universe Evolve? Class. Quantum Grav. 9 (1992) 173-191.
Shainline, J. M. Does cosmological natural selection select for technology? Institute of Physics. 2020.