Random Walks and the Riemann Hypothesis

Starting from the observation that the statistical behaviour of the Mertens function is reminiscent of a one-dimensional random walk over the non-zero values of the Möbius function, we demonstrate that the Riemann Hypothesis implies an entropy bound on the Mertens function.

Aidan Rocke https://github.com/AidanRocke
11-23-2023

The statistical behaviour of a random walk:

Let’s suppose we define a random walk in terms of the sequence of independent random variables \(X_i\) where:

\[\begin{equation} P(X_i = 1) = P(X_i = -1) = \frac{1}{2} \tag{1} \end{equation}\]

so we may define the sum:

\[\begin{equation} S_n = \sum_{i=1}^n X_i \tag{2} \end{equation}\]

Due to symmetry, we have:

\[\begin{equation} \mathbb{E}[S_n] = \sum_{i=1}^n \mathbb{E}[X_i] = 0 \tag{3} \end{equation}\]

and if we measure the standard deviation of \(S_n\), we find:

\[\begin{equation} \sigma_{S_n} = \sqrt{\text{Var}[S_n]} = \sqrt{\mathbb{E}[S_n^2]-(\mathbb{E}[S_n])^2} = \sqrt{n} \tag{4} \end{equation}\]

Upon closer inspection, we may deduce that there exist \(m, M \in \mathbb{N}\) such that:

\[\begin{equation} \lvert S_n \rvert \leq M \cdot \sigma_{S_n} \quad \forall n \geq m \tag{5} \end{equation}\]

almost surely; a statistical law that shall illuminate our analysis of the Riemann Hypothesis.
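These statistics are easy to probe empirically. The following is a minimal simulation sketch (the walk length and number of trials are arbitrary choices): it estimates the mean and standard deviation of \(S_n\) over independent trials and compares \(\max \lvert S_n \rvert\) against \(\sigma_{S_n} = \sqrt{n}\).

```python
import math
import random

def random_walk(n, rng):
    """Sum of n i.i.d. steps, each +1 or -1 with probability 1/2."""
    return sum(rng.choice((-1, 1)) for _ in range(n))

rng = random.Random(42)
n, trials = 2_500, 1_000
samples = [random_walk(n, rng) for _ in range(trials)]

mean = sum(samples) / trials
std = math.sqrt(sum(s * s for s in samples) / trials - mean ** 2)

print(f"E[S_n]  ~ {mean:.2f}")                              # close to 0
print(f"sigma   ~ {std:.2f}  (sqrt(n) = {math.sqrt(n):.0f})")
print(f"max |S_n| / sqrt(n) = {max(abs(s) for s in samples) / math.sqrt(n):.2f}")
```

The last line illustrates the bound above: across a thousand trials, \(\lvert S_n \rvert\) stays within a few multiples of \(\sqrt{n}\).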

Information-theoretic analysis:

The information gained from observing a measurement:

The random walk \(S_n\) defines a map:

\[\begin{equation} S_n = \sum_{i=1}^n X_i \tag{6} \end{equation}\]

\[\begin{equation} S_n : \{0,1\}^n \rightarrow [-n,n] \tag{7} \end{equation}\]

so there are \(2^n\) possible events but at most \(2n+1\) distinct measurements.

Hence, if we use a fair coin to randomly generate each move in the random walk, then we require \(n\) bits to describe a typical event but only on the order of \(\log_2 n\) bits to describe a typical measurement.
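For small \(n\), the gap between events and measurements can be checked by brute-force enumeration (a sketch; \(n = 10\) is an arbitrary choice):

```python
import math
from itertools import product

n = 10
# Each event is one of the 2^n coin-flip sequences (0 -> step -1, 1 -> step +1).
events = list(product((0, 1), repeat=n))
measurements = {sum(2 * b - 1 for b in flips) for flips in events}

print(len(events))                    # 2^n = 1024 possible events
print(len(measurements))              # n + 1 = 11 distinct values of S_n (<= 2n + 1)
print(math.log2(len(events)))         # n = 10 bits per typical event
print(math.log2(len(measurements)))   # ~ log2(n) bits per measurement
```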

Information gained from observing the magnitude of a measurement:

Indexing the fair coin tosses of the random walk by \(I = \{1, \dots, n\}\), we may define the set \(R \subseteq I\) of positive tosses \((X_k = +1)\) and its complement \(B = I \setminus R\):

\[\begin{equation} I = R \cup B \land R \cap B = \emptyset \tag{8} \end{equation}\]

\[\begin{equation} S_n = \sum_{k=1}^n X_k = \sum_{k \in I} X_k \tag{9} \end{equation}\]

Based on a frequentist analysis, we may deduce that:

\[\begin{equation} \mathbb{E}[\lvert R \rvert] = \mathbb{E}[\lvert B \rvert] = \frac{n}{2} \tag{10} \end{equation}\]

As the most probable measurement, \(S_n \approx 0\), satisfies:

\[\begin{equation} P(S_n \approx 0) \sim \sqrt{\frac{2}{\pi \cdot n}} \tag{11} \end{equation}\]

we may deduce that the information gained from observing the magnitude of a measurement is on the order of:

\[\begin{equation} \log_2 |S_n| \sim -\log_2 P(S_n \approx 0) \sim \log_2 \sqrt{n} \tag{12} \end{equation}\]

where we applied Levin’s Coding Theorem. Given (10), it suffices to estimate the central binomial coefficient, which dominates all other terms:

\[\begin{equation} P(S_n \approx 0) \sim \frac{1}{2^n} {n \choose \lfloor \frac{n}{2} \rfloor} \sim \frac{1}{2^n} \frac{\sqrt{2 \pi n}}{\pi n}\cdot \frac{(\frac{n}{e})^n}{(\frac{n}{2e})^n} \sim \sqrt{\frac{2}{\pi \cdot n}} \tag{13} \end{equation}\]
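The asymptotic in (13) can be checked against the exact binomial probability (a small numerical sketch):

```python
import math

def p_origin(n):
    """Exact probability that S_n = 0, for even n."""
    return math.comb(n, n // 2) / 2 ** n

for n in (100, 1_000, 10_000):
    exact = p_origin(n)
    approx = math.sqrt(2 / (math.pi * n))
    print(f"n = {n:6d}  exact = {exact:.6f}  approx = {approx:.6f}  "
          f"ratio = {exact / approx:.4f}")
```

The ratio of the exact to the asymptotic value tends to 1 as \(n\) grows.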

The statistics of the Mertens function:

We may define the Möbius function \(\mu\) as follows:

\[\begin{equation} \mu: \mathbb{N} \rightarrow \{-1,0,+1\} \tag{14} \end{equation}\]

where \(\mu(n) = (-1)^k\) if the integer \(n \in \mathbb{N}\) is square-free with \(k\) distinct prime factors, and \(\mu(n) = 0\) if \(n\) is not square-free.

Given our definition of the Möbius function, we may define the Mertens function:

\[\begin{equation} \Phi(n) = \sum_{k=1}^n \mu(k) \tag{15} \end{equation}\]

whose statistical behaviour is curiously reminiscent of the random walk.

As the proportion of integers that are square-free is given by Euler’s product formula for the Riemann Zeta function:

\[\begin{equation} \frac{1}{\zeta(2)} = \prod_{p \in \mathbb{P}} \big(1-\frac{1}{p^2}\big) \tag{16} \end{equation}\]

and since Euler proved that:

\[\begin{equation} \frac{1}{\zeta(2)} = \frac{6}{\pi^2} \approx 61\% \tag{17} \end{equation}\]

we may infer that an integer \(x \sim U([1,N])\) satisfies \(\mu(x) = 0\) with probability approximately \(1-\frac{1}{\zeta(2)} \approx 39\%\) as \(N \to \infty\).
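This density is easy to verify empirically by sieving out multiples of perfect squares (a sketch; \(N = 10^5\) is an arbitrary cutoff):

```python
import math

N = 100_000
squarefree = [True] * (N + 1)

# Mark every multiple of a perfect square d^2 > 1 as not square-free.
d = 2
while d * d <= N:
    for m in range(d * d, N + 1, d * d):
        squarefree[m] = False
    d += 1

density = sum(squarefree[1:]) / N
print(f"square-free density up to {N}: {density:.4f}")
print(f"6/pi^2                       : {6 / math.pi ** 2:.4f}")
```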

Furthermore, assuming that the Mertens function’s statistical behaviour is indistinguishable from that of a random walk:

\[\begin{equation} \mathbb{E}[\Phi(\zeta(2) \cdot n)] = 0 \tag{18} \end{equation}\]

\[\begin{equation} \sigma_{\Phi(\zeta(2) \cdot n)} = \sqrt{\mathbb{E}[\Phi(\zeta(2) \cdot n)^2]} = \sqrt{n} + \mathcal{O}(1) \tag{19} \end{equation}\]

where the validity of formulas (18) and (19) may be ascertained from the information-theoretic derivation of the Erdős-Kac Law [2].

Thus, we may infer that:

\[\begin{equation} \lvert \Phi(n) \rvert = \mathcal{O}(\sqrt{n}) \tag{20} \end{equation}\]

almost surely.

An entropy bound on the Mertens function:

From an information-theoretic perspective, (19) implies that the normal order of \(K_U(\lvert \Phi(n) \rvert)\), the Kolmogorov complexity of \(\lvert \Phi(n) \rvert\) relative to a universal Turing machine \(U\), is given by:

\[\begin{equation} K_U(\lvert \Phi(n) \rvert) \sim \frac{\log_2 n}{2} \tag{21} \end{equation}\]

Hence, we may deduce an entropy bound on the Mertens function:

\[\begin{equation} \forall \epsilon > 0, \lim_{n \to \infty} P\big(\frac{\log_2 \lvert \Phi(n) \rvert}{\log_2 n} > \frac{1}{2} + \epsilon \big) = 0 \tag{22} \end{equation}\]

and if (22) holds, it follows that:

\[\begin{equation} \forall \epsilon > 0, \Phi(n) = \mathcal{O}(n^{\frac{1}{2} + \epsilon}) \tag{23} \end{equation}\]

almost surely, so the Riemann Hypothesis is almost surely true.
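Both the \(\mathcal{O}(\sqrt{n})\) heuristic and the entropy bound can be probed numerically by sieving the Möbius function and accumulating the Mertens function. The sketch below only reaches \(N = 10^5\), far from asymptotic territory, so it is an illustration rather than evidence:

```python
import math

def mobius_sieve(N):
    """Compute the Moebius function mu(n) for 0 <= n <= N."""
    mu = [1] * (N + 1)
    mu[0] = 0
    is_prime = [True] * (N + 1)
    for p in range(2, N + 1):
        if is_prime[p]:
            for m in range(2 * p, N + 1, p):
                is_prime[m] = False
            for m in range(p, N + 1, p):
                mu[m] *= -1          # one sign flip per distinct prime factor
            for m in range(p * p, N + 1, p * p):
                mu[m] = 0            # not square-free
    return mu

N = 100_000
mu = mobius_sieve(N)

mertens, max_ratio, max_entropy = 0, 0.0, 0.0
for n in range(1, N + 1):
    mertens += mu[n]
    if n >= 2 and abs(mertens) >= 2:
        max_ratio = max(max_ratio, abs(mertens) / math.sqrt(n))
        max_entropy = max(max_entropy, math.log2(abs(mertens)) / math.log2(n))

print(f"Phi({N}) = {mertens}")
print(f"max |Phi(n)|/sqrt(n)      : {max_ratio:.3f}")    # stays below 1
print(f"max log2|Phi(n)|/log2 n   : {max_entropy:.3f}")  # stays below 1/2
```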

Random Walks and the Riemann Hypothesis:

A direct connection between the growth-rate of the Mertens function and the Riemann Hypothesis may be established as follows.

Using the Euler product formula for the Riemann Zeta function, we have:

\[\begin{equation} \frac{1}{\zeta(s)} = \prod_{p \in \mathbb{P}} \big(1-p^{-s}\big) \tag{24} \end{equation}\]

and due to the Unique Prime Factorisation theorem,

\[\begin{equation} \frac{1}{\zeta(s)} = \sum_{n=1}^{\infty} \frac{\mu(n)}{n^s} \tag{25} \end{equation}\]

which converges for \(\text{Re}(s) > 1\).

Expressing (25) as a Stieltjes integral:

\[\begin{equation} \frac{1}{\zeta(s)} = \int_{0}^{\infty} x^{-s} d \Phi(x) \tag{26} \end{equation}\]

and integrating by parts, (26) simplifies to:

\[\begin{equation} \frac{1}{\zeta(s)} = x^{-s} \Phi(x) \Big|_{0}^{\infty} + s \int_{0}^{\infty} \Phi(x) x^{-s-1} dx = s \int_{0}^{\infty} \Phi(x) x^{-s-1} dx \tag{27} \end{equation}\]

since the boundary term vanishes for \(\text{Re}(s) > 1\). Hence, the reciprocal of the Riemann Zeta function may be expressed as a Mellin transform:

\[\begin{equation} \frac{1}{s \zeta(s)} = \big\{M \Phi\big\}(-s) = \int_{0}^{\infty} x^{-s-1}\Phi(x) dx \tag{28} \end{equation}\]

and using the inverse Mellin transform, we have:

\[\begin{equation} \Phi(x) = \frac{1}{2\pi i} \int_{\sigma - i \infty}^{\sigma + i \infty} \frac{x^s}{s \zeta(s)} ds \tag{29} \end{equation}\]

which is well-defined for \(\frac{1}{2} < \sigma < 2\), assuming the Riemann Hypothesis is true.

Finally, from (29) we may conclude that:

\[\begin{equation} \forall \epsilon > 0, \Phi(n) = \mathcal{O}(n^{\frac{1}{2}+\epsilon}) \tag{30} \end{equation}\]

QED.
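As a finite sanity check of the integration-by-parts identity above, assuming nothing beyond the definitions of \(\mu\) and \(\Phi\): since \(\Phi\) is a step function, the truncated Mellin integral \(s \int_1^N \Phi(x) x^{-s-1} dx\) plus its boundary term \(\Phi(N) N^{-s}\) should reproduce the partial sums of the Dirichlet series for \(1/\zeta(s)\), and at \(s = 2\) both approach \(6/\pi^2\):

```python
import math

def mobius_sieve(N):
    """Compute the Moebius function mu(n) for 0 <= n <= N."""
    mu = [1] * (N + 1)
    mu[0] = 0
    is_prime = [True] * (N + 1)
    for p in range(2, N + 1):
        if is_prime[p]:
            for m in range(2 * p, N + 1, p):
                is_prime[m] = False
            for m in range(p, N + 1, p):
                mu[m] *= -1
            for m in range(p * p, N + 1, p * p):
                mu[m] = 0
    return mu

N, s = 100_000, 2.0
mu = mobius_sieve(N)

# Mertens function Phi(n) as partial sums of mu.
phi = [0] * (N + 1)
for n in range(1, N + 1):
    phi[n] = phi[n - 1] + mu[n]

# Truncated Dirichlet series for 1/zeta(s).
dirichlet = sum(mu[n] / n ** s for n in range(1, N + 1))

# s * integral_1^N Phi(x) x^(-s-1) dx, evaluated exactly on each step
# of Phi, plus the boundary term Phi(N) N^(-s) from integration by parts.
mellin = sum(phi[n] * (n ** -s - (n + 1) ** -s) for n in range(1, N))
mellin += phi[N] * N ** -s

print(f"truncated Dirichlet series : {dirichlet:.6f}")
print(f"truncated Mellin transform : {mellin:.6f}")
print(f"1/zeta(2) = 6/pi^2         : {6 / math.pi ** 2:.6f}")
```

The first two values agree to floating-point precision, as Abel summation predicts, and both sit close to \(6/\pi^2\).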

References:

  1. Leonhard Euler. Remarques sur un beau rapport entre les séries des puissances tant directes que réciproques. Mémoires de l’académie des sciences de Berlin. 1749.

  2. Sasha Kolpakov & Aidan Rocke. On the Impossibility of learning a formula for primes using AI. arXiv. October 2023.

  3. Rocke (2022, March 8). Kepler Lounge: The Von Neumann Entropy and the Riemann Hypothesis. Retrieved from keplerlounge.com

Citation

For attribution, please cite this work as

Rocke (2023, Nov. 23). Kepler Lounge: Random Walks and the Riemann Hypothesis. Retrieved from keplerlounge.com

BibTeX citation

@misc{rocke2023random,
  author = {Rocke, Aidan},
  title = {Kepler Lounge: Random Walks and the Riemann Hypothesis},
  url = {keplerlounge.com},
  year = {2023}
}