Using the theory of Algorithmic Probability, we demonstrate that Archimedes’ Constant is absolutely normal. Upon closer examination, this analysis implies that Archimedes’ Constant is a physical constant.
Prime encodings \(X_N = \{x_n\}_{n=1}^N \in \{0,1\}^N\) where \(x_n = 1\) if \(n\) is prime and \(x_n=0\) otherwise are algorithmically random sequences.
Based on an information-theoretic demonstration of the Erdős-Kac theorem [1] and the Riemann Hypothesis [4], the Algorithmic Probability with which a prime number of magnitude \(p \in \mathbb{N}\) is observed is on the order of:
\[\begin{equation} m(p) = \lim_{N \to \infty} P(X \bmod p = 0) = \frac{1}{p}, \qquad X \sim U([1,N]) \tag{1} \end{equation}\]
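As an illustrative sanity check (not part of the argument), the density \(\frac{1}{p}\) in (1) can be estimated by Monte Carlo sampling; the helper function below is a hypothetical sketch, not from the original derivation.

```python
import random

def divisibility_frequency(p, n, trials=200_000):
    """Estimate P(X mod p == 0) for X drawn uniformly from [1, n]."""
    hits = sum(1 for _ in range(trials) if random.randint(1, n) % p == 0)
    return hits / trials

# For large n the frequency should approach 1/p, e.g. 1/7 ≈ 0.1429.
freq = divisibility_frequency(p=7, n=10**6)
```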
As a result, using Levin’s Coding theorem, the Kolmogorov Complexity of \(p \in \mathbb{N}\) is on the order of:
\[\begin{equation} K_U(p) = -\log_2 m(p) = \log_2 p \tag{2} \end{equation}\]
which implies that a prime \(p \in \mathbb{N}\) might as well be generated by \(\log_2 p\) coin flips.
In the following analysis, we shall demonstrate that Archimedes’ Constant is finite-state incompressible using the Three Master Keys for Probabilistic Number Theory [2].
Given the Euler product,
\[\begin{equation} \frac{\pi}{4} = \big(\prod_{p \equiv 1 (\bmod 4)} \frac{p}{p-1}\big) \cdot \big(\prod_{p \equiv 3 (\bmod 4)} \frac{p}{p+1}\big) = \frac{3}{4} \cdot \frac{5}{4} \cdot \frac{7}{8} \cdot \frac{11}{12} \cdots \tag{3} \end{equation}\]
we may reformulate this product as follows:
\[\begin{equation} \frac{\pi}{4} = \prod_{p \in \mathbb{P} \setminus \{2\}} \frac{p}{f(p)} \tag{4} \end{equation}\]
where \(f(X) = 4 \cdot (\text{argmin}_{\lambda \in \mathbb{N}} \lvert X - 4 \cdot \lambda \rvert)\).
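As a numerical illustration (assuming the nearest-multiple-of-4 reading of \(f\)), the partial product in (4) can be compared against \(\pi/4\); the sieve and bound below are a sketch, not part of the original derivation, and convergence is slow.

```python
import math

def nearest_multiple_of_4(x):
    """f(x) = 4 * argmin_{λ∈ℕ} |x - 4λ|: the multiple of 4 closest to x."""
    return 4 * round(x / 4)

def sieve_primes(n):
    """Sieve of Eratosthenes: all primes ≤ n."""
    is_prime = [True] * (n + 1)
    is_prime[0] = is_prime[1] = False
    for i in range(2, int(n**0.5) + 1):
        if is_prime[i]:
            for j in range(i * i, n + 1, i):
                is_prime[j] = False
    return [i for i in range(2, n + 1) if is_prime[i]]

# Partial Euler product over odd primes up to 10^6.
product = 1.0
for p in sieve_primes(10**6):
    if p != 2:
        product *= p / nearest_multiple_of_4(p)

# The partial product slowly approaches π/4 ≈ 0.78539816...
```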
Now, considering that \(K_U(f)= \mathcal{O}(1)\) for any computable function \(f\), we have:
\[\begin{equation} K_U(\frac{\pi}{4}) = \lim_{N \to \infty} K_U \big(\prod_{n=2}^N p_n \big) + \mathcal{O}(1) \tag{5} \end{equation}\]
Furthermore, considering the information-theoretic derivation of the Prime Number Theorem [3], the statistical distribution of primes \(p_n\) may be modeled by independent random variables \(\widehat{p_n} \sim U([1,N])\). Using the fact that if \(f\) is invertible and \(X,Y\) are independent random variables, then the entropy satisfies:
\[\begin{equation} H(f(X,Y)) = H(X) + H(Y) \tag{6} \end{equation}\]
so we have:
\[\begin{equation} \mathbb{E}[K_U \big(\prod_{n=2}^N \widehat{p_n} \big)] \sim \sum_{n=2}^N \mathbb{E}[K_U(\widehat{p_n})] \sim \sum_{n=2}^N H(\widehat{p_n}) \tag{7} \end{equation}\]
where invertibility is guaranteed by the Unique Factorization Theorem.
Using the Lemma, it follows that we may derive the asymptotic relation:
\[\begin{equation} \sum_{n=2}^N H(\widehat{p_n}) \sim \pi(N) \cdot \big(\sum_{p \leq N} m(p) \cdot \ln p\big) \sim \pi(N) \cdot \big(\sum_{p \leq N} \frac{1}{p} \cdot \ln p\big) \tag{8} \end{equation}\]
Using the Shannon Source coding theorem, the expression (8) may be identified with:
\[\begin{equation} \mathbb{E}[K(X_N)] \sim \pi(N) \cdot \big(\sum_{k =1 }^N \frac{1}{k} \big) \sim \pi(N) \cdot \ln N \sim N \tag{9} \end{equation}\]
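The chain of asymptotics in (8) and (9) can be probed numerically, though convergence is slow. The sketch below assumes a plain sieve and Mertens' first theorem, under which \(\sum_{p \leq N} \frac{\ln p}{p} = \ln N + \mathcal{O}(1)\), which is what makes \(\pi(N) \cdot \ln N \sim N\) plausible at finite \(N\).

```python
import math

def sieve_primes(n):
    """Sieve of Eratosthenes: all primes ≤ n."""
    is_prime = [True] * (n + 1)
    is_prime[0] = is_prime[1] = False
    for i in range(2, int(n**0.5) + 1):
        if is_prime[i]:
            for j in range(i * i, n + 1, i):
                is_prime[j] = False
    return [i for i in range(2, n + 1) if is_prime[i]]

N = 10**6
primes = sieve_primes(N)
pi_N = len(primes)                                  # π(N)
mertens_sum = sum(math.log(p) / p for p in primes)  # ∑_{p≤N} ln p / p

# Mertens' first theorem: mertens_sum = ln N + O(1),
# so π(N) · ln N / N tends to 1, as in (9), though slowly.
ratio = pi_N * math.log(N) / N
```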
where \(X_N\) is the prime encoding of length \(N\).
Now, using the Asymptotic Equipartition Property we may note that the average information gained from observing a prime number of unknown magnitude in the interval \([1,N]\) is dominated by the typical probability \(\frac{1}{N}\):
\[\begin{equation} -\frac{1}{N} \ln \prod_{k=1}^N P(x_k = 1) = -\frac{1}{N} \ln \prod_{k=1}^N \frac{1}{k} = \frac{\ln N!}{N} \sim \frac{N \cdot \ln N - N}{N} \sim \ln N \tag{10} \end{equation}\]
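The Stirling step in (10) is easy to verify directly with `math.lgamma` (a sketch, using \(N = 10^6\) as an illustrative choice):

```python
import math

N = 10**6
avg_info = math.lgamma(N + 1) / N   # ln(N!) / N, computed exactly
stirling = math.log(N) - 1          # (N ln N - N) / N

# The neglected Stirling correction 0.5 * ln(2πN) / N is negligible here.
gap = abs(avg_info - stirling)
```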
Hence, a representation of Archimedes’ Constant of length \(N\) relative to any description language \(U\) corresponds to \(\pi(N)\) incompressible strings \(\widehat{x_p} \in \{0,1\}^*\) of length \(\sim \ln N\) where each string occurs with Algorithmic Probability:
\[\begin{equation} m(\widehat{x_p}) \sim e^{-\ln N} = \frac{1}{N} \tag{11} \end{equation}\]
As the entropy of Archimedes’ Constant is dominated by the product of uniformly distributed random variables, its entropy is invariant to permutations of these variables. It follows that the Algorithmic Probability of observing the entire string of length \(N\) is given by:
\[\begin{equation} P(\widehat{x}_{p_1},...,\widehat{x}_{p_{\pi(N)}}) = \prod_{p \leq N} m(\widehat{x_p}) \sim \big(\frac{1}{N}\big)^{\pi(N)} = e^{-\pi(N) \cdot \ln N} = e^{-N + o(N)} \tag{12} \end{equation}\]
which allows us to determine the expected information gained:
\[\begin{equation} -\ln P(\widehat{x}_{p_1},...,\widehat{x}_{p_{\pi(N)}}) \sim -\ln \big(\frac{1}{N}\big)^{\pi(N)} \sim \pi(N) \cdot \ln N \sim N \tag{13} \end{equation}\]
Hence, we may conclude that Archimedes’ Constant has a Maximum Entropy distribution such that randomly sampled substrings of equal length have equal entropy. Therefore, Archimedes’ Constant is absolutely normal.
[1] Rocke (2022, Jan. 11). Kepler Lounge: An information-theoretic proof of the Erdős-Kac theorem. Retrieved from keplerlounge.com
[2] Rocke (2022, Jan. 15). Kepler Lounge: Three master keys for Probabilistic Number Theory. Retrieved from keplerlounge.com
[3] Rocke (2022, Jan. 12). Kepler Lounge: An information-theoretic derivation of the Prime Number Theorem. Retrieved from keplerlounge.com
[4] Rocke (2022, March 8). Kepler Lounge: The Von Neumann Entropy and the Riemann Hypothesis. Retrieved from keplerlounge.com
For attribution, please cite this work as
Rocke (2022, April 11). Kepler Lounge: Archimedes' Constant is absolutely normal. Retrieved from keplerlounge.com
BibTeX citation
@misc{rocke2022archimedes, author = {Rocke, Aidan}, title = {Kepler Lounge: Archimedes' Constant is absolutely normal}, url = {keplerlounge.com}, year = {2022} }