Information-theoretic foundations for Probabilistic Number Theory.

The following article introduces fundamental information-theoretic notions for Probabilistic Number Theory.

Lacking a strong prior for the appropriate choice of base for the entropy of a prime number, we first introduce the notion of Algorithmic Probability. This notion has the advantage of being parameterisation-invariant, unlike entropy.

That said, the correct choice of base emerges as a natural consequence of the Prime Number Theorem, as shown in the information-theoretic derivation of the Prime Number Theorem [1]. For the sake of brevity, we shall assume that the reader is familiar with that derivation.

Given a prime \(p \in \mathbb{P}\) with representation as a binary string \(x_p \in \{0,1\}^\infty\), we have:

\[\begin{equation} P(x_p) \sim \frac{1}{p} \tag{1} \end{equation}\]

as defined in [2].

\[\begin{equation} \forall \widehat{x_p} \sim U([1,N]), P(\widehat{x_p}) \sim \frac{1}{N} \tag{2} \end{equation}\]

as demonstrated in [1] and [3].

In light of the information-theoretic formulation of the Prime Number Theorem,

\[\begin{equation} \forall \widehat{x_p} \sim U([1,N]), H(\widehat{x_p}) \sim \frac{\log_2(2^N)}{\pi(N)} \sim \ln N \tag{3} \end{equation}\]

which shows that the natural base is the correct base for the entropy of a prime number.
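The asymptotic \(N/\pi(N) \sim \ln N\) underlying this claim can be checked numerically. The following is a minimal sketch; the `prime_pi` helper is introduced here for illustration and is not part of the cited derivations:

```python
import math

def prime_pi(n):
    """pi(n): the number of primes <= n, via a simple sieve of Eratosthenes."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0] = sieve[1] = 0
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            # Mark all proper multiples of p as composite.
            sieve[p * p :: p] = bytearray(len(sieve[p * p :: p]))
    return sum(sieve)

# N / pi(N) should track ln N, in agreement with equation (3).
for N in (10 ** 3, 10 ** 4, 10 ** 5, 10 ** 6):
    print(N, N / prime_pi(N), math.log(N))
```

The convergence is slow, as \(\pi(N)\) is better approximated by \(N/(\ln N - 1)\) at these scales, but the ratio of the two columns tends towards one.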

Hence, we may define:

\[\begin{equation} H(\widehat{x_p}) \sim -\ln P(\widehat{x_p}) \sim \ln N \tag{4} \end{equation}\]

Likewise, we may define:

\[\begin{equation} \lim_{N \to \infty} \forall X \sim U([1,N]), H(X \bmod p = 0) \sim -\ln P(x_p) \sim \ln p \tag{5} \end{equation}\]
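Equation (5) identifies the surprisal of the event that \(p\) divides \(X\), for \(X \sim U([1,N])\), with \(\ln p\). A minimal empirical sketch (the `surprisal_divisible` helper is a name introduced here, not taken from the cited posts):

```python
import math

def surprisal_divisible(N, p):
    """-ln of the empirical probability that X ~ U([1, N]) is divisible by p."""
    hits = sum(1 for x in range(1, N + 1) if x % p == 0)  # equals floor(N / p)
    return -math.log(hits / N)

# As N grows, the surprisal of "p divides X" approaches ln p, per equation (5).
for p in (2, 3, 5, 7, 11):
    print(p, surprisal_divisible(10 ** 5, p), math.log(p))
```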

A straightforward definition may be deduced from Mertens’ second theorem:

\[\begin{equation} \forall X \sim U([1,N]), H(X \in \mathbb{P}) \sim -\ln \prod_{p \leq N} \big(1-\frac{1}{p}\big) \sim \sum_{p \leq N} \frac{1}{p} \sim \ln \ln N \tag{6} \end{equation}\]

An alternative definition uses the density of the primes \(\frac{\pi(N)}{N} \sim \frac{1}{\ln N}\):

\[\begin{equation} \forall X \sim U([1,N]), H(X \in \mathbb{P}) \sim -\ln \big(\frac{1}{\ln N}\big) \sim \ln \ln N \tag{7} \end{equation}\]
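Definitions (6) and (7) can be compared numerically. In the sketch below (the `primes_up_to` sieve is a helper introduced here), each computed quantity differs from \(\ln \ln N\) only by a bounded constant (the Mertens and Euler–Mascheroni constants), which is consistent with the stated \(\sim\) relations:

```python
import math

def primes_up_to(n):
    """Sieve of Eratosthenes: the list of primes <= n."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0] = sieve[1] = 0
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(sieve[p * p :: p]))
    return [i for i, is_p in enumerate(sieve) if is_p]

N = 10 ** 6
ps = primes_up_to(N)

# Equation (6): -ln prod_{p <= N} (1 - 1/p) versus sum_{p <= N} 1/p.
mertens_product = -sum(math.log(1 - 1 / p) for p in ps)
mertens_sum = sum(1 / p for p in ps)

# Equation (7): -ln of the prime density pi(N)/N.
density_surprisal = -math.log(len(ps) / N)

print(mertens_product, mertens_sum, density_surprisal, math.log(math.log(N)))
```

By Mertens’ theorems, `mertens_product` exceeds `mertens_sum` by roughly \(\gamma - M \approx 0.3157\), and both sit within a constant of \(\ln \ln N\).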

These information-theoretic elements of Probabilistic Number Theory have already found concrete applications in proofs of Chebyshev’s theorem, the Erdős-Kac theorem, and the Prime Number Theorem.

[1] Rocke (2022, Jan. 12). Kepler Lounge: An information-theoretic derivation of the Prime Number Theorem. Retrieved from keplerlounge.com

[2] Rocke (2022, Jan. 7). Kepler Lounge: The Algorithmic Probability of a Prime Number. Retrieved from keplerlounge.com

[3] Rocke (2022, Jan. 9). Kepler Lounge: Chebyshev’s theorem via Occam’s razor. Retrieved from keplerlounge.com

For attribution, please cite this work as

Rocke (2022, Jan. 15). Kepler Lounge: Three master keys for Probabilistic Number Theory. Retrieved from keplerlounge.com

BibTeX citation

@misc{rocke2022three,
  author = {Rocke, Aidan},
  title = {Kepler Lounge: Three master keys for Probabilistic Number Theory},
  url = {keplerlounge.com},
  year = {2022}
}