# The differential entropy of the Erdős-Kac distribution

Building upon the work of Billingsley, an entropy formula for the distribution of prime factors is carefully developed. This allows us to compare the entropy of the normal order of prime factors, which is constant, with that of extreme values, where the entropy grows without bound.

Aidan Rocke https://github.com/AidanRocke
01-19-2022

Building upon the work of Billingsley, who carefully considered the connection between random prime factorisations and Brownian motion, the Erdős-Kac distribution may be formulated as follows:

$\begin{equation} \forall X \sim U([1,\lfloor e^N \rfloor]), P\Big(X: \alpha \leq \frac{w(X)-\ln N}{\sqrt{\ln N}} \leq \beta\Big) \rightarrow \frac{1}{\sqrt{2\pi}} \int_{\alpha}^{\beta} e^{-u^2/2} du \tag{1} \end{equation}$

so the standardised variable $$\frac{w(X)-\ln N}{\sqrt{\ln N}}$$ converges in distribution to a standard normal as $$N \to \infty$$; equivalently, $$w(X)$$ is asymptotically normal with mean and variance $$\ln N$$.
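As a quick sanity check (a sketch, not part of the derivation above), one can sample integers uniformly from $$[1,\lfloor e^N \rfloor]$$ and inspect the standardised count of distinct prime factors. Note that convergence in (1) is notoriously slow, so the agreement is only rough for modest $$N$$:

```python
import math
import random

def omega(n):
    """Number of distinct prime factors of n, by trial division."""
    count, d = 0, 2
    while d * d <= n:
        if n % d == 0:
            count += 1
            while n % d == 0:
                n //= d
        d += 1
    if n > 1:
        count += 1
    return count

random.seed(1)
N = 16  # interval [1, e^N]; kept small so trial division stays fast
limit = math.floor(math.exp(N))

# Standardise omega(X) as in (1): subtract ln N, divide by sqrt(ln N)
z = [(omega(random.randrange(1, limit + 1)) - math.log(N)) / math.sqrt(math.log(N))
     for _ in range(3000)]

mean = sum(z) / len(z)
var = sum((s - mean) ** 2 for s in z) / len(z)
print(mean, var)  # roughly 0 and 1, with a visible finite-N bias
```

The sample mean sits slightly above zero at this scale, reflecting the slow $$\ln N$$ normalisation rather than an error in (1).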

Given the distribution (1) we may define its differential entropy as a measure of hidden information. But, relative to what base?

From the vantage point of the Asymptotic Equipartition Theorem, the Prime Number Theorem asserts that the average information gained from observing a prime number in the interval $$[1,N]$$ is given by:

$\begin{equation} \frac{\log_2(2^N)}{\pi(N)} \sim \ln N \tag{2} \end{equation}$
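The asymptotic in (2) is easy to see numerically (an illustrative sketch): a sieve of Eratosthenes yields $$\pi(N)$$, and the ratio $$\frac{N/\pi(N)}{\ln N}$$ drifts slowly toward 1:

```python
import math

def prime_count(n):
    """pi(n) via a sieve of Eratosthenes."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(range(p * p, n + 1, p)))
    return sum(sieve)

N = 10 ** 6
ratio = (N / prime_count(N)) / math.log(N)
print(ratio)  # close to, but below, 1 at this scale
```

The convergence is logarithmically slow, which is consistent with the error terms in the Prime Number Theorem.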

and by applying Occam’s razor we may deduce that the natural base is necessary and sufficient for measuring the information hidden in the distribution of primes. We may experiment with other bases, but, as in other branches of mathematics, the axioms of information theory must not be multiplied beyond necessity.

Having established that the natural base is necessary and sufficient (2), we may calculate the entropy of (1) using the differential entropy formula:

$\begin{equation} h(X) = \mathbb{E}[-\ln f(X)] = -\int_{X} f(x) \cdot \ln f(x) dx \tag{3} \end{equation}$

Applied to the standard normal distribution, we find that its differential entropy corresponds to:

$\begin{equation} h(f) = \frac{\mathbb{E}[u^2]}{2} + \frac{1}{2} \ln 2 \pi = \frac{1}{2} + \frac{1}{2} \ln 2 \pi = \frac{1}{2} \ln 2 \pi e \tag{4} \end{equation}$
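The closed form in (4) can be confirmed by direct numerical integration of (3) against the standard normal density (a quick sketch):

```python
import math

# Standard normal density
f = lambda u: math.exp(-u * u / 2) / math.sqrt(2 * math.pi)

# Riemann sum of -f ln f over [-10, 10]; the tails beyond are negligible
h = 1e-3
entropy = sum(-f(i * h) * math.log(f(i * h)) * h
              for i in range(-10_000, 10_001))
closed_form = 0.5 * math.log(2 * math.pi * math.e)
print(entropy, closed_form)  # both approximately 1.4189
```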

which is a beautiful constant with no dependence on $$N$$, the scale of the interval $$[1,\lfloor e^N \rfloor]$$.

Relatively speaking, the entropy of the normal order of prime factors of a typical integer is insignificant compared to extreme values. In fact, given the asymptotic formula for the primorial:

$\begin{equation} N\# = \prod_{n = 1}^{\pi(N)} p_n = e^{(1+ o(1))N} \tag{5} \end{equation}$
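The asymptotic in (5) can be checked through Chebyshev's function $$\vartheta(N) = \sum_{p \leq N} \ln p = \ln N\#$$, whose ratio to $$N$$ tends to 1 (a sketch):

```python
import math

def theta(n):
    """Chebyshev theta: sum of ln p over primes p <= n, i.e. ln(n#)."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(range(p * p, n + 1, p)))
    return sum(math.log(p) for p in range(2, n + 1) if sieve[p])

N = 10 ** 6
ratio = theta(N) / N
print(ratio)  # close to 1, as ln(N#) ~ N
```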

we may consider an extreme deviation from the normal order $$\sim \ln N$$:

$\begin{equation} \forall X \sim U([1,\lfloor e^N \rfloor]), P\big(w(X) \sim \pi(N)\big) \sim \prod_{p \leq N} \frac{1}{p} = e^{-N(1+ o(1))} \tag{6} \end{equation}$

Hence, the entropy of the extreme value distribution (6) is given by:

$\begin{equation} -\ln P\big(w(X) \sim \pi(N)\big) \sim N \tag{7} \end{equation}$

which is equivalent to stating that the information gained from observing an integer with $$\sim \pi(N)$$ distinct prime factors in the interval $$[1,\lfloor e^N \rfloor]$$ is on the order of $$N$$ nats.

1. Hardy, G. H. & Ramanujan, S. (1917). “The normal number of prime factors of a number n”. Quarterly Journal of Mathematics.

2. Erdős, P. & Kac, M. (1940). “The Gaussian Law of Errors in the Theory of Additive Number Theoretic Functions”. American Journal of Mathematics.

3. Rocke, A. (2022, Jan. 12). “An information-theoretic derivation of the Prime Number Theorem”. Kepler Lounge. Retrieved from keplerlounge.com

4. Cover, T. M. & Thomas, J. A. (1991). Elements of Information Theory. New York: Wiley.

5. Billingsley, P. (1973). “Prime numbers and Brownian motion”. American Mathematical Monthly, 80, 1099.