## An upper bound consistent with Cramér's conjecture:

While going over a related question, it occurred to me to define the expected information gained from observing consecutive large primes:

$$\Delta I_N = \mathbb{E}[K_U([1,P_{N+1}])] - \mathbb{E}[K_U([1,P_{N}])] \geq 0 \tag{1}$$

where the expectation is taken over prefix-free Universal Turing Machines $$U \in \{U_n\}_{n \in \mathbb{N}}$$. By the Invariance theorem for Kolmogorov Complexity [3], we may take this expectation with respect to any computable probability distribution over $$\{U_n\}_{n \in \mathbb{N}}$$ without affecting the asymptotics of $$\Delta I_N$$.

Now, given Chaitin’s theorem that almost all integers are incompressible [2], so that $$K_U(n) \sim \ln n$$ for almost every integer $$n$$, summing over $$[1,P_N]$$ and applying Stirling’s formula we may deduce for large $$N$$ that:

$$\mathbb{E}[K_U([1,P_{N}])] \sim P_N \cdot \ln P_N - P_N \tag{2}$$

$$\mathbb{E}[K_U([1,P_{N+1}])] \sim P_{N+1} \cdot \ln P_{N+1} - P_{N+1} \tag{3}$$

Subtracting (2) from (3), and then using the fact that $$\Delta I_N \geq 0$$, gives us useful asymptotic bounds:

$$0 \leq \Delta I_N \leq P_{N+1} \cdot \ln P_{N+1} - P_{N} \cdot \ln P_{N} \tag{4}$$

$$0 \leq P_{N+1} - P_N \leq P_{N+1} \cdot \ln P_{N+1} - P_{N} \cdot \ln P_{N} \tag{5}$$
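As a quick numerical sanity check of the Stirling-type asymptotic behind (2) and (3) (a sketch, not part of the argument), one can compare $$\ln(m!) = \sum_{n \leq m} \ln n$$ against the leading terms $$m \ln m - m$$:

```python
import math

# Heuristic check of (2): if almost every n <= m contributes about
# ln(n) nats, the total is ln(m!) = lgamma(m + 1), which Stirling's
# formula approximates by m*ln(m) - m.
for m in (10**3, 10**5, 10**7):
    exact = math.lgamma(m + 1)       # ln(m!)
    stirling = m * math.log(m) - m   # leading terms in (2)
    print(m, exact / stirling)       # ratio approaches 1
```

The correction term is only $$O(\ln m)$$, which is negligible at the scale of the differences appearing in (4) and (5).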

To estimate the right-hand side of (5), we may observe that the Prime Number Theorem gives:

$$P_N \sim \pi(P_N) \cdot \ln P_N = N \cdot \ln P_N \tag{6}$$

which allows us to deduce:

$$P_N \cdot \ln P_N \sim N \cdot (\ln P_N)^2 \tag{7}$$

and as Bertrand’s postulate implies $$P_{N+1} < 2 P_N$$, so that $$\ln P_N \leq \ln P_{N+1} < \ln P_N + \ln 2$$ and hence $$\ln P_{N+1} \sim \ln P_N$$, we also have:

$$P_{N+1} \cdot \ln P_{N+1} \sim (N+1) \cdot (\ln P_N)^2 \tag{8}$$
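A small empirical look at (6), using nothing beyond a sieve; the convergence in the Prime Number Theorem is slow, so the ratio is still noticeably below 1 at this range:

```python
import math

def primes_up_to(limit):
    """Sieve of Eratosthenes: all primes <= limit."""
    sieve = bytearray([1]) * (limit + 1)
    sieve[0] = sieve[1] = 0
    for p in range(2, int(limit ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(range(p * p, limit + 1, p)))
    return [n for n, flag in enumerate(sieve) if flag]

primes = primes_up_to(2_000_000)
N, P_N = len(primes), primes[-1]
# (6) predicts P_N / (N * ln P_N) -> 1; it approaches 1 slowly from below.
print(P_N / (N * math.log(P_N)))
```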

By combining (5), (7) and (8) we obtain the following upper bound on prime gaps that is consistent with Cramér’s conjecture [5]:

$$0 \leq P_{N+1} - P_N \leq (\ln P_N)^2 \tag{9}$$
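Since (9) is an asymptotic statement, the first few primes (where, e.g., the gap of 4 after 7 exceeds $$(\ln 7)^2$$) are excluded in the sketch below; within the sieved range the ratio stays comfortably below 1. This is only a consistency check, not evidence for the conjecture:

```python
import math

def primes_up_to(limit):
    """Sieve of Eratosthenes: all primes <= limit."""
    sieve = bytearray([1]) * (limit + 1)
    sieve[0] = sieve[1] = 0
    for p in range(2, int(limit ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(range(p * p, limit + 1, p)))
    return [n for n, flag in enumerate(sieve) if flag]

primes = primes_up_to(1_000_000)
# Largest value of (P_{N+1} - P_N) / (ln P_N)^2 for 11 <= P_N < 10^6;
# the first few primes are skipped since (9) is only asymptotic.
worst = max((q - p) / math.log(p) ** 2
            for p, q in zip(primes[4:], primes[5:]))
print(worst)  # well below 1 in this range
```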

## An important Corollary:

If the prime encoding $$X_N = \{x_n\}_{n=1}^N$$ is defined such that $$x_n = 1$$ if $$n$$ is prime and $$x_n = 0$$ otherwise, then by the derivation in [8] the expected information gained from observing the primes in $$[1,N]$$ is given by:

$$\mathbb{E}[K_U(X_N)] \sim \pi(N) \cdot \ln N \sim N \tag{14}$$

so if we define the difference in information gained:

$$\Delta I_N' = \mathbb{E}[K_U(X_{P_{N+1}})] - \mathbb{E}[K_U(X_{P_{N}})] \sim P_{N+1} - P_N \tag{15}$$

an important consequence of (4) and (9) is that:

$$\max (\Delta I_N, \Delta I_N') \leq (\ln P_N)^2 \tag{16}$$
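The corollary leans on the asymptotic (14), which itself is a restatement of the Prime Number Theorem; a rough numerical sketch (again, only a sanity check):

```python
import math

def primes_up_to(limit):
    """Sieve of Eratosthenes: all primes <= limit."""
    sieve = bytearray([1]) * (limit + 1)
    sieve[0] = sieve[1] = 0
    for p in range(2, int(limit ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(range(p * p, limit + 1, p)))
    return [n for n, flag in enumerate(sieve) if flag]

# (14) predicts pi(N) * ln(N) ~ N; the ratio drifts toward 1 slowly
# from above as N grows.
for N in (10**4, 10**6):
    pi_N = len(primes_up_to(N))
    print(N, pi_N * math.log(N) / N)
```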

Thus both measures of the information gained from observing the next prime are bounded by $$(\ln P_N)^2$$, precisely the scale predicted by Cramér’s conjecture; I think this is a key information-theoretic insight.

## References:

1. A. N. Kolmogorov. "Three approaches to the quantitative definition of information." Problems of Information Transmission, 1(1):1–7, 1965.
2. G. J. Chaitin. "A theory of program size formally identical to information theory." Journal of the ACM, 22(3):329–340, 1975.
3. M. Li and P. Vitányi. "Kolmogorov Complexity and its Applications." In Handbook of Theoretical Computer Science, Volume A: Algorithms and Complexity, 1990.
4. P. Grünwald and P. Vitányi. "Shannon Information and Kolmogorov Complexity." 2010.
5. H. Cramér. "On the Order of Magnitude of the Difference Between Consecutive Prime Numbers." Acta Arithmetica, 2:23–46, 1936.
6. A. Granville. "Harald Cramér and the Distribution of Prime Numbers." Scandinavian Actuarial Journal, 1:12–28, 1995.
7. K. Ford, B. Green, S. Konyagin, and T. Tao. "Large gaps between consecutive prime numbers." Annals of Mathematics (2), 183(3):935–974, 2016.
8. Aidan Rocke (https://mathoverflow.net/users/56328/aidan-rocke). "Information-theoretic derivation of the prime number theorem." MathOverflow, URL (version: 2021-04-08): https://mathoverflow.net/q/384109