Motivation:

One of my strongest intuitions for the Manifold Hypothesis in machine learning comes from dynamical systems theory: if we can collect large amounts of data from a particular physical process, then that process must be stable.

Now, why would stable physical processes tend to have low-dimensional phase-spaces?

A probabilistic analysis of the Largest Lyapunov Exponent:

For a deterministic dynamical system on an \(n\)-dimensional phase-space with evolution equation:

\begin{equation} \dot{x} = f(x), \quad f: \mathbb{R}^n \rightarrow \mathbb{R}^n \end{equation}

the spectrum of Lyapunov exponents \(\{\lambda_i\}_{i=1}^n\) generally depends on the initial condition \(x_0 \in S \subset \mathbb{R}^n\).
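To make these objects concrete, here is a minimal numerical sketch of Benettin's two-trajectory method for estimating the largest Lyapunov exponent. Everything in it (NumPy, the Lorenz system as the chaotic example, the step sizes) is my illustrative choice rather than part of the argument:

```python
import numpy as np

def lorenz(x, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Classic Lorenz vector field; chaotic at these standard parameters."""
    return np.array([sigma * (x[1] - x[0]),
                     x[0] * (rho - x[2]) - x[1],
                     x[0] * x[1] - beta * x[2]])

def rk4_step(f, x, dt):
    """One fourth-order Runge-Kutta integration step."""
    k1 = f(x)
    k2 = f(x + 0.5 * dt * k1)
    k3 = f(x + 0.5 * dt * k2)
    k4 = f(x + dt * k3)
    return x + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

def largest_lyapunov(f, x0, dt=0.01, n_steps=100_000, d0=1e-8, burn_in=1_000):
    """Benettin's method: evolve a reference and a perturbed trajectory,
    renormalise their separation every step, and average the log growth."""
    x = np.asarray(x0, dtype=float)
    for _ in range(burn_in):            # let transients decay first
        x = rk4_step(f, x, dt)
    y = x.copy()
    y[0] += d0                          # tiny perturbation along one axis
    log_sum = 0.0
    for _ in range(n_steps):
        x = rk4_step(f, x, dt)
        y = rk4_step(f, y, dt)
        d = np.linalg.norm(y - x)
        log_sum += np.log(d / d0)
        y = x + (d0 / d) * (y - x)      # rescale separation back to d0
    return log_sum / (n_steps * dt)

print(largest_lyapunov(lorenz, [1.0, 1.0, 1.0]))        # ~ +0.9: chaotic
print(largest_lyapunov(lambda x: -x, [1.0, 1.0, 1.0]))  # ~ -1.0: stable
```

A positive estimate (roughly \(+0.9\) for Lorenz) signals exponential divergence of nearby trajectories; the contracting linear system gives roughly \(-1\).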

Now, if we assume that our dynamical system was randomly generated, so that the sign of each Lyapunov exponent is a priori a fair coin flip, we have:

\begin{equation} \forall i, P(\lambda_i \leq 0) = \frac{1}{2} \end{equation}

and since a positive Largest Lyapunov Exponent is what makes the system chaotic, if we further treat the exponents as independent then:

\begin{equation} P(\max_{i} \lambda_i \leq 0) = \prod_{i=1}^n P(\lambda_i \leq 0) = \frac{1}{2^n} \approx 0 \end{equation}

for large \(n\).
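Equation (3) is easy to probe with a toy Monte Carlo. For linear systems \(\dot{x} = Ax\) the Lyapunov exponents are exactly the real parts of the eigenvalues of \(A\). The Gaussian ensemble below is an illustrative choice on my part, and since its eigenvalues are not literally independent the estimated probability of stability only tracks \(2^{-n}\) rather than matching it; the exponential collapse with \(n\) is the point:

```python
import numpy as np

rng = np.random.default_rng(0)

def p_stable(n, trials=20_000):
    """Fraction of random n x n Gaussian systems xdot = Ax whose eigenvalues
    all lie in the closed left half-plane, i.e. max Lyapunov exponent <= 0."""
    A = rng.normal(size=(trials, n, n))
    lams = np.linalg.eigvals(A)             # shape (trials, n)
    return np.mean(lams.real.max(axis=1) <= 0.0)

for n in (1, 2, 3, 5, 8):
    print(f"n={n}: P(stable) ~ {p_stable(n):.4f}   2^-n = {2**-n:.4f}")
```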

Discussion:

I will note that in order to justify the independence assumption behind (3) we rely upon the observation that almost all pairs of random vectors in a high-dimensional space are nearly orthogonal. On the other hand, as most scientists know, every scientific model has a compositional structure and may ultimately be expressed in terms of a small number of fundamental constants, as codified by the International System of Units. So it is highly improbable that two distinct signals from the same physical system would be uncorrelated.
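The near-orthogonality observation itself is easy to verify empirically. A quick sketch (again assuming NumPy): the cosine similarity between two independent random directions in \(\mathbb{R}^n\) concentrates around zero at scale \(\sim 1/\sqrt{n}\), in line with the large-\(n\) approximation \(\mathbb{E}|\cos\theta| \approx \sqrt{2/\pi n}\):

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_abs_cosine(n, pairs=10_000):
    """Average |cosine similarity| of independent random directions in R^n."""
    u = rng.normal(size=(pairs, n))
    v = rng.normal(size=(pairs, n))
    cos = np.sum(u * v, axis=1) / (np.linalg.norm(u, axis=1)
                                   * np.linalg.norm(v, axis=1))
    return np.abs(cos).mean()

for n in (3, 30, 300, 3000):
    print(f"n={n}: E|cos| ~ {mean_abs_cosine(n):.4f}"
          f"   sqrt(2/(pi*n)) = {np.sqrt(2 / (np.pi * n)):.4f}")
```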

What is interesting about the analysis I have shared is that it shows that even if we have relatively weak priors about the natural world, the Manifold Hypothesis would hold given what is known about deterministic dynamical systems.