## Motivation:

One of my strongest intuitions for the Manifold Hypothesis in machine learning comes from dynamical systems theory: if we can collect large amounts of data from a particular physical process, then that process must be stable.

Now, why would stable physical processes tend to have low-dimensional phase-spaces?

## A probabilistic analysis of the Largest Lyapunov Exponent:

For a deterministic dynamical system evolving in an n-dimensional phase space according to:

$$\dot{x} = f(x), \qquad x \in \mathbb{R}^n$$

the spectrum of Lyapunov exponents $$\{\lambda_i\}_{i=1}^n$$ generally depends on the initial condition $$x_0 \in S \subset \mathbb{R}^n$$.
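As a concrete illustration of how a largest Lyapunov exponent is measured, here is a minimal Python sketch for a discrete-time analogue, the logistic map $$x_{t+1} = 4x_t(1-x_t)$$: the exponent is estimated by averaging the log of the local stretching rate $$|f'(x_t)|$$ along a trajectory, and for this map it is known to equal $$\ln 2$$. (The map, initial condition, and iteration counts are choices made for this example, not anything fixed by the argument above.)

```python
import math

def largest_lyapunov_logistic(x0=0.2, n_iter=100_000, burn_in=1_000):
    """Estimate the largest Lyapunov exponent of the logistic map
    x -> 4x(1-x) by averaging log|f'(x)| = log|4(1 - 2x)| along a
    trajectory, after discarding an initial transient."""
    x = x0
    for _ in range(burn_in):
        x = 4.0 * x * (1.0 - x)
    total = 0.0
    for _ in range(n_iter):
        total += math.log(abs(4.0 * (1.0 - 2.0 * x)))
        x = 4.0 * x * (1.0 - x)
    return total / n_iter

print(largest_lyapunov_logistic())  # should be close to ln 2 ≈ 0.693
```

A positive estimate, as here, is the numerical signature of chaos; for a stable system the same average would come out non-positive.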

Now, if we assume that our dynamical system was randomly generated, so that each exponent is equally likely to be positive or non-positive, we have:

$$\forall i, P(\lambda_i \leq 0) = \frac{1}{2}$$

and since the system is chaotic precisely when the Largest Lyapunov Exponent is positive, if we further assume that the exponents are independent:

$$P(\max_{i} \lambda_i \leq 0) = \prod_{i=1}^n P(\lambda_i \leq 0) = \frac{1}{2^n} \approx 0$$

for large $$n$$.
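This estimate can be checked with a quick Monte Carlo sketch. For illustration I draw each exponent i.i.d. from a standard normal distribution, which satisfies both assumptions (symmetry about zero and independence); the distribution itself is an assumption of this example, not part of the argument.

```python
import random

def prob_all_nonpositive(n, trials=200_000, seed=0):
    """Fraction of randomly generated spectra {lambda_i} of size n
    whose largest exponent is <= 0, drawing each exponent i.i.d.
    from a symmetric (standard normal) distribution."""
    rng = random.Random(seed)
    hits = sum(
        all(rng.gauss(0.0, 1.0) <= 0.0 for _ in range(n))
        for _ in range(trials)
    )
    return hits / trials

for n in (1, 2, 5, 10):
    print(n, prob_all_nonpositive(n), 0.5 ** n)
```

The empirical fractions track $$2^{-n}$$ closely: already at $$n = 10$$ fewer than one spectrum in a thousand is entirely non-positive.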

## Discussion:

Assuming that the Universe we live in is essentially deterministic, this analysis of randomly generated dynamical systems suggests that high-dimensional systems are typically chaotic, and hence that non-chaotic dynamical systems, the stable processes we can actually collect large amounts of data from, are typically low-dimensional.