microcanonical distributions

I’m currently working through the material for my statistical physics course. For this I decided to start with an introductory text, ‘Statistical Physics’ by Daijiro Yoshioka. It’s very good for developing an intuition about the subject, but many important results are stated without proof, so I have to fill in the gaps myself. Below is my attempt to prove one of these results.

The scenario where every possible microscopic state is realised with equal probability is called the microcanonical distribution. Now, the author states without proof that, in this scenario, almost all microscopic states will be realised as the system evolves in time. We are given the following facts:

1. We consider a solid, liquid, or gas enclosed by an adiabatic wall.
2. We assume a fixed volume ($V$), a fixed number of molecules ($N$), and a fixed total energy ($E$) with uncertainty $\delta E$.
3. The total number of microscopic states allowed under the macroscopic constraints is given by

\begin{aligned} W=W(E,\delta E,V,N) \end{aligned}
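To make $W$ concrete, here is a toy illustration of my own (not from the book): for $N$ independent two-level units with level spacing $\varepsilon$, fixing the total energy $E = n\varepsilon$ fixes the number $n$ of excited units, and $W$ is just a binomial coefficient. The function name `W_two_level` is mine:

```python
from math import comb

def W_two_level(N, n_excited):
    """Number of microstates of N independent two-level units with exactly
    n_excited of them in the upper level (total energy E = n_excited * eps)."""
    return comb(N, n_excited)

print(W_two_level(4, 2))  # 6 ways to pick which 2 of 4 units are excited
```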

Assuming that each micro-state is realised with equal probability, we have for any micro-state $m_i$ at any observation time $t_n$:

\begin{aligned} P(m_i | t_n) = \frac{1}{W} \implies \sum_{n=1}^{\infty}P(m_i | t_n)=\infty \end{aligned}

where we assume that the total number of micro-states is finite, so that $1/W > 0$ and the sum diverges. The second Borel–Cantelli lemma then shows that for any $m_i$ the event $E_n = \{m_i \text{ is realised at } t_n\}$ occurs infinitely often, provided the events $E_n$ are independent.
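The idealised case the argument describes can be checked numerically. A minimal sketch, assuming i.i.d. uniform sampling over $W$ microstates (the function name `visit_counts` is mine): a fixed target state is hit roughly $n/W$ times in $n$ steps, and the count keeps growing with $n$, consistent with the event occurring infinitely often.

```python
import random

def visit_counts(W, n_steps, target=0, seed=0):
    """Sample n_steps microstates uniformly at random from {0, ..., W-1}
    and count how often the target state is realised."""
    rng = random.Random(seed)
    return sum(1 for _ in range(n_steps) if rng.randrange(W) == target)

# With W = 100 states the target is hit about n/W times, and the
# count grows without bound as n does.
for n in (1_000, 10_000, 100_000):
    print(n, visit_counts(100, n))
```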

There’s something that bothers me about my proof: it implicitly assumes that the events at different times are independent, i.e. that the probability of transition between any two states, no matter how different they are, is always the same. For this reason, I now think that this proof might require more than the Borel–Cantelli theorem, and that the ‘equal probability’ assumption might not hold for state transitions. It seems reasonable that in the immediate future some transitions would be more likely than others. Maybe I can apply Borel–Cantelli to ‘large-enough’ time intervals instead.
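To get a feel for why long time intervals might help, here is a toy sketch of my own with a made-up dynamics (the function name `ring_walk_occupation` is mine): a lazy nearest-neighbour walk on a ring of $W$ microstates. The single-step transitions are certainly not ‘equal probability’ — only adjacent states are reachable — yet over a long run every state is occupied with frequency close to $1/W$:

```python
import random

def ring_walk_occupation(W, n_steps, seed=0):
    """Lazy nearest-neighbour walk on a ring of W microstates: at each step
    the system stays put or moves to an adjacent state.  Returns the
    empirical occupation frequency of each state."""
    rng = random.Random(seed)
    counts = [0] * W
    state = 0
    for _ in range(n_steps):
        state = (state + rng.choice((-1, 0, 1))) % W
        counts[state] += 1
    return [c / n_steps for c in counts]

freqs = ring_walk_occupation(20, 200_000)
# Over long runs every frequency approaches 1/W = 0.05, even though
# single-step transitions only reach neighbouring states.
print(min(freqs), max(freqs))
```

This is only suggestive, of course: it shows that over large windows the occupation looks uniform, which is what would let a Borel–Cantelli-style argument apply to well-separated intervals rather than to individual time steps.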

Assuming that the idea of ‘large-enough’ time intervals works, I can define a function $f$ which maps time intervals to microscopic states, and construct a sequence of weakly correlated events to which a Lebesgue-integrable variant of Borel–Cantelli applies:

\begin{aligned} E_t = \{m_i \in f(I_t) \} \end{aligned}

where some of the intervals $I_t$ might be degenerate.

I wasn’t taught this variant in my measure theory class, but after some reflection I managed to fill in the details. This integrable variant will be the subject of my next blog post.

Two interesting questions to consider next, assuming I resolve this one soon, are whether the expected time to cover 50% or more of the state space is finite and, if so, whether the rate at which the state space is ‘explored’ is constant. My intuition tells me that the cumulative coverage would follow a sigmoid-like curve (i.e. the exploration rate eventually decreases).
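Under the simplest assumption — i.i.d. uniform sampling of microstates, which ignores the transition-probability caveat above — the coverage question becomes a coupon-collector problem, and the expected time to visit half the states is finite (roughly $W \ln 2$). A sketch, where `expected_half_cover_time` and `coverage_curve` are names of my own:

```python
import math
import random

def expected_half_cover_time(W):
    """Under uniform i.i.d. sampling, the expected number of steps to have
    visited k distinct states is W * (1/W + 1/(W-1) + ... + 1/(W-k+1)).
    For k = W/2 this is approximately W * ln 2, so in particular finite."""
    return W * sum(1.0 / (W - j) for j in range(W // 2))

def coverage_curve(W, n_steps, seed=0):
    """Fraction of the state space visited after each sampling step."""
    rng = random.Random(seed)
    seen = set()
    curve = []
    for _ in range(n_steps):
        seen.add(rng.randrange(W))
        curve.append(len(seen) / W)
    return curve

W = 1000
print(expected_half_cover_time(W), W * math.log(2))  # close to each other
curve = coverage_curve(W, 5 * W)
# The curve rises steeply at first and flattens out: the exploration
# rate decreases as fewer unvisited states remain.
print(curve[W - 1], curve[-1])
```

In this idealised setting the coverage curve is concave rather than sigmoid (the rate is decreasing from the start); with non-uniform transition dynamics the early part could well look different.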