In a statistical physics course, I often encountered the following approximation due to Stirling:

\begin{equation} \ln(N!) \approx N\ln(N) - N \end{equation}

It was very useful, but my professor never explained how good the approximation actually is. The derivation I found turns out to be very simple, so I can present it in a few lines here.
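Before the derivation, here is a quick numerical sanity check (my own addition, not from the lecture): we compare the exact value of $\ln(N!)$, computed via `math.lgamma`, against $N\ln(N) - N$.

```python
import math

def stirling(n):
    """Stirling's approximation: ln(n!) ≈ n*ln(n) - n."""
    return n * math.log(n) - n

# math.lgamma(n + 1) computes ln(n!) exactly, without overflowing
for n in (10, 100, 10_000, 1_000_000):
    exact = math.lgamma(n + 1)
    approx = stirling(n)
    print(f"N={n:>9}  exact={exact:.2f}  approx={approx:.2f}  "
          f"relative error={(exact - approx) / exact:.2e}")
```

The relative error shrinks steadily as $N$ grows, which is exactly what the error bound derived below predicts.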


If we define

\begin{equation} S = \sum_{n=1}^{N} \ln(n) \end{equation}

we have an upper Riemann sum for $\int_1^N \ln(x)\,dx$ with $\Delta x = 1$: each term $\ln(n)$ is the value at the right endpoint of the interval $[n-1, n]$, and the $n = 1$ term contributes nothing since $\ln(1) = 0$. So we basically have the following approximation:

\begin{equation} S = \sum_{n=1}^{N} \ln(n)\, \Delta x \approx \int_{1}^N \ln(x)\, dx = N\ln(N) - N + 1 \end{equation}

Dropping the constant $+1$, which is negligible for large $N$, gives Stirling's formula.
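To make the Riemann-sum picture concrete, here is a small check of my own (with $N$ chosen arbitrarily) comparing the sum $S$ against the integral, which evaluates to $N\ln(N) - N + 1$:

```python
import math

N = 1000
S = sum(math.log(n) for n in range(1, N + 1))   # S = Σ ln(n)
integral = N * math.log(N) - N + 1              # ∫₁^N ln(x) dx
print(f"S = {S:.3f}, integral = {integral:.3f}, "
      f"difference = {S - integral:.3f}")
```

For $N = 1000$ the two quantities differ by only a few units, while both are close to $6000$, so the approximation is already very good.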

Now, by the mean value theorem for integrals, applied to $\ln(x)$ on each interval $(n, n+1)$:

\begin{equation} \forall n \in \{1,\dots,N-1\}\ \exists\, c_n \in (n,n+1): \quad S' = \sum_{n=1}^{N-1} \ln(c_n)\, \Delta x = \int_{1}^{N} \ln(x)\, dx \end{equation}


We may easily check how good this approximation is by bounding the error term. Since $\ln$ is increasing, $\ln(n) \leq \ln(c_n) \leq \ln(n+1)$ on each interval, and since $\ln(1) = 0$ we can also write $S = \sum_{n=1}^{N-1} \ln(n+1)$. The difference then telescopes:

\begin{equation} |S-S'| \leq \sum_{n=1}^{N-1} \left( \ln(n+1) - \ln(n) \right) = \ln(N) \end{equation}

This error grows very slowly. In fact, if $N \approx 10^{24}$, i.e. roughly the number of molecules in a glass of water, the error is at most $\ln(10^{24}) \approx 55$, which is a minuscule error relative to $N\ln(N) - N \approx 5.4 \times 10^{25}$.