## Motivation:

While living organisms are better described by non-equilibrium thermodynamics, I show that if we take an ergodic perspective on the Second Law of Thermodynamics, then one of its direct implications, the Principle of Minimum Energy, applies to all physical processes occurring within a living system.

It follows that the Principle of Minimum Energy is applicable to information processing costs that may be identified with animal cognition.

## The Second Law:

Entropy, which defines the thermodynamic arrow of time, is a measure of disorder within a system. Denoted $$\Delta S_{A}$$, a positive change in entropy indicates that time is asymmetric and that a system will become increasingly disordered as time goes by.

To understand how entropy changes, it is useful to decompose it into two parts that must be considered simultaneously. We may do this using basic notions from the algebraic topology of a manifold $$A$$: its boundary $$\partial A$$ and its interior $$\text{Int}(A)$$. In this setting, the entropy change of the surroundings $$\Delta S_{\partial A}$$ is identified with the boundary $$\partial A$$, and the entropy change of the system itself $$\Delta S_{\text{Int}(A)}$$ is identified with the interior of $$A$$. It follows that the entropy change of any system is given by:

\begin{equation} \Delta S_{A}= \Delta S_{\partial A} + \Delta S_{\text{Int}(A)} \end{equation}

where $$\Delta S_A$$ equals zero if its processes are completely reversible.
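As an illustration of (1), consider a textbook irreversible process: a parcel of water cooling in a large reservoir. The interior entropy falls while the boundary entropy rises by more, so the total is non-negative. A minimal numerical sketch, where all parameter values are my own assumptions:

```python
import math

# Assumed example: 1 kg of water cooling from 350 K to 300 K
# in a large reservoir held at T_env = 300 K.
m, c = 1.0, 4186.0              # mass (kg), specific heat (J/(kg*K))
T_i, T_f, T_env = 350.0, 300.0, 300.0

# Entropy change of the interior (the water itself):
# integrate dS = m*c*dT/T from T_i down to T_f.
dS_int = m * c * math.log(T_f / T_i)      # negative: the water gets more ordered

# Heat dumped into the surroundings, absorbed at constant T_env:
Q = m * c * (T_i - T_f)
dS_boundary = Q / T_env                    # positive, and larger in magnitude

dS_total = dS_boundary + dS_int
print(dS_int, dS_boundary, dS_total)
assert dS_total >= 0                       # the Second Law, in the form of (1)
```

The irreversibility shows up as a strictly positive `dS_total`; it would approach zero only if the cooling were carried out quasi-statically.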

One particular consequence of (1) is that biological evolution is in agreement with the Second Law if:

\begin{equation} \frac{dS_{Sun}}{dt} + \frac{dS_{Life}}{dt} \geq 0 \end{equation}

However, biological evolution is better described by non-equilibrium thermodynamics, so we should be skeptical of (2), quite apart from the fact that it is impossible to calculate in practice.

## The Second Law as a Principle of Maximum Entropy:

If $$V$$ is an extensive variable, $$U$$ is the internal energy available for work and $$S$$ is the entropy, we have:

\begin{equation} S = S(U,V) \end{equation}

and, for an isolated system, (1) implies that:

\begin{equation} \forall t, \frac{dS}{dt} \geq 0 \end{equation}

so entropy is maximised at equilibrium. In the asymptotic limit the system will be in thermal equilibrium with its surroundings, so if the system is isolated and contains a finite amount of energy $$E$$ we have:

\begin{equation} \lim_{t \to \infty} S(t) = k_B \ln (\Omega(E)) \end{equation}

where $$\Omega(E)$$ is the number of quantum states associated with $$E$$. At thermal equilibrium, or the heat death of the system, all state transitions are equally likely on average, which means that no useful work is being done, as the system is not biased towards evolving in one way or another.
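The microstate count $$\Omega(E)$$ can be made concrete with a toy two-level system of $$N$$ spins, where $$\Omega = \binom{N}{n}$$ for $$n$$ excited spins. A minimal sketch, with all numbers chosen purely for illustration:

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K

def boltzmann_entropy(N, n):
    """Entropy S = k_B * ln(Omega) for N two-level spins with n excited.

    Omega(E) = C(N, n) counts the microstates with energy E = n * eps.
    """
    omega = math.comb(N, n)
    return k_B * math.log(omega)

# Entropy is largest for the macrostate realised by the most microstates:
N = 100
S_half = boltzmann_entropy(N, N // 2)   # maximally disordered macrostate
S_low = boltzmann_entropy(N, 1)         # nearly ordered macrostate
assert S_half > S_low
```

The half-excited macrostate dominates because $$\binom{N}{n}$$ peaks at $$n = N/2$$, which is the combinatorial content of "maximally disordered" here.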

Now, a direct implication of the Principle of Maximum Entropy is that at equilibrium $$S(U,V)$$ is stationary, so we have:

\begin{equation} \Big(\frac{\partial S}{\partial V} \Big)_U = 0 \end{equation}

\begin{equation} \Big(\frac{\partial^2 S}{\partial V^2} \Big)_U < 0 \end{equation}

and so, by the first law, energy is conserved and, by the second law, nature will drive the system from states of high potential energy to states of low potential energy.
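A minimal numerical sketch of the stationarity condition (6): for two parcels of ideal gas sharing a fixed total volume at fixed $$U$$, the $$V$$-dependent part of the entropy of each parcel is $$S_i \propto N k_B \ln V_i$$, and the total entropy peaks where the derivative with respect to the partition position vanishes. The setup and units are assumptions for illustration:

```python
import math

N_kB = 1.0      # N * k_B in arbitrary units (assumed for the sketch)
V_total = 2.0   # fixed total volume shared by the two compartments

def total_entropy(V1):
    """V-dependent part of S at fixed U for two ideal-gas compartments."""
    V2 = V_total - V1
    return N_kB * math.log(V1) + N_kB * math.log(V2)

# Scan partition positions: entropy peaks where (dS/dV1)_U = 0,
# i.e. at the midpoint V1 = V_total / 2.
positions = [v / 1000 for v in range(1, 2000)]
best = max(positions, key=total_entropy)
assert abs(best - V_total / 2) < 1e-2
```

The maximum at the midpoint is where the pressures of the two compartments balance, which is exactly the equilibrium condition (6) expresses.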

## The Second Law implies the Principle of Minimum Energy:

We may note that since $$U = U(S,V)$$, we have the exact differential:

\begin{equation} dU = \Big(\frac{\partial U}{\partial S} \Big)_V \cdot dS + \Big(\frac{\partial U}{\partial V} \Big)_S \cdot dV \end{equation}

which is exactly the first and second laws combined:

\begin{equation} dU = T \cdot dS - P \cdot dV \end{equation}
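This combined form can be verified by finite differences for a monatomic ideal gas, for which $$U(S,V) \propto e^{2S/(3Nk_B)}\,V^{-2/3}$$, so that $$\big(\frac{\partial U}{\partial S}\big)_V = T$$ and $$\big(\frac{\partial U}{\partial V}\big)_S = -P$$. A sketch, with constants assumed in arbitrary units:

```python
import math

N_kB = 1.0   # N * k_B in arbitrary units (assumed)
A = 1.0      # overall energy scale (assumed)

def U(S, V):
    """Monatomic ideal gas: U(S, V) = A * exp(2S / (3 N kB)) / V^(2/3)."""
    return A * math.exp(2 * S / (3 * N_kB)) / V ** (2 / 3)

S0, V0, h = 1.0, 1.0, 1e-6
U0 = U(S0, V0)

# Central finite-difference partial derivatives of U:
dU_dS = (U(S0 + h, V0) - U(S0 - h, V0)) / (2 * h)   # should equal T
dU_dV = (U(S0, V0 + h) - U(S0, V0 - h)) / (2 * h)   # should equal -P

T = 2 * U0 / (3 * N_kB)   # from U = (3/2) N kB T
P = 2 * U0 / (3 * V0)     # from the ideal gas law P V = N kB T

assert abs(dU_dS - T) < 1e-6
assert abs(dU_dV + P) < 1e-6
```

The two assertions confirm that the coefficients of $$dS$$ and $$dV$$ in the differential really are $$T$$ and $$-P$$ for this equation of state.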

Now, at equilibrium the Principle of Minimum Energy implies that $$dU = 0$$, so we have:

\begin{equation} \Big(\frac{\partial U}{\partial S} \Big)_V \cdot dS + \Big(\frac{\partial U}{\partial V} \Big)_S \cdot dV = 0 \end{equation}

which implies:

\begin{equation} \Big(\frac{\partial U}{\partial V} \Big)_S = -T \cdot \Big(\frac{\partial S}{\partial V} \Big)_U = 0 \end{equation}

since $$\Big(\frac{\partial S}{\partial V} \Big)_U = 0$$ at equilibrium (6).

If we now define:

\begin{equation} \Phi := \Phi(U,V) = -T \cdot \Big(\frac{\partial S}{\partial V} \Big)_{U} \end{equation}

we may compute the exact differential:

\begin{equation} d\Phi = \frac{\partial \Phi}{\partial U} \cdot dU + \frac{\partial \Phi}{\partial V} \cdot dV \end{equation}

and since entropy is maximised at equilibrium, $$\Big(\frac{\partial^2 S}{\partial V^2}\Big)_U < 0$$ there, so the potential energy is at a minimum when entropy is maximised:

\begin{equation} \Big(\frac{\partial^2 U}{\partial V^2} \Big)_{S} = -T \cdot \Big(\frac{\partial^2 S}{\partial V^2} \Big)_{U} > 0 \end{equation}

## Is the Second Law of Thermodynamics applicable to living systems?

A living system is never actually in equilibrium, so it is best described by non-equilibrium thermodynamics. However, given that an organism transitions between different equilibria where homeostasis is maintained, at each equilibrium we may apply the Principle of Minimum Energy to the totality of its information-processing costs. This is a key insight behind the application of Expected Kolmogorov Complexity to animal cognition.

For readers who are familiar with the Free Energy Principle of Karl Friston, I will admit that there are superficial similarities. However, while his theory predicts that living systems have a tendency to reduce disorder, I take it for granted that every day that passes in the life of an animal is a day closer to its death, so entropy is increasing over time. This is a basic fact, unlike Friston’s negentropy theory, which may be traced back to Schrödinger.

Unlike Friston’s theory, the Second Law is universal for all physical systems. In particular, it is consistent with the theory of Self-Organised Criticality, which holds that most living systems are, at any instant, in a maximally disordered state relative to the maintenance of the operations necessary for the living system to function.