Introduction:

Physical sciences have often made progress by identifying useful coordinate transformations. Whether we are using Euclidean geometry for Newton's calculus or Riemannian geometry for General Relativity, the objective is to find a parsimonious parametrization that disentangles the variables responsible for the data-generating process. In the past this required human ingenuity; today, thanks to computers, we can run algorithms that discover useful parametrizations for a given dataset.

If our data happens to be organised into rows and columns, i.e. as a matrix, one very powerful algorithm is the Singular Value Decomposition (SVD). As a concrete application, we may approximate the eigenfunctions of the Koopman Operator via the SVD. This is possible because the Koopman Operator is linear, and a linear operator restricted to a finite-dimensional subspace has a matrix representation.
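To make this concrete, here is a minimal sketch of applying the SVD to a data matrix with numpy; the random data and variable names are illustrative assumptions, not part of the discussion above.

```python
# Minimal sketch: SVD of a data matrix whose rows are variables and whose
# columns are observations. The data here is random and purely illustrative.
import numpy as np

rng = np.random.default_rng(0)
data = rng.standard_normal((5, 200))              # 5 variables, 200 samples
U, s, Vt = np.linalg.svd(data, full_matrices=False)
# U: orthonormal directions in variable space, s: singular values,
# Vt: orthonormal coefficients across samples.
assert np.allclose(data, U @ np.diag(s) @ Vt)     # exact reconstruction
```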

Dynamic Mode Decomposition:

Given a time series of data \(X\), if we split \(X\) into time-shifted snapshot matrices \(X_n\) and \(X_{n+1}\), the goal of DMD is to identify the linear operator \(\Lambda\) (in matrix form) such that:

\begin{equation} X_{n+1} = \Lambda \cdot X_n \end{equation}

The SVD approach to DMD is as follows:

\(1.\) Compute the SVD of \(X_n\) so we have \(X_n = U \Sigma W^T\)

\(2.\) In order to calculate the approximation of \(\Lambda\) that minimises the Frobenius norm of the residual, we use the SVD of \(X_n\):

\begin{equation} \Lambda = \arg\min_{\Lambda} \lVert X_{n+1} - \Lambda \cdot X_n \rVert_F = X_{n+1} \cdot W \cdot \Sigma^{-1} \cdot U^T \end{equation}

Projecting \(\Lambda\) onto the columns of \(U\) gives the reduced matrix:

\begin{equation} \tilde{\Lambda} = U^T \cdot \Lambda \cdot U = U^T \cdot X_{n+1} \cdot W \cdot \Sigma^{-1} \end{equation}

and we compute the eigenvalues \(\lambda_i\) and eigenvectors \(y_i\) of \(\tilde{\Lambda}\).

\(3.\) The \(i\)th DMD eigenvalue is then \(\lambda_i\) and the \(i\)th DMD mode (\(\approx\) Koopman Mode) is \(U y_i\).
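For concreteness, the three steps above may be written down directly in numpy. The following is a minimal sketch rather than a reference implementation: the snapshot data, the truncation rank \(r\) and the function name dmd are illustrative assumptions.

```python
# A minimal numpy sketch of the three DMD steps above. The snapshot data, the
# truncation rank r, and the function name dmd are illustrative assumptions.
import numpy as np

def dmd(X, r=None):
    """DMD eigenvalues and (projected) DMD modes of a snapshot matrix X."""
    X_n, X_n1 = X[:, :-1], X[:, 1:]              # time-shifted snapshot matrices
    U, s, Wt = np.linalg.svd(X_n, full_matrices=False)   # step 1: X_n = U Sigma W^T
    if r is not None:                            # optional truncation to rank r
        U, s, Wt = U[:, :r], s[:r], Wt[:r, :]
    # Step 2: reduced operator Lambda_tilde = U^T X_{n+1} W Sigma^{-1}
    Lam_tilde = U.T @ X_n1 @ Wt.T @ np.diag(1.0 / s)
    eigvals, Y = np.linalg.eig(Lam_tilde)        # eigenvalues lambda_i, eigenvectors y_i
    modes = U @ Y                                # step 3: i-th DMD mode is U y_i
    return eigvals, modes

# Example: a rank-2 travelling wave sin(x + t) sampled on a space-time grid.
t = np.linspace(0, 4 * np.pi, 100)
x = np.linspace(-5, 5, 64)
data = np.outer(np.cos(x), np.sin(t)) + np.outer(np.sin(x), np.cos(t))
eigvals, modes = dmd(data, r=2)
print(np.abs(eigvals))   # both ~1: the dynamics are purely oscillatory
```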

Derivation of the Singular Value Decomposition:

Given a matrix \(X \in \mathbb{R}^{m \times n}\) we may form the correlation matrices \(X \cdot X^T \in \mathbb{R}^{m \times m}\) and \(X^T \cdot X \in \mathbb{R}^{n \times n}\). Since these correlation matrices are both real and symmetric, they both have eigendecompositions, and we may show that they share the same non-zero eigenvalues.

Indeed, let’s suppose:

\begin{equation} \exists \vec{u} \in \mathbb{R}^{m},X \cdot X^T \cdot \vec{u} = \lambda \vec{u} \end{equation}

then, defining:

\begin{equation} \vec{v} = X^T \vec{u} \in \mathbb{R}^n \end{equation}

we have:

\begin{equation} X^T \cdot X \cdot \vec{v} = X^T \cdot (X \cdot X^T \cdot \vec{u}) = \lambda X^T \vec{u} = \lambda \vec{v} \end{equation}

so the eigenvectors of \(X \cdot X^T\) and \(X^T \cdot X\) associated with non-zero eigenvalues are mapped onto each other by \(X^T\) and \(X\), and, assuming without loss of generality that \(m \leq n\), we may deduce the following eigendecompositions:

\begin{equation} \exists V \in \mathbb{R}^{n \times m}, X^T \cdot X \cdot V = V \cdot \text{diag}(\vec{\lambda}) \end{equation}

\begin{equation} \exists U \in \mathbb{R}^{m \times m}, X \cdot X^T \cdot U = U \cdot \text{diag}(\vec{\lambda}) \end{equation}

so \(\text{diag}(\vec{\lambda}) \in \mathbb{R}^{m \times m}\).
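As a quick numerical sanity check of this shared-spectrum claim (with an assumed random matrix, purely for illustration):

```python
# Illustrative check (random X assumed, m <= n): the m eigenvalues of X X^T
# coincide with the m largest eigenvalues of X^T X; the remaining n - m are zero.
import numpy as np

rng = np.random.default_rng(1)
m, n = 3, 5
X = rng.standard_normal((m, n))

eig_small = np.sort(np.linalg.eigvalsh(X @ X.T))[::-1]
eig_large = np.sort(np.linalg.eigvalsh(X.T @ X))[::-1]
assert np.allclose(eig_small, eig_large[:m])
assert np.allclose(eig_large[m:], 0)
```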

Furthermore, we may show that all the eigenvalues \(\lambda_i\) are non-negative since \(X^T \cdot X\) is positive-semidefinite:

\begin{equation} \forall z \in \mathbb{R}^n, z^T (X^T \cdot X) z = (Xz)^T (Xz) = \lVert Xz \rVert^2 \geq 0 \end{equation}

and from this it follows that \(\text{diag}(\vec{\lambda}) = \Sigma^2\) for some diagonal matrix \(\Sigma \in \mathbb{R}_{+}^{m \times m}\).
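The positive-semidefiniteness identity is easy to verify numerically as well (the random \(X\) and \(z\) below are illustrative assumptions):

```python
# Illustrative check (random X and z assumed): z^T (X^T X) z = ||X z||^2 >= 0.
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((3, 5))
z = rng.standard_normal(5)
quad = z @ (X.T @ X) @ z
assert np.isclose(quad, np.linalg.norm(X @ z) ** 2) and quad >= 0
```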

Now, we may note that \(X \cdot X^T\) is real and symmetric, so by the spectral theorem its eigenvectors may be chosen orthonormal:

\begin{equation} X \cdot X^T = U \Sigma^2 U^{-1} = (U \Sigma^2 U^{-1})^T \implies U^T = U^{-1} \end{equation}

and, applying the same argument to \(X^T \cdot X\), the columns of \(V\) may also be chosen orthonormal, so \(U\) and \(V\) are both orthogonal matrices.

Combining these results, we have:

\begin{equation} X \cdot X^T = U \Sigma^2 U^T = (U \Sigma V^T) \cdot (V \Sigma U^T) = (U \Sigma V^T) \cdot (U \Sigma V^T)^T \end{equation}
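A short numerical check of the first equality above, using numpy's own SVD to supply \(U\), \(\Sigma\) and \(V\) (the random matrix is an assumption for illustration):

```python
# Illustrative check: V has orthonormal columns and X X^T = U Sigma^2 U^T.
import numpy as np

rng = np.random.default_rng(3)
X = rng.standard_normal((4, 7))
U, s, Vt = np.linalg.svd(X, full_matrices=False)
assert np.allclose(Vt @ Vt.T, np.eye(len(s)))             # V^T V = I
assert np.allclose(X @ X.T, U @ np.diag(s**2) @ U.T)      # X X^T = U Sigma^2 U^T
```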

Finally, we claim that \(X = U \Sigma V^T\). Indeed, if we normalise the eigenvectors so that \(\vec{v_i} = X^T \vec{u_i} / \sigma_i\) (which has unit norm since \(\lVert X^T \vec{u_i} \rVert^2 = \vec{u_i}^T X \cdot X^T \vec{u_i} = \lambda_i = \sigma_i^2\)), then we have:

\begin{equation} X \cdot \vec{v_i} = \frac{1}{\sigma_i} X \cdot X^T \cdot \vec{u_i} = \frac{\lambda_i}{\sigma_i} \vec{u_i} = \sigma_i \vec{u_i} \end{equation}

which implies:

\begin{equation} \vec{u_i}^T \cdot X \cdot \vec{v_i} = \sigma_i \end{equation}

so in matrix form we have:

\begin{equation} U^T \cdot X \cdot V = \Sigma \implies X = U \Sigma V^T \end{equation}

and this concludes our proof that a singular value decomposition exists for any rectangular matrix.
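As a final sanity check on this construction (a sketch with an assumed random matrix, not part of the proof), we can build \(U\) from the eigenvectors of \(X \cdot X^T\), set \(\vec{v_i} = X^T \vec{u_i} / \sigma_i\) as above, and verify numerically that \(X = U \Sigma V^T\):

```python
# Numerical sketch of the existence proof: construct U, Sigma, V exactly as in
# the derivation above and confirm that they reproduce X. Random X assumed.
import numpy as np

rng = np.random.default_rng(0)
m, n = 4, 6                               # m <= n, X assumed to have full row rank
X = rng.standard_normal((m, n))

# Eigendecomposition of the symmetric correlation matrix X X^T.
lam, U = np.linalg.eigh(X @ X.T)          # eigenvalues in ascending order
order = np.argsort(lam)[::-1]             # reorder to descending
lam, U = lam[order], U[:, order]

sigma = np.sqrt(lam)                      # singular values (all positive here)
V = (X.T @ U) / sigma                     # columns v_i = X^T u_i / sigma_i

assert np.allclose(U.T @ U, np.eye(m))            # U is orthogonal
assert np.allclose(V.T @ V, np.eye(m))            # columns of V are orthonormal
assert np.allclose(X, U @ np.diag(sigma) @ V.T)   # X = U Sigma V^T
```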

References:

  1. Bernard Koopman. Hamiltonian Systems and Transformations in Hilbert Space. 1931.

  2. Hassan Arbabi. Introduction to Koopman Operator Theory for Dynamical Systems. MIT. 2020.

  3. Hassan Arbabi, Igor Mezić. Ergodic Theory, Dynamic Mode Decomposition and Computation of Spectral Properties of the Koopman Operator. 2017.