Introduction:

A classical result in the theory of random matrices states that a random matrix whose entries are drawn from a continuous distribution is nonsingular with probability one. This has many important consequences.

One fundamental consequence is that almost every linear model with a square Jacobian matrix is invertible.
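As a quick numerical illustration (a sketch, not a proof), one can sample matrices with i.i.d. continuous entries and observe that none come out exactly singular; the matrix size, entry distribution, and sample count below are arbitrary choices:

```python
import random

def det3(A):
    # 3x3 determinant by cofactor expansion along the first row
    return (A[0][0] * (A[1][1] * A[2][2] - A[1][2] * A[2][1])
          - A[0][1] * (A[1][0] * A[2][2] - A[1][2] * A[2][0])
          + A[0][2] * (A[1][0] * A[2][1] - A[1][1] * A[2][0]))

random.seed(0)
singular = 0
for _ in range(10_000):
    # entries drawn i.i.d. from a continuous (uniform) distribution
    A = [[random.uniform(-1.0, 1.0) for _ in range(3)] for _ in range(3)]
    if det3(A) == 0.0:
        singular += 1

print(singular)  # an exactly singular sample has probability zero, so this is 0
```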

The spectral norm and the largest eigenvalue:

For a random matrix $A$, if $\lambda$ is the largest eigenvalue of $A$ and $\upsilon$ the corresponding unit eigenvector:

\begin{equation} \lVert A \rVert_{2} \geq \lVert A \upsilon \rVert_{2} = \lVert \lambda \upsilon \rVert_{2} = | \lambda | \lVert \upsilon \rVert_{2} = | \lambda | \end{equation}

where

\begin{equation} \lVert A \rVert_{2} = \sqrt{\lambda_{max}(A^T A)} = \sigma_{max}(A) \end{equation}

so the absolute value of the largest eigenvalue of $A$ is less than or equal to its spectral norm.
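This inequality can be checked on a concrete example (the matrix below is an arbitrary illustrative choice). For the upper-triangular matrix $A = \begin{pmatrix} 1 & 2 \\ 0 & 3 \end{pmatrix}$ the eigenvalues are the diagonal entries, and the spectral norm follows from the closed-form eigenvalues of the symmetric matrix $A^T A$:

```python
import math

# A = [[1, 2], [0, 3]] is upper triangular, so its eigenvalues are the
# diagonal entries 1 and 3; the largest eigenvalue is 3.
lam_max = 3.0

# A^T A = [[1, 2], [2, 13]]; for a symmetric 2x2 matrix [[a, b], [b, d]]
# the eigenvalues are (a + d)/2 +/- sqrt(((a - d)/2)**2 + b**2).
a, b, d = 1.0, 2.0, 13.0
mu_max = (a + d) / 2 + math.sqrt(((a - d) / 2) ** 2 + b ** 2)
spectral_norm = math.sqrt(mu_max)  # sigma_max(A) = sqrt(lambda_max(A^T A))

assert spectral_norm >= lam_max  # approximately 3.65 >= 3
```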

Hölder’s inequality and the determinant:

Now, by Hölder’s inequality we have:

\begin{equation} \lVert A \rVert_{2} \leq \sqrt{\lVert A \rVert_{1} \lVert A \rVert_{\infty}} \end{equation}

where $\lVert A \rVert_{1}$ is the maximum absolute column sum and $\lVert A \rVert_{\infty}$ is the maximum absolute row sum:

\begin{equation} \lVert A \rVert_{1} = \max_{1 \leq j \leq n} \sum_{i=1}^m |a_{ij}| \end{equation}

\begin{equation} \lVert A \rVert_{\infty} = \max_{1 \leq i \leq m} \sum_{j=1}^n |a_{ij}| \end{equation}
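A small sketch checking this bound on a concrete matrix (the matrix is an arbitrary choice); the spectral norm is again computed from the closed-form largest eigenvalue of the 2x2 symmetric matrix $A^T A$:

```python
import math

A = [[1.0, -2.0], [3.0, 4.0]]

# maximum absolute column sum and maximum absolute row sum
norm_1 = max(sum(abs(A[i][j]) for i in range(2)) for j in range(2))
norm_inf = max(sum(abs(A[i][j]) for j in range(2)) for i in range(2))

# spectral norm via the larger eigenvalue of the symmetric matrix A^T A
ata = [[sum(A[k][i] * A[k][j] for k in range(2)) for j in range(2)]
       for i in range(2)]
a, b, d = ata[0][0], ata[0][1], ata[1][1]
norm_2 = math.sqrt((a + d) / 2 + math.sqrt(((a - d) / 2) ** 2 + b ** 2))

assert norm_2 <= math.sqrt(norm_1 * norm_inf)  # Hoelder: about 5.12 <= 6.48
```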

Given that the determinant of a matrix is the product of its eigenvalues, we may combine (1) and (3): every entry of $A \in [-N,N]^{n \times n}$ satisfies $|a_{ij}| \leq N$, so each absolute column sum and each absolute row sum is at most $nN$, and

\begin{equation} |\det(A)| = \prod_{i=1}^n |\lambda_i| \leq \lVert A \rVert_{2}^n \leq \sqrt{\lVert A \rVert_{1} \lVert A \rVert_{\infty}}^n \leq (nN)^n \end{equation}

(Hadamard's inequality gives the sharper bound $(\sqrt{n}N)^n$, but $(nN)^n$ suffices here.)

and combining (1) and (3) with the same entrywise bound, every real eigenvalue of such a matrix is confined to a bounded interval:

\begin{equation} \forall A \in [-N,N]^{n \times n}, \quad \det(A - \lambda I_n) = 0 \ \text{and} \ \lambda \in \mathbb{R} \implies \lambda \in [-nN, nN] \end{equation}

so the determinant mapping is defined:

\begin{equation} \det: [-N,N]^{n \times n} \longrightarrow [-(nN)^n,(nN)^n] \end{equation}

and we can also show that this mapping is analytic.
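As a numerical spot-check (with the illustrative choices $n = 3$, $N = 2$): since each entry lies in $[-N, N]$, every absolute row and column sum is at most $nN$, so $|\det(A)| \leq (nN)^n$, and random sampling should never exceed this bound:

```python
import random

def det3(A):
    # 3x3 determinant by cofactor expansion along the first row
    return (A[0][0] * (A[1][1] * A[2][2] - A[1][2] * A[2][1])
          - A[0][1] * (A[1][0] * A[2][2] - A[1][2] * A[2][0])
          + A[0][2] * (A[1][0] * A[2][1] - A[1][1] * A[2][0]))

random.seed(1)
n, N = 3, 2.0
bound = (n * N) ** n  # (nN)^n = 216 for n = 3, N = 2

worst = 0.0
for _ in range(5_000):
    A = [[random.uniform(-N, N) for _ in range(n)] for _ in range(n)]
    worst = max(worst, abs(det3(A)))

assert worst <= bound  # no sampled determinant escapes the bound
```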

The determinant maps sets of positive measure to sets of positive measure:

The determinant is a polynomial in the entries of $A$:

\begin{equation} \det(a_{ij}) = \sum_{\sigma \in S_n} \big( \text{sgn}(\sigma) \prod_{i=1}^n a_{i,\sigma_i} \big) \end{equation}
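The Leibniz formula above can be implemented directly (a didactic sketch; its factorial cost makes it impractical beyond small $n$):

```python
from itertools import permutations

def sign(sigma):
    # sgn(sigma): parity of the permutation via its inversion count
    inv = sum(1 for i in range(len(sigma))
                for j in range(i + 1, len(sigma)) if sigma[i] > sigma[j])
    return -1 if inv % 2 else 1

def det_leibniz(A):
    # sum over all sigma in S_n of sgn(sigma) * prod_i a_{i, sigma(i)}
    n = len(A)
    total = 0.0
    for sigma in permutations(range(n)):
        term = float(sign(sigma))
        for i in range(n):
            term *= A[i][sigma[i]]
        total += term
    return total

print(det_leibniz([[1.0, 2.0], [3.0, 4.0]]))  # 1*4 - 2*3 = -2.0
```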

so if we define the set of singular matrices:

\begin{equation} S = \{A \in [-N,N]^{n \times n}: \det(A)=0\} \end{equation}

we note that $\det(S) = \{0\}$ is a set of measure zero; since non-constant analytic functions map sets of positive measure to sets of positive measure, $S$ must itself be a set of measure zero.