The extrinsic geometry of deep rectifier networks
Introduction:
Let’s suppose that one day you could somehow visualize what was going on inside deep neural networks (with ReLU activations, of course) while they were being fed data at an insane rate by GPUs. What would you see?
If you embed the neural network’s calculations in some high-dimensional affine space, i.e. one lacking an origin, then you would probably see the equivalent of high-dimensional particles hurtling at an insane rate: information collapsing into dot products at the nodes of one layer before fanning out again in unpredictable directions due to the affine transformations at the next layer. Unpredictable to you, because as an external observer you have no idea what’s going on…but from this extrinsic perspective it is still possible to infer a lot, even if your choice of ambient space might be a nonsensical parametrization with respect to the neural network.
As a result of careful analysis you might realise the following:

The latent space of a deep rectifier network is an orthogonal function space, and the network functions by decorrelating input signals. This became clear to me a few weeks ago.

Besides the algebraic structure, you might also notice that lower levels of the network may be identified with reusable geometric transformations that are used exponentially more often than expressions at the higher levels of the network. So the deep rectifier network may be identified with a special kind of geometric decision tree.

In higher-dimensional spaces geometric transformations correspond to Jacobians, provided we can justify the transition from discrete-time to continuous-time-and-space geometric decision trees.

From the perspective of function approximation, I show that this transition is justified and that function approximation implies manifold learning. I also explain that information concerning the manifold must be encoded by sequences of affine transformations, which are trajectories of information in latent space.

Finally, I show that the trajectory formalism leads to a natural statistical relation between linear interpolations in parameter space and the non-convexity of deep rectifier networks.
Given that many readers of this article may not have read my two previous articles on deep rectifier networks, my objective is to first summarize the function space and geometric decision tree perspectives before motivating the trajectory-in-latent-space perspective. I then show that the structure of trajectories in latent space implies a natural statistical relation between linear interpolations in parameter space and non-convexity. This explanation is accompanied by calculations using sequences of random matrices in Julia Lang.
The Orthogonal Function Space structure of the latent space:
Mathematically, we may introduce a deep rectifier network $f$ as a composition of affine maps $T_l(x) = W_l x + b_l$ and ReLU gates $\sigma(z) = \max(z, 0)$ applied component-wise:
\begin{equation} f := \sigma \circ T_L \circ \sigma \circ T_{L-1} \circ \dots \circ \sigma \circ T_1 \end{equation}
where the parameter space is defined as follows:
\begin{equation} \theta = \big\{\ W_l \in \mathbb{R}^{n_l \times n_{l-1}}, b_l \in \mathbb{R}^{n_l} : l \in [L] \big\} \end{equation}
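As a concrete sketch of this definition (in Python with NumPy rather than the Julia used later in this article; the layer widths are illustrative), a forward pass alternates affine maps with ReLU gates:

```python
import numpy as np

def relu(z):
    # The ReLU gate: negative pre-activations are zeroed out
    return np.maximum(z, 0.0)

def forward(x, params):
    # params is a list of (W_l, b_l) pairs; each layer is an affine
    # transformation T_l(x) = W_l x + b_l followed by the ReLU gate
    for W, b in params:
        x = relu(W @ x + b)
    return x

rng = np.random.default_rng(0)
sizes = [3, 5, 5, 2]  # illustrative layer widths n_0, ..., n_L
params = [(rng.normal(size=(m, n)), rng.normal(size=m))
          for n, m in zip(sizes[:-1], sizes[1:])]

y = forward(rng.normal(size=sizes[0]), params)
print(y.shape)  # (2,)
```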
so if there are $L$ layers and $N$ nodes in total, you may observe that:
- The ReLU activation serves as a gating mechanism for a deep network with $N$ nodes.
- This gating mechanism decomposes the latent space of a deep rectifier network into linear feature maps $\phi_i$.
- Each of these feature maps has compact support and their domains are pairwise disjoint.
- It follows that the latent space of $f$ forms an orthogonal function space.
- The continuity and compact support of the $\phi_i$ implies that the $\phi_i$ are square integrable, so the latent space of $f$ forms a Hilbert space.
This is interesting because many natural signals, e.g. sounds, inhabit Hilbert spaces.
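The decomposition into the $\phi_i$ can be made concrete: the ReLU gating pattern (which nodes fire) assigns every input to exactly one region, and within a region the network acts as a single affine map. A small NumPy sketch, with illustrative sizes and a single hidden layer:

```python
import numpy as np

rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(4, 2)), rng.normal(size=4)
W2, b2 = rng.normal(size=(3, 4)), rng.normal(size=3)

def pattern(x):
    # Binary gating pattern: which hidden nodes fire for input x.
    # Each distinct pattern indexes one linear feature map phi_i.
    return tuple((W1 @ x + b1 > 0).astype(int))

def net(x):
    return W2 @ np.maximum(W1 @ x + b1, 0.0) + b2

# Within one region (fixed gating pattern) the network is exactly affine,
# so increments along a tiny direction d scale linearly.
x = rng.normal(size=2)
d = 1e-6 * rng.normal(size=2)
lhs = net(x + 0.5 * d) - net(x)
rhs = 0.5 * (net(x + d) - net(x))
print(pattern(x) == pattern(x + d), np.allclose(lhs, rhs))
```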
Deep Rectifier Networks as Geometric Decision Trees:
The Hilbert space perspective is insightful, as it shows that deep rectifier networks function by decorrelating input signals, but this algebraic perspective doesn’t explain why the depth or width of a rectifier network may be important. For this we need to venture beyond algebraic structure and think in terms of geometry.
A colleague suggested that I have a look at ‘Tropical Geometry of Deep Neural Networks’ [2], which supposedly explains why deeper networks are exponentially more expressive, but as I have little familiarity with tropical geometry I decided to follow my own reasoning first. Here’s what I found:

A deep rectifier network may be identified with an $L$-level decision tree for solving geometric problems in a sequential manner, with lower layers of the network forming reusable subexpressions.

Each expression is an affine transformation, and all geometric transformations in Euclidean space are affine. It follows that deep rectifier networks may be viewed as geometric programs where each subexpression is a geometric operation.

At each layer of the network the candidate expressions are a subset of the power set of the $n_l$ distinct nodes at that layer. Hence the importance of network width for versatile geometric reasoning.

Furthermore, each subexpression at layer $l$ may be reused repeatedly by subexpressions at higher levels of the decision tree. Montufar [1] gives a similar argument but he identifies expressions with ‘folds’, which is incorrect in my opinion.

The importance of the last statement is clear when we think in terms of geometric transformations. Deep rectifier networks permit an exponential number of possible sequences of geometric transformations of length $L$, and therefore the complexity of the set of possible transformations of the latent space is proportional to network depth.
In summary, depth gives us geometric complexity and width gives us versatility. But how should we think of geometric transformations in higher-dimensional space?
Geometric transformations in higher-dimensional space:
The best way to understand spatial deformations in higher-dimensional affine spaces is to go to the continuous-space limit, where we may think in terms of the Jacobian and the determinant of the Hessian, which tells us something about local curvature.
Given an affine transformation $T$ from $\mathbb{R}^n$ to $\mathbb{R}^m$:
\begin{equation} x \overset{T}\mapsto Ax + b \end{equation}
the Jacobian of $T$ is simply $A$.
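This is easy to check numerically: a finite-difference Jacobian of an affine map recovers $A$ at any base point. A quick sketch with a random $A$ and $b$ for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
A, b = rng.normal(size=(3, 3)), rng.normal(size=3)
T = lambda x: A @ x + b

# Finite-difference Jacobian at an arbitrary base point: for an affine
# map the result is independent of the point and equals A (up to rounding).
x0, h = rng.normal(size=3), 1e-6
J = np.column_stack([(T(x0 + h * e) - T(x0)) / h for e in np.eye(3)])
print(np.allclose(J, A, atol=1e-4))  # True
```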
The reason why this continuous-space approximation is valid is that as network depth increases we may think in terms of smooth trajectories in latent space. The geometry of these trajectories is essential for both function approximation and manifold learning.
From discrete-time to continuous-time-and-space geometric decision trees:
Sequences of affine transformations, which channel trajectories in latent space, may not be reduced to a single affine transformation: the former tells you how space was traversed whereas the latter tells you only the origin and destination of the sequence. We can make this argument more precise by quantifying the information gained at each level of the geometric decision tree.
Given that the output at level $i$ tells you everything you need to know about the remaining nodes, the information gained at the $i$th level of the geometric decision tree is exactly:
\begin{equation} \log_2\big(2^{N - \sum_{k=1}^i n_k}\big) = N - \sum_{k=1}^i n_k \end{equation}
For these reasons we may argue that trajectories capture important relative information, and in the next section I explain that they encode information concerning the manifold of $X$.
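Reading the exponent as $N - \sum_{k=1}^{i} n_k$, i.e. the number of nodes not yet resolved after level $i$, a quick numeric check with purely illustrative layer widths:

```python
# Illustrative layer widths n_k for a 3-level geometric decision tree
widths = [4, 3, 2]
N = sum(widths)  # total number of nodes, N = 9

# Bits gained at each level i: N - sum of the widths up to and including i
gained = [N - sum(widths[:i + 1]) for i in range(len(widths))]
print(gained)  # [5, 2, 0]
```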
Function approximation as manifold learning, or how to see the forest from the trees:
Thinking in terms of functions and function spaces is generally more useful than reasoning about networks with particular parametrizations. One reason for this is that if we assume that a fully-connected network has $N$ layers and $n_i$ nodes in layer $i$, there are:
\begin{equation} \prod_{i=1}^N n_i ! \end{equation}
layer-wise permutations that result in functions equivalent to $f$. This is clear when you think about how dot products encode no information about summation order.
Another reason why the function approximation perspective is insightful is that it allows us to reason about the deformation of latent spaces as we increase network depth. These intricate deformations in latent space are responsible for the usefulness of the Hilbert space structure of the latent space.
In this context, let $T_n$ denote an affine transformation and $X_0$ denote a signal randomly sampled from $X$. Then:
\begin{equation} X_n := T_n(X_{n-1}) \end{equation}
denotes an element from an affine space.
Using (6) we may define a sequence of affine transformations in latent space:
\begin{equation} T^N := T_N \circ T_{N-1} \circ \dots \circ T_1 \end{equation}
If we assume that function approximation capacity improves with network depth, these sequences converge pointwise:
\begin{equation} \lim_{N \to \infty} T^N(X_0) = F(X_0)=Y_0 \end{equation}
where $F$ is the function to be approximated.
Now, from a statistical perspective, function approximation requires minimising the conditional entropy:
\begin{equation} H(Y|X) = H(Y,X) - H(X) \geq 0 \end{equation}
and this means that learning the joint distribution requires learning everything there is to know about the structure of $X$. Intuitively this makes sense, as exploiting the structure of spatio-temporal signals makes it easier to decorrelate them in a useful way. It follows that discovering informative trajectories in latent space is equivalent to discovering a useful, but not necessarily unique, Hilbert space structure for the latent space. This is how I see the forest from the trees.
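The identity $H(Y|X) = H(Y,X) - H(X)$ is easy to check on a small discrete joint distribution; the 2×2 joint table below is an arbitrary illustration:

```python
import numpy as np

# Arbitrary 2x2 joint distribution p(x, y), chosen for illustration
p_xy = np.array([[0.3, 0.2],
                 [0.1, 0.4]])

def H(p):
    # Shannon entropy in bits, ignoring zero-probability cells
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

p_x = p_xy.sum(axis=1)                # marginal p(x)
H_cond = H(p_xy.flatten()) - H(p_x)   # H(Y|X) = H(Y,X) - H(X)
print(H_cond >= 0)  # True: conditional entropy is non-negative
```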
Having motivated the importance of trajectories in latent space, I’d like to show how they explain why deep networks with $N$ layers have increasingly non-convex learning behaviour as $N$ becomes large.
The statistical relation between linear variations in parameter space and non-convexity:
Let’s suppose $\phi_i$ denotes a particular feature map in an $N$-layer deep rectifier network:
\begin{equation} \phi_i := T_N \circ T_{N-1} \circ \dots \circ T_1 \end{equation}
and let’s suppose $\hat{T}$ denotes an affine mapping:
\begin{equation} x \overset{\hat{T}}\mapsto Ax + b \end{equation}
If we consider the scenario where $\hat{T}$ approximates $\phi_i$ on a dataset $\bar{X}$, can we conclude that:
\begin{equation} \phi_i \rvert_{x_0 \in \bar{X}} \equiv \hat{T} \rvert_{x_0 \in \bar{X}} \end{equation}
Well, the answer is no, for two important reasons:
- First, we lose essential information about the trajectory of $x_0$ in latent space, which is encoded by the sequence of affine transformations in $\phi_i$.
- Second, if we perturb the parameters of $\hat{T}$ we will get a linear effect with probability 1.0, so $\hat{T}$ is convex w.r.t. its parameters, whereas a slight perturbation of the parameters of $\phi_i$, using backprop for example, leads to a nonlinear effect with probability tending to 1.0 as $N$ becomes large.
The second point implies that $\phi_i$ is unlikely to be convex with respect to linear interpolations in parameter space as $N$ becomes large. Using Julia Lang, I found this to be the case by constructing chains of Xavier-initialised 3×3 matrices and checking whether a weak version of the usual convexity inequality was satisfied:
\begin{equation} \forall \theta_1, \theta_2 \in \theta: \lVert T_{t \theta_1 + (1-t)\theta_2}(n) \rvert_{x \in X} \rVert_{\infty} \leq \lVert t T_{\theta_1}(n) \rvert_{x \in X} + (1-t) T_{\theta_2}(n) \rvert_{x \in X} \rVert_{\infty} \end{equation}
where $t \in [0,1]$ and $T_\theta$ is identified with a sequence of Xavier-initialised, i.e. random, affine transformations where the bias term was set to a constant vector of 0.1:
\begin{equation} T_\theta := X_n \circ X_{n-1} \circ \dots \circ X_1 \end{equation}
and, as expected, the probability that this inequality was satisfied decreased quickly as the length of the sequence was allowed to increase.
Note: The code is available as a gist on Github.
Discussion:
I think there is a false axiom embedded in this whole convex vs. non-convex discussion. We are somehow assuming that there exists a canonical parametrization of neural networks and that this parametrization is linear. Yet this extrinsic view merely reflects the observer’s Euclidean bias.
It makes more sense to take an intrinsic view and understand how the neural network reparametrizes the latent space in order to make it linear with respect to the intrinsic geometry appropriate for the manifold on which $X$ happens to reside. The learning process essentially involves approximating the structure of spatio-temporal signals.
In fact, this line of reflection leads me to the following conjecture concerning deep learning:
- Convexification: The early stages of learning involve exploring and selecting useful parametrizations from a large number of possible parametrizations of the space.
- Optimisation: The later stages of learning involve fine-tuning a chosen parametrization.
Furthermore, I suspect that these two learning regimes have different dynamics that can be analysed, and that such analysis is key to developing a powerful theory of intrinsic geometry applicable to a variety of spatio-temporal signals. This would lead to more powerful learning algorithms and statistical models whose learning and inferential mechanisms we actually understand, because we would have a sequence of geometric transformations that may lead to arbitrarily complex trajectories in latent space.
I’d like to end this discussion by noting that the history of physics has made considerable progress largely by identifying suitable parametrizations for general classes of natural signals. In the case of macroscopic motion through space, Galilean reference frames were sufficient. For general relativity, on the other hand, Riemannian geometry proved essential. But that only became clear because Einstein, Lorentz and others made an enormous effort to understand what was going on.
References:
1. Montufar, G. et al. On the Number of Linear Regions of Deep Neural Networks. 2014.
2. L. Zhang, G. Naitzat & Lek-Heng Lim. Tropical Geometry of Deep Neural Networks. 2018.