*Figure: dendrite morphologies exhibit considerable variation (taken from [8]).*

## Introduction:

Due to the compositional structure of neural computation, if dendritic trees in single neurons couldn’t compute interesting functions, the brain wouldn’t be able to generate any complex behaviour. To determine what dendrites can compute, we need to characterise the expressive power, computational complexity and function spaces associated with dendritic computation. Eventually we would also need to take dynamics into account to determine when exactly these computations should happen.

As function space identification is very hard, we may reduce the complexity of our task by identifying scaling laws for the time-complexity and expressive power of dendritic computation.

## Dendrites as functions computable on binary trees:

To a first-order approximation, the typical dendrite has the morphology of a random binary tree [6]. However, as shown in the illustration, this still allows considerable diversity in the range of dendrite morphologies. So how should we proceed with our analysis?

It’s worth noting that every random binary tree is embedded in a complete binary tree, i.e. a binary tree in which every internal node has exactly two children and all leaves sit at the same depth. Furthermore, if functions computable on a binary tree are compositions of simpler functions computable at each node, then any function computable on a random binary tree is also computable on a complete binary tree.

Complete binary trees are also special in the sense that functions computable on them have maximal expressive power. Assuming this argument is correct, to figure out what functions are computable by dendrites we may focus on the analysis of functions computable on complete binary trees.
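The embedding claim can be illustrated with a minimal sketch: an irregular tree, where one input feeds directly into the root, computes the same value on a complete tree once the short branch is padded with a "wire" node that simply passes its input through (the function names here are illustrative, not from the source).

```python
def first(a, b):
    """'Wire' node: ignores its second input and passes the first through."""
    return a

# irregular tree: the root combines g(x1, x2) = x1 + x2 with the raw input x3
def irregular(x1, x2, x3):
    return max(x1 + x2, x3)

# complete tree of depth 3: the bare x3 branch is padded with a wire node,
# so every root-to-leaf path now has the same length
def complete(x1, x2, x3):
    return max(x1 + x2, first(x3, x3))

assert irregular(1, 2, 5) == complete(1, 2, 5)  # both give 5
```

Padding with identity-like nodes never changes the computed value, which is why the analysis may restrict attention to complete trees.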

## The expressiveness of functions computable on trees:

Let’s define a function computable on a $k$-ary tree as a composition of simpler computable functions defined at each node, such that a function of this kind defined on a binary tree of depth $N$ receives $2^{N-1}$ inputs at its leaves:

*Figure: a tree function.*
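As a concrete sketch of this definition, the following evaluates a tree function bottom-up: each internal node applies a two-argument function to its children, and the leaves hold the $2^{N-1}$ inputs (the representation as a list of levels is an assumption made for illustration).

```python
def eval_tree(level_fns, inputs):
    """Evaluate a function computable on a complete binary tree, bottom-up.

    inputs:    the 2**(N-1) leaf values of a depth-N tree
    level_fns: one list of two-argument node functions per internal level,
               root level first (level j holds 2**j functions)
    """
    values = list(inputs)
    for fns in reversed(level_fns):          # deepest internal level first
        values = [f(values[2 * i], values[2 * i + 1])
                  for i, f in enumerate(fns)]
    return values[0]                          # the root's output

# depth N = 3: four leaf inputs, two internal levels (1 + 2 node functions)
fns = [[lambda a, b: a + b],                  # root
       [lambda a, b: a * b, max]]             # level below the root
print(eval_tree(fns, [1, 2, 3, 4]))           # (1*2) + max(3, 4) = 6
```

The composed function here is $F(x_1,\dots,x_4) = x_1 x_2 + \max(x_3, x_4)$, a composition of simpler functions at each node, exactly as in the definition above.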

Intuitively, if there are $k^n$ node functions at the $n$th level, the expressiveness of $F_k^N$ on a tree of depth $N$ should grow on the order of:

$$\sim k^N$$

### Probabilistic argument using Kolmogorov Complexity:

If $F_k^N$ is a composition of functions in $S$ where $\lvert S \rvert = \frac{k^{N}-1}{k-1}$ and $K(\cdot)$ denotes Kolmogorov Complexity then we may define:

$$Q = \min_{f_i \in S} K(f_i)$$

and we may show that for almost all $F_k^N$ we must have:

$$K(F_k^N) \geq \frac{Q}{2} \cdot k^{N-1}$$

Proof:

Let’s suppose each $f_i \in S$ has an encoding as a binary string, so $\forall i, f_i \in \{0,1\}^*$. If we compress each $f_i$, then since $\lvert S \rvert \geq k^{N-1}$, $F_k^N$ is reduced to a program of length at least:

$$n= Qk^{N -1}$$

Now, the number of programs of length at most $\frac{n}{2}$ is bounded by:

$$\sum_{l=1}^{\frac{n}{2}} 2^l = 2^{\frac{n}{2}+1}-2 \leq 2^{\frac{n}{2}+1}$$

and so, using the principle of maximum entropy (i.e. a uniform distribution over programs) [4], we find that:

$$\lim_{n \to \infty} P\Big(K(F_k^N) \geq \frac{n}{2}\Big) \geq \lim_{n \to \infty} 1 - \frac{2^{\frac{n}{2}+1}}{2^n} = 1$$

At this point the careful reader may remark that the typical dendritic tree doesn’t form a complete binary tree. I suspect the reason for this is that, besides computational considerations, dendritic trees must also satisfy spatial, energetic and developmental constraints.

One reason for node sparsity may be the fact that for complete binary trees computational cost scales exponentially with tree depth.

## Energy as a robust proxy measure for total computational cost:

Software running on physical hardware must consume energy in order to perform computations, so energy consumption scales with computational cost; energy is therefore a robust proxy measure for computational cost. We may also observe that if functions are defined at the nodes of dendritic trees, and the energy consumption per neural action potential at each node is bounded by $C$ Joules, then we may model the computational cost of evaluating functions computable on dendritic trees.

In particular, it may easily be shown that the total computational cost of evaluating functions defined on complete binary trees of depth $N$, which have $2^N - 1$ nodes, generally scales with:

$$\sim C \cdot 2^N \ \text{Joules}$$

## The asymptotic time-complexity of functions computable on trees:

Given a function $F_2^N$ defined on a complete binary tree with depth $N$ we may ask what is the fastest way to evaluate this function. In other words, what is the fastest way to perform inference using $F_2^N$?

If we proceed in a sequential manner, starting from the leaves of the tree, the temporal cost or time-complexity will scale with the energy cost, so we will have:

$${Time}(F_2^N) \sim \mathcal{O}(2^N)$$

and if we note that the dimension of the input to $F_2^N$ equals the number of leaves, $2^{N-1}$, we can say that serial computation yields:

$${Time}(F_2^N(n)) \sim \mathcal{O}(n)$$

However, if we assume parallel execution of functions at each level of $F_2^N$ then the time-complexity scales with tree depth:

$${Time}(F_2^N) \sim \mathcal{O}(N)$$

which yields a time-complexity that scales logarithmically with the dimension of the inputs:

$${Time}(F_2^N(n)) \sim \mathcal{O}(\ln n)$$
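The serial-versus-parallel comparison amounts to counting nodes versus counting levels; a minimal sketch, using the depth-$N$ convention from earlier ($2^{N-1}$ leaves):

```python
from math import log2

def serial_steps(N):
    """One step per node: a depth-N complete binary tree has 2**N - 1 nodes."""
    return 2 ** N - 1

def parallel_steps(N):
    """One step per level, when every node in a level fires simultaneously."""
    return N

N = 10
n = 2 ** (N - 1)                 # input dimension = number of leaves
print(serial_steps(N))           # 1023: linear in n
print(parallel_steps(N))         # 10:   logarithmic in n, since N = log2(n) + 1
assert parallel_steps(N) == int(log2(n)) + 1
```

With $n = 2^{N-1}$ inputs, serial evaluation takes $2n - 1 = \mathcal{O}(n)$ steps while level-parallel evaluation takes $N = \log_2 n + 1 = \mathcal{O}(\ln n)$ steps.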

It turns out that such an algorithm for evaluating $F_2^N$ is not only highly efficient, it is also biologically plausible [1].

## How algorithmic information scales with compute time:

Combining the expressiveness bound $K(F_2^N) \gtrsim 2^N$ with the parallel time-complexity ${Time}(F_2^N) \sim \mathcal{O}(N)$, we may deduce a very interesting scaling law:

$$K(F_2^N) \sim 2^{{Time}(F_2^N)}$$

which means that the complexity of the functions $F_2^N$ that are evaluated grows exponentially in the running time. This is comparable to running all $2^N$ programs of length $N$ within $\sim N$ seconds.
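Spelling out the step, with $k=2$ the Kolmogorov complexity bound and the parallel time-complexity read:

$$K(F_2^N) \geq \frac{Q}{2} \cdot 2^{N-1} = \frac{Q}{4} \cdot 2^{N}, \qquad {Time}(F_2^N) \sim \mathcal{O}(N)$$

so, up to constant factors, substituting $N \sim {Time}(F_2^N)$ gives:

$$K(F_2^N) \gtrsim \frac{Q}{4} \cdot 2^{\,{Time}(F_2^N)}$$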

Since the energy cost also scales as $\sim 2^N$, there may be an inherent tradeoff between evaluating functions with great expressive power and minimising energy consumption. From a biological perspective, node-sparsity may be a practical necessity.

## Discussion:

The above analysis raises a question that may be explored through computer simulations on carefully chosen benchmarks: what functions defined on binary trees do we obtain if we simultaneously maximise expressive power and minimise energy consumption? I think such functions are both biologically plausible and take maximal advantage of dendritic tree structure.

My hunch is that dendritic trees are a form of parse tree where the nodes represent mathematical expressions. I also think that dendritic trees implement some kind of temporal logic and that this timing information comes from neural dynamics. For these reasons I think we need to unify the dynamical systems and computer science perspectives of single-neuron computation.

Finally, the reader may note that I have said nothing about the time complexity of learning in dendritic trees. That is a problem for another day.

## References:

1. Michael London & Michael Häusser. Dendritic Computation. Annu. Rev. Neurosci. 2005. 28:503–532.
2. M. Li & P. Vitányi. An Introduction to Kolmogorov Complexity and Its Applications. Graduate Texts in Computer Science. Springer, New York, 2nd edition, 1997.
3. Roozbeh Farhoodi, Khashayar Filom, Ilenna Simone Jones & Konrad Paul Kording. On functions computed on trees. arXiv. 2019.
4. Edwin Jaynes. Information Theory and Statistical Mechanics. The Physical Review. Vol. 106, No. 4, 620–630. May 15, 1957.
5. H. Cuntz, A. Borst & I. Segev. Optimization principles of dendritic structure. Theor Biol Med Model. 2007 Jun 8;4:21.
6. Alexandra Vormberg, Felix Effenberger, Julia Muellerleile & Hermann Cuntz. Universal features of dendrites through centripetal branch ordering. PLOS Biology. 2017.
7. Hermann Cuntz, Friedrich Forstner, Alexander Borst & Michael Häusser. One Rule to Grow Them All: A General Theory of Neuronal Branching and Its Practical Application. PLOS Biology. August 5, 2010.
8. Duncan E. Donohue & Giorgio A. Ascoli. A Comparative Computer Simulation of Dendritic Morphology. PLOS Biology. June 6, 2008.
9. Warren S. McCulloch & Walter Pitts. A Logical Calculus of the Ideas Immanent in Nervous Activity. Bulletin of Mathematical Biology. Vol. 52, No. 1/2, pp. 99–115. 1990.
10. B. B. Ujfalussy, J. K. Makara, M. Lengyel & T. Branco. Global and Multiplexed Dendritic Computations under In Vivo-like Conditions. Neuron. 2018 Nov 7.
11. Lars Buesing & Wolfgang Maass. A Spiking Neuron as Information Bottleneck. Neural Computation. 2010 Aug.
12. Anthony Zador & Barak A. Pearlmutter. VC Dimension of an Integrate-and-Fire Neuron Model. 1996.
13. Vladimir I. Arnold. Representation of continuous functions of three variables by the superposition of continuous functions of two variables. Collected Works: Representations of Functions, Celestial Mechanics and KAM Theory, 1957–1965, pages 47–133. 2009.