As increasing amounts of money are allocated to computational neuroscience, I sometimes wonder whether neuroscience theory will manage to keep up with progress in neuroscience simulations. The recent progress in deep learning, an example of neuromorphic computation, suggests otherwise. We are in a situation where highly nonlinear connectionist models work quite well and we have no idea why, even though these models are quite simple compared with the human brain.
In this context, I have gathered five questions which I would personally like to see answered in a world where we have access to increasingly realistic and embodied (in VR, perhaps) human brain simulations.
Caveat: when scientists say ‘what you can’t build you can’t understand’, they mean that a simulation is necessary for understanding, not that it is sufficient.
What if we could simulate the human brain?
Would our understanding of the brain validate the good regulator theorem?
This theorem, which dates back to a 1970 paper by Conant and Ashby, states that:
Every good regulator of a system must be a model of that system.
In simple English, this theorem says that because the organism and its environment form a coupled dynamical system, they must in some sense be mirror images of each other. Note that providing a sound mathematical basis for this theorem remains an open problem. The reader might also be interested in the recent review by Daniel McNamee and Daniel Wolpert.
I think this is a very important problem as it directly links brains to behaviour, and in particular to behaviour with respect to the environment. In fact, I would argue that it provides a sensible path to a non-anthropomorphic definition of intelligent behaviour, one that potentially applies to all organisms.
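As a toy illustration of the theorem (a sketch of my own, not taken from the Conant and Ashby paper), consider a regulator trying to hold a state at a setpoint against a predictable disturbance. A regulator that carries an internal model of the disturbance can cancel it exactly; a purely reactive one is always one step behind:

```python
import math

def disturbance(t):
    # a predictable environmental perturbation the regulator must counteract
    return math.sin(0.3 * t)

def simulate(has_model, steps=200):
    """Return the mean absolute deviation of the state from the setpoint (0)."""
    x, total_error = 0.0, 0.0
    for t in range(steps):
        if has_model:
            u = -disturbance(t)  # internal model: predict the disturbance and cancel it
        else:
            u = -x               # reactive: correct only the error already observed
        x = x + disturbance(t) + u
        total_error += abs(x)
    return total_error / steps

# The model-based regulator holds the setpoint exactly; the reactive one
# trails the disturbance by one step and accumulates error.
print(simulate(True), simulate(False))
```

The point of the toy is that the successful regulator literally contains a copy of the environment's dynamics, which is the intuition behind the theorem.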
Would we understand how uncertainty is represented and computed in the human brain?
In very complex environments that may or may not be deterministic, the epistemic and statistical uncertainty of the organism calls for a probabilistic representation of knowledge. It follows that, from an algorithmic perspective, intelligent reasoning comes down to having good models and algorithms for representing and computing uncertainty.
What sources of randomness does the brain use for sampling?
This sub-question remains an open problem.
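To make the question concrete, here is a minimal sketch (my own toy, not a neural model) of the sampling view of probabilistic inference: an explicit noise source drives a Metropolis sampler whose samples approximate a posterior distribution. Where the brain gets the analogue of `random.gauss` is precisely the open question.

```python
import math
import random

random.seed(0)

def log_posterior(x):
    # toy log-posterior: a standard normal belief state
    return -0.5 * x * x

def metropolis(n_samples=5000, step=1.0):
    """Draw approximate posterior samples, driven by an explicit noise source."""
    x, samples = 0.0, []
    for _ in range(n_samples):
        proposal = x + random.gauss(0.0, step)  # the randomness in question
        accept_prob = math.exp(min(0.0, log_posterior(proposal) - log_posterior(x)))
        if random.random() < accept_prob:
            x = proposal
        samples.append(x)
    return samples

samples = metropolis()
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
```

With enough samples, the empirical mean and variance approach those of the target distribution (0 and 1 here); without a good randomness source, the sampler cannot explore the posterior at all.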
Would we have a detailed understanding of the wiring optimisation problem and how it relates to neurogenesis?
Another fundamental problem that interests me, and one I would easily rank among the 23 most important unsolved problems in neuroscience, is wiring optimisation, which dates back to S. Ramón y Cajal's postulate that brains are arranged to minimise wire length. This problem can be approached from different perspectives, and each perspective is essentially a different formulation of the wiring optimisation problem.
Dmitri Chklovskii and Charles Stevens formulate the problem as follows:
Wiring a brain presents formidable problems because of the extremely large number of connections: a microliter of cortex contains approximately 10⁵ neurons, 10⁹ synapses, and 4 km of axons, with 60% of the cortical volume being taken up with “wire”, half of this by axons and the other half by dendrites. Each cortical neighborhood must have exactly the right balance of components; if too many cell bodies were present in a particular mm cube, for example, insufficient space would remain for the axons, dendrites and synapses. Here we ask “What fraction of the cortical volume should be wires (axons + dendrites)?”
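One of the simplest formulations (a toy of my own, far more modest than the volume-fraction question in the quote, but in the spirit of the component placement analyses cited below): fix a connectivity graph and search for the neuron placement that minimises total wire length.

```python
from itertools import permutations

# toy "circuit": pairs of neurons that must be wired together
EDGES = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4), (2, 4)]

def wire_length(positions):
    """Total wire length when neuron i sits at positions[i] on a 1-D cortex."""
    return sum(abs(positions[a] - positions[b]) for a, b in EDGES)

def best_placement(n=5):
    # brute force is fine for a 5-neuron toy; real circuits need heuristics
    best = min(permutations(range(n)), key=wire_length)
    return best, wire_length(best)

layout, length = best_placement()
print(layout, length)
```

Even this caricature is NP-hard in general (it is a minimum linear arrangement problem), which hints at why developmental mechanisms such as neurogenesis might matter: the brain has to solve something like this without enumerating layouts.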
I am motivated by its potential impact on several areas of neuroscience, in particular developmental neuroscience, network neuroscience, and biophysics (i.e. the energetic constraints on information processing in human brains).
Would we understand what makes the brain energy-efficient and use this understanding to build neuromorphic computers and advanced neural prostheses?
Although Google DeepMind recently accomplished an amazing feat by building AlphaGo Zero, which defeated the world's best human Go players, an even more amazing fact is that the human brain uses ~20 watts, compared with the ~200 kilowatts used by 5,000 TPUs to power AlphaGo Zero during training. In other words, AlphaGo Zero was ten thousand times less energy-efficient than a human being for a comparable result. How is this possible?
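The arithmetic behind that comparison, using only the figures quoted above:

```python
brain_power_watts = 20           # approximate power draw of a human brain
alphago_power_watts = 200_000    # ~200 kW for the 5,000 TPUs cited above

ratio = alphago_power_watts / brain_power_watts
print(ratio)  # 10000.0: four orders of magnitude more power for a comparable result
```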
At present, neuroscientists and computer scientists still have very little idea, but I think that understanding neurogenesis and wiring trade-offs in the human brain will be key. I also think it will require building models of computation that relate computational complexity to the thermodynamic cost of computation.
Once we have a good theory of biologically plausible and energy-efficient computer architectures, I expect a revolution in neuromorphic computing that will lead to advanced neural prostheses. For more information, I highly recommend Brad Aimone's review, ‘Neural Algorithms and Computing Beyond Moore's Law’.
Would we automatically derive a theory of collective intelligence (e.g. economics, social networks, etc.)?
My understanding of the history of statistical mechanics suggests otherwise, but this leads me to a related question: is there a coarse-grained model for every collective intelligence model that isn't coarse? In other words, I suspect that, due to small-world phenomena, we will be able to approximate the actual model to epsilon accuracy while giving up significant amounts of information.
I don’t know how many of these questions we will be able to answer in the next thirty years. But my hope is that I will be able to work on all of them within the next fifteen. Meanwhile, I look forward to hearing from other scientists working on behaviour, cognition, and/or neuroscience.
Most of the problems posed above are listed as issues on GitHub. The reader is welcome to open an issue and add a different problem, provided that they link to a question asked on a Stack Exchange site. Reasonable candidates include:
- The Psychology and Neuroscience Stack Exchange: for the formulation of problems in neuroscience and/or cognitive science
- The Biology Stack Exchange: for questions concerning biology, including biophysics
- The Cross Validated Stack Exchange: for statistical questions that might concern Markov models and/or machine learning
- The Theoretical Computer Science Stack Exchange: for algorithmic analysis of computational models
- MathOverflow: for mathematical insights into computational models
My rationale is that this would allow other scientists to participate in gathering intelligence on each problem (i.e. references), as well as to consider different formulations of the problem, potentially relevant open-source software, and open-access datasets.
- Roger C. Conant and W. Ross Ashby. Every good regulator of a system must be a model of that system. International Journal of Systems Science 1 (1970), 89–97.
- D. McNamee and D. Wolpert. Internal Models in Biological Control. Annual Review of Control, Robotics, and Autonomous Systems. 2019.
- U. Maoz et al. Noise and the two-thirds power law. 2006.
- M. Richardson & T. Flash. Comparing Smooth Arm Movements with the Two-Thirds Power Law and the Related Segmented-Control Hypothesis. 2002.
- Wei Ji Ma, J. Beck, P. Latham & A. Pouget. Bayesian inference with probabilistic population codes. Nature Neuroscience. 2006.
- Andre Longtin. Neuronal noise. Scholarpedia. 2013.
- D. Dold et al. Stochasticity from function - why the Bayesian brain may need no noise. arXiv. 2018.
- R. Cannon, C. O’Donnell, M. Nolan. Stochastic Ion Channel Gating in Dendritic Neurons: Morphology Dependence and Probabilistic Synaptic Activation of Dendritic Spikes. PLOS. 2010.
- D. Chklovskii, C. Stevens. Wiring optimization in the brain. NIPS. 2000.
- D. Van Essen. A tension-based theory of morphogenesis and compact wiring in the nervous system. Nature. 1997.
- G. Shepherd, A. Stepanyants, I. Bureau, D. Chklovskii and K. Svoboda. Geometric and functional organization of cortical circuits. Nature Neuroscience. 2005.
- M. Kaiser & C. Hilgetag. Nonoptimal Component Placement, but Short Processing Paths, due to Long-Distance Projections in Neural Systems. PLOS. 2006.
- A. Stepanyants, L. Martinez, A. Ferecskó, and Z. Kisvárda. The fractions of short- and long-range connections in the visual cortex. PNAS. 2008.
- Q. Wen, A. Stepanyants, G. Elston, A. Grosberg, and D. Chklovskii. Maximization of the connectivity repertoire as a statistical principle governing the shapes of dendritic arbors. PNAS. 2009.
- C. Cherniak. Neural Wiring Optimization. 2011.
- E. Bullmore, O. Sporns. The economy of brain network organization. Nat Rev Neurosci. 2012.
- M. Hofman. Evolution of the human brain: when bigger is better. Frontiers in Neuroanatomy. 2014.
- A. Gushchin, A. Tang. Total Wiring Length Minimization of C. elegans Neural Network: A Constrained Optimization Approach. PLOS. 2015.
- J. Niven. Neuronal energy consumption: biophysics, efficiency and evolution. 2016.
- I. Wang & T. Clandinin. The Influence of Wiring Economy on Nervous System Evolution. Current Biology. 2016.
- S. Srinivasan, C. Stevens. Scaling principles of distributed circuits. bioRxiv. 2018.
- J. Stiso & D. Bassett. Spatial Embedding Imposes Constraints on the Network Architectures of Neural Systems. arXiv. 2018.
- D. Silver et al. Mastering the game of Go without human knowledge. 2017.
- Aidan Rocke. The true cost of AlphaGo Zero. Kepler Lounge. 2019.