Introduction:

Around November 2018 I went through an existential crisis regarding general theories of intelligence. The conversation I had with Alex Gomez-Marin, a behavioural neuroscientist, back in April 2018 rattled around in my head as I struggled to find ways to justify such theories. The main issue was that, from the perspective of an organism, all the theories of general intelligence I had surveyed, whether Friston's Free Energy Principle [4], Polani's information-theoretic Empowerment theory [5], Wissner-Gross' statistical-mechanical Causal Entropic Forces theory [6] or Hutter's compression-based AIXI theory [7], were simultaneously computationally intractable and epistemologically unsound.

I realised that it's also possible to spend decades working on toy models of these theories to demonstrate a 'proof of concept', but such toy implementations couldn't possibly scale, for reasons that were probably known to the authors beforehand. Alternatively, certain authors might use 'approximations' of these theories, but none of them ever tried to quantify the quality of their approximation. I also realised that I had no desire to become either kind of researcher.

After a couple of weeks I decided to step back from dreams of a general theory and focus my attention on essential questions for intelligent behaviour in organisms.

Subjective Physics?:

A few months after my conversation with Gomez-Marin I went over 'Neuroscience Needs Behavior: Correcting a Reductionist Bias' [1], which made a strong case for the role of behavioural neuroscience in designing neural interventions. As in 'Could a Neuroscientist Understand a Microprocessor?' [3] by Jonas and Kording, the authors pointed out that an astronomical amount of neural data wouldn't be enough to isolate the causal neural mechanisms that determine the behaviour of an organism. For neural interventions to have any meaning you must close the loop by designing ecologically realistic behavioural experiments. The purpose of the brain, after all, is to generate meaningful behaviour.

That said, this opened the door to many approaches. Which one should I choose? By some chance occurrence I was going through the blog of Romain Brette, an excellent neuroscientist, and stumbled upon an article of his on Subjective Physics [2]. His 43-page text was motivated by the following thought experiment:

Imagine a naive organism who does not know anything about the world. It can capture signals through its sensors and it can make actions. What kind of knowledge about the world is accessible to the organism?

I thought Romain Brette's text was wonderful. It made sense from both a behavioural science and a cybernetics perspective. On the one hand, all theories of physics and complex systems have a compositional structure where powerful methods emerge from simpler principles. On the other hand, by the Good Regulator Theorem intelligent organisms must try to learn an internal model of their environment in order to simulate the evolution of key physical variables. What statistical models might be useful for learning such physical models with re-usable components, and how might such models be learned in an intrinsically motivated manner?

A behaviorist account:

Before continuing, I'd like to start by introducing a behaviorist framework that is widely accepted among AI researchers. Specifically, let's suppose that all intelligent organisms do reinforcement learning: at each time step the organism observes a state, takes an action, and receives a reward from its environment.
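As a minimal sketch of this interaction loop, consider the following Python snippet; the coin-flip environment and the random policy are hypothetical placeholders of my own, not part of any of the theories discussed above:

```python
import random

# A minimal sketch of the standard reinforcement learning loop.
# The coin-flip environment and the random policy are illustrative
# placeholders, not taken from any of the theories discussed above.

class CoinFlipEnvironment:
    """A trivial environment: the agent tries to guess a coin flip."""
    def reset(self):
        self.coin = random.choice([0, 1])
        return 0  # uninformative initial observation

    def step(self, action):
        reward = 1.0 if action == self.coin else 0.0
        observation = self.coin            # the outcome is revealed
        self.coin = random.choice([0, 1])  # flip again for the next round
        return observation, reward

def random_policy(observation):
    """A placeholder policy that ignores the observation."""
    return random.choice([0, 1])

env = CoinFlipEnvironment()
observation = env.reset()
total_reward = 0.0
for t in range(100):
    action = random_policy(observation)
    observation, reward = env.step(action)
    total_reward += reward
print("average reward:", total_reward / 100)
```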

There are several outstanding problems:

  1. Where do the rewards come from?
  2. What is the relationship between embodiment and learning?
  3. Are observations objective?

Is the organism trying to reliably and perceptibly control its environment? If so, the Good Regulator Theorem implies an internal model of the environment, and I'd argue that a consistent internal model implies the existence of physical laws. But what do I mean by internal model, consistent and physical law?

The consistency criterion for Intrinsic Physics:

What is the consistency criterion?

By physical law I mean a forward model which takes initial conditions as input and may be used to simulate future states of an environment. Let's take Newton's laws of motion as a concrete example:

\begin{equation} \sum \vec{F} = 0 \iff \frac{d\vec{v}}{dt} = 0 \end{equation}

\begin{equation} \vec{F} = \frac{d\vec{p}}{dt} \end{equation}

\begin{equation} \vec{F}_{AB} = -\vec{F}_{BA} \end{equation}

It can be demonstrated that no two of these laws contradict each other. In that sense they are consistent with each other. You can also show that they are necessary and sufficient for simulating any conservative mechanical system (whose kinematics may be described using Newtonian calculus). But that's a different matter.
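To make the forward-model reading concrete, here is a minimal sketch that takes initial conditions and simulates future states using the second law; the one-dimensional harmonic oscillator and the semi-implicit Euler integrator are my own illustrative choices:

```python
# A forward model in the sense above: given initial conditions (x0, v0),
# Newton's second law F = m * dv/dt lets us simulate future states.
# The harmonic oscillator (F = -k * x) and the semi-implicit Euler scheme
# are illustrative choices, not part of the original text.

def simulate_oscillator(x0, v0, k=1.0, m=1.0, dt=0.01, steps=1000):
    x, v = x0, v0
    trajectory = [(0.0, x, v)]
    for i in range(1, steps + 1):
        force = -k * x            # conservative force from a spring potential
        v += (force / m) * dt     # second law: a = F / m
        x += v * dt               # update position with the new velocity
        trajectory.append((i * dt, x, v))
    return trajectory

# Energy should be approximately conserved for this conservative system.
traj = simulate_oscillator(x0=1.0, v0=0.0)
t, x, v = traj[-1]
energy = 0.5 * v**2 + 0.5 * x**2
print(f"t={t:.2f}, x={x:.3f}, v={v:.3f}, energy={energy:.4f}")
```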

So this is what I mean by consistency. Now, what do I mean by internal model and how might the consistency criterion apply to an internal model?

What is an internal model?

Within the Probabilistic Graphical Model framework, an internal model is a directed graph whose nodes are random variables and whose edges encode probabilistic relations between the nodes. The main function of an internal model is to simulate the future of an agent's environment, as explained in 'World Models' [8].

For concreteness let’s consider the deep autoregressive model for the decoder distribution used in ‘Variational Information Maximisation for Intrinsically Motivated Reinforcement Learning’ [9]. This model takes as input the initial position of a flat organism as well as a future state and outputs a probability distribution over trajectories which may be used for sampling paths.

Here’s the explicit mathematical description:

\begin{equation} q_{\xi}(\vec{a}|x,x') = q(a_1|x,x')\prod_{k=2}^K q(a_k|f_{\xi}(a_{k-1}|x,x')) \end{equation}

where the per-action distribution is defined as follows:

\begin{equation} q(a_k|f_{\xi}(a_{k-1}|x,x')) = \mathcal{N}(a_k|\mu_{\xi}(a_{k-1},x,x'),\sigma_{\xi}^2(a_{k-1},x,x')) \end{equation}

\begin{equation} \mu_{\xi}(a_{k-1},x,x') = g(W_\mu \eta + b) \end{equation}

\begin{equation} \log \sigma_{\xi}(a_{k-1},x,x') = g(W_\sigma \eta + b) \end{equation}

\begin{equation} \eta = l(W_2g(W_1 x + b_1)+b_2) \end{equation}

where $g$ refers to the ReLU activation.
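For readers who prefer code, here is a rough sketch of such an autoregressive Gaussian decoder in PyTorch. The hidden sizes, the concatenation of $(a_{k-1}, x, x')$ into a single input, and the use of linear output heads in place of the $g(\cdot)$ wrappers above are my own simplifications, so the original paper [9] should be consulted for the exact architecture:

```python
import torch
import torch.nn as nn

# A rough sketch of the autoregressive Gaussian decoder q_xi(a | x, x').
# The hidden sizes, the concatenation of (a_{k-1}, x, x'), and the linear
# output heads are my own simplifications, not necessarily those of [9].

class AutoregressiveDecoder(nn.Module):
    def __init__(self, state_dim, action_dim, hidden_dim=64):
        super().__init__()
        in_dim = action_dim + 2 * state_dim      # (a_{k-1}, x, x')
        self.f = nn.Sequential(                  # plays the role of eta above
            nn.Linear(in_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
        )
        self.mu = nn.Linear(hidden_dim, action_dim)          # mean head
        self.log_sigma = nn.Linear(hidden_dim, action_dim)   # log std head

    def step(self, a_prev, x, x_next):
        eta = self.f(torch.cat([a_prev, x, x_next], dim=-1))
        return self.mu(eta), self.log_sigma(eta).exp()

    def sample(self, x, x_next, K):
        """Sample an action sequence a_1..a_K autoregressively."""
        a_prev = torch.zeros(x.shape[0], self.mu.out_features)  # dummy a_0
        actions = []
        for _ in range(K):
            mu, sigma = self.step(a_prev, x, x_next)
            a_prev = torch.distributions.Normal(mu, sigma).rsample()
            actions.append(a_prev)
        return torch.stack(actions, dim=1)   # shape: (batch, K, action_dim)

# Hypothetical usage: a 2-D state space and 2-D actions.
decoder = AutoregressiveDecoder(state_dim=2, action_dim=2)
x, x_next = torch.zeros(8, 2), torch.ones(8, 2)
print(decoder.sample(x, x_next, K=5).shape)   # torch.Size([8, 5, 2])
```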

Incidentally, I spent a few months trying to implement this particular model with Ildefons Magrans without too much success. The main difficulty was that the learning algorithm wasn’t very stable.

Consistent Internal Model:

Now, I posit that if an internal model is consistent and has a graphical representation then it must be, in some sense, fully connected. To be a bit more precise, I think a necessary and sufficient requirement for the internal model to be consistent is that there exists a sequence of directed edges between every pair of nodes. These edges represent probabilistic relations that are modelled by parametric models whose parameters may be modified subject to the agent's experience.

The main idea behind this requirement is that if a single probabilistic logic is used for communication between nodes in the internal model then the agent can't simultaneously pursue two contradictory policies. Therefore its ontology, i.e. its knowledge representation, will be consistent in an intuitive sense.
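To make this requirement checkable, here is a small sketch that tests whether the directed graph of an internal model is strongly connected, i.e. whether there is a directed path between every ordered pair of nodes; the adjacency-list representation and the two-pass reachability test are my own choices:

```python
# A small check of the consistency requirement as stated above: every node of
# the internal model's directed graph can reach every other node via directed
# edges (strong connectivity). The graph representation is my own choice.

def reachable_from(graph, start):
    """Return the set of nodes reachable from `start` along directed edges."""
    seen, stack = {start}, [start]
    while stack:
        node = stack.pop()
        for neighbour in graph.get(node, []):
            if neighbour not in seen:
                seen.add(neighbour)
                stack.append(neighbour)
    return seen

def is_strongly_connected(graph):
    nodes = set(graph) | {v for vs in graph.values() for v in vs}
    if not nodes:
        return True
    start = next(iter(nodes))
    # Forward pass: the start node reaches every other node.
    if reachable_from(graph, start) != nodes:
        return False
    # Backward pass: every node reaches the start node (reversed graph).
    reversed_graph = {n: [] for n in nodes}
    for u, vs in graph.items():
        for v in vs:
            reversed_graph[v].append(u)
    return reachable_from(reversed_graph, start) == nodes

# A toy internal model with variables A, B, C.
print(is_strongly_connected({"A": ["B"], "B": ["C"], "C": ["A"]}))  # True
print(is_strongly_connected({"A": ["B"], "B": ["C"], "C": []}))     # False
```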

Motivating questions and further discussion:

I'd like to emphasise that my Intrinsic Physics framework is pretty light on formalism. In fact, I think this is essential in the early stages of any scientific theory: if the history of science is any guide, new theories which were incredibly complicated at their inception tended to have a harder time recovering from their flaws. Moreover, in order for the framework to stay on the right track I propose the following motivating questions:

  1. Epistemology: What does my RL agent really know vs. what kinds of physics can be known to an organism?
  2. Learning: What kinds of statistical models might allow an organism to gain such knowledge in a Popperian sense?
  3. Morphogenesis: What is the relationship between structure and function (e.g. learning)? How does development complement the learning paradigm?
  4. Uncertainty: How might the organism represent and compute uncertainty?
  5. Information flows: Causal influence of an agent on its environment as opposed to the causal influence of the environment on the agent (i.e. control as inference).

Finally, there is an unwritten rule: always work on the simplest systems you don't understand. You might also work on more complex systems, but keep the simpler systems, like microprocessors and C. elegans, in mind. These will keep your theory honest and make sure that the most fundamental intellectual constructs on which your theory is founded aren't merely diversions. For this reason I decided to start by applying the Intrinsic Physics framework to neural networks, as far as structure and function are concerned.

One more thing. Feedback is definitely welcome especially on the consistency criterion for intrinsic physics. You can reach me via email: aidanrocke@gmail.com

References:

  1. J. Krakauer, A. Ghazanfar, A. Gomez-Marin, M. MacIver, and D. Poeppel. Neuroscience Needs Behavior: Correcting a Reductionist Bias. 2017.
  2. R. Brette. Subjective physics. 2013.
  3. E. Jonas & K. Kording. Could a Neuroscientist Understand a Microprocessor? 2017.
  4. K. Friston. The free-energy principle: a rough guide to the brain? 2009.
  5. C. Salge, C. Glackin, D. Polani. Empowerment – an Introduction. 2013.
  6. A. D. Wissner-Gross & C. E. Freer. Causal Entropic Forces. 2013.
  7. Marcus Hutter. Universal Algorithmic Intelligence: A mathematical top→down approach. 2007.
  8. D. Ha & J. Schmidhuber. World Models. 2018.
  9. D. Rezende & S. Mohamed. Variational Information Maximisation for Intrinsically Motivated Reinforcement Learning. 2015.