A brief history of AI

Many of today’s intellectuals and visionaries, including Nick Bostrom and Demis Hassabis, hold the curious belief that the quest for strong artificial intelligence is a recent phenomenon. In fact, this goal has been seriously pursued for the last 200 years. It is very far from recent, but perhaps it would help if I first state clearly what I mean by artificial intelligence.

In 2007, Shane Legg, now the Chief Scientist at DeepMind, together with Marcus Hutter compiled a long list of definitions of intelligence proposed by different researchers and eventually distilled them into a single definition:

“Intelligence measures an agent’s ability to achieve goals in a wide range of environments.” -S. Legg and M. Hutter
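
For readers who want the formal version: in the same paper, Legg and Hutter turn this sentence into a “universal intelligence” measure, which, as far as I recall, looks like this (transcribed from memory; check the 2007 paper for the exact notation):

```latex
% Legg-Hutter universal intelligence measure (my transcription; notation may differ slightly from the paper)
\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V_\mu^{\pi}
```

Here \pi is the agent, E is a set of computable environments, K(\mu) is the Kolmogorov complexity of environment \mu (so simpler environments carry more weight), and V_\mu^\pi is the expected cumulative reward the agent earns in \mu. In other words: achieve goals (reward) across a wide range of environments, weighted by their simplicity.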

Using this definition, I will give concrete examples to show that there have been at least three important attempts to develop strong artificial intelligence, at varying degrees of abstraction, in the last two hundred years, and that these systems have actually been applied to important problems affecting large numbers of people.

1. Laplace’s Demon:

The goal of any grand unified theory in physics is to develop practical principles and algorithms capable of predicting the behaviour of any physical system. In the early 1800s, many scientists, including Laplace, believed that classical mechanics combined with perturbation theory was sufficiently powerful to predict the behaviour of any observable system. Laplace summed up this belief as follows:

We may regard the present state of the universe as the effect of its past and the cause of its future. An intellect which at a certain moment would know all forces that set nature in motion, and all positions of all items of which nature is composed, if this intellect were also vast enough to submit these data to analysis, it would embrace in a single formula the movements of the greatest bodies of the universe and those of the tiniest atom; for such an intellect nothing would be uncertain and the future just like the past would be present before its eyes.

This entity, which later scientists and philosophers called Laplace’s demon, hasn’t quite lived up to expectations. Granted, Hamiltonian and Lagrangian methods are used today to simulate a large number of physical systems, ranging from molecules to celestial bodies. However, the big obstacle facing this approach is not only the amount of data required but the fact that we have very few closed systems, and almost all closed systems eventually behave in a chaotic (i.e. unpredictable) manner. To be precise, they have a finite Lyapunov time: beyond it, small errors in the initial data grow until predictions become worthless.
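
To make the Lyapunov point concrete, here is a minimal sketch in Python. The choice of system (the logistic map) and all the numbers are mine, purely for illustration; the point is only how fast two nearly identical initial conditions diverge.

```python
# Sensitive dependence on initial conditions in the logistic map
# x_{n+1} = r * x_n * (1 - x_n) at r = 4, a standard chaotic regime.
# The map and the numbers are illustrative only.

def logistic(x, r=4.0):
    return r * x * (1 - x)

x, y = 0.2, 0.2 + 1e-10      # two trajectories, initially 1e-10 apart
for n in range(1, 61):
    x, y = logistic(x), logistic(y)
    if n % 10 == 0:
        print(f"step {n:2d}  separation = {abs(x - y):.3e}")

# The separation grows roughly like exp(lambda * n); once it reaches
# order 1 (after roughly 1/lambda steps, i.e. the Lyapunov time),
# knowing the initial state to ten decimal places tells you nothing.
```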

2. Communist Central Planning:

The most advanced versions of Communism involve a large number of enlightened and benevolent technocrats who make decisions for the rest of the population in order to guarantee economic equality. The basic idea is that if you get a lot of clever and well-intentioned people together, their aggregate decisions will be much better than the accumulated economic decisions of the entire populace. This is not how Communism is usually introduced, but it is how it’s always carried out in practice.

In the early 20th century this seemed like a brilliant idea, but empirically it turned out to be a catastrophic failure. There are also very sound theoretical reasons for its failure. First, it leads to a monolithic structure that doesn’t adapt to market signals, because they are non-existent. Second, the average person is not an idiot, and “good technocrats” are simply conceited people who are too stubborn to change their minds. Third, while it theoretically guarantees that everybody does “equally well”, it doesn’t guarantee that people do well at all. Indeed, taking the first two points into account, the fact that a Central Planning system fails to adapt means that eventually everybody does “equally badly”.

The failure of Central Planning leads me to the next AI system.

3. Free markets:

Unlike a Central Planning system, the Free Market is essentially a black-box boosting algorithm. Instead of a well-defined group of elite decision makers, you have a large number of agents of varying information-processing ability, which together constitute what Adam Smith called the Invisible Hand.
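
To make the analogy slightly less hand-wavy, here is a toy sketch of the mechanism I have in mind: a multiplicative-weights / weighted-majority scheme, which is the core engine behind boosting. Many unreliable agents vote, and influence flows toward those that were recently right. Everything in it (the agent skills, the learning rate, the voting rule) is invented for illustration; it is not a model of any real market.

```python
# Toy multiplicative-weights aggregation: many agents of varying
# ability, with influence reallocated toward those that performed well.
import random

random.seed(0)
N_AGENTS, ROUNDS, ETA = 50, 200, 0.3
skills = [random.uniform(0.4, 0.7) for _ in range(N_AGENTS)]  # P(agent is right)
weights = [1.0] * N_AGENTS

hits = 0
for _ in range(ROUNDS):
    truth = random.choice([0, 1])
    votes = [truth if random.random() < s else 1 - truth for s in skills]
    mass_for_one = sum(w for w, v in zip(weights, votes) if v == 1)
    aggregate = 1 if mass_for_one > sum(weights) / 2 else 0
    hits += (aggregate == truth)
    # the "market signal": agents that were wrong lose influence
    weights = [w * (1 - ETA) if v != truth else w for w, v in zip(weights, votes)]

print(f"weighted-majority accuracy: {hits / ROUNDS:.2f}")
print(f"best individual skill:      {max(skills):.2f}")
```

The aggregate typically ends up more accurate than any single agent, which is the property the Invisible Hand metaphor is gesturing at.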

Proponents of free-market economics argue that the “Free Market” has a number of very important theoretical properties. First, it takes market signals into account, which means that it is adaptable and, in theory, everybody is commensurately rewarded. Second, it is regulated by a democratically elected government to prevent anti-competitive behaviour.

However, this system faces many theoretical and practical difficulties:

a) unpriced externalities: damage done to the environment, among other things, is not priced into transactions.
b) wealth distribution: there’s no guarantee that the Gini index is close to zero (a short sketch of the computation follows this list).
c) information asymmetry: there’s no guarantee that every agent has access to reliable information. In fact, with today’s questions about big data and who owns it, this problem is becoming increasingly important.
d) black box: no economist can predict the future behaviour of a free-market economy with any precision. There have been unpredictable market crashes in the past, and there’s nothing to prevent such catastrophic events in the future.
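
As promised in point (b), here is a minimal sketch of how the Gini index can be computed from a list of incomes (0 means perfect equality; values near 1 mean one agent holds nearly everything). The sample incomes are invented.

```python
def gini(incomes):
    """Gini index of a list of non-negative incomes (0 = perfect equality)."""
    xs = sorted(incomes)
    n = len(xs)
    weighted_sum = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * weighted_sum) / (n * sum(xs)) - (n + 1) / n

print(gini([10, 10, 10, 10]))   # 0.0   -> everyone equal
print(gini([1, 1, 1, 97]))      # ~0.72 -> highly concentrated
```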

The four points given above would cause alarm if I associated them with an AI system proposed to replace the “Free Market”. AI theorists would quickly throw up their hands and say, “What about goal alignment!?” However, humans in Free Market economies, and most economists, are surprisingly comfortable with the current situation.

More importantly, the main point I’m trying to drive home is semantic in nature. There is no hard and fast rule that AI has to be digital or that it must be programmed via a laptop. The key thing is that there are universal design principles for building substrate-independent AI systems. 

Meanwhile, there are many warning signs that the free-market system is in danger of imminent collapse. In fact, AI risks lie in the present, not in the future as many suggest. The ever-present AI risk is that we fail to build a more robust AI system to handle the economy while the Invisible Hand falls apart.

Note: surprisingly, economists haven’t made a formal connection between boosting algorithms and free-market systems, but I promise to write a blog post on this subject in the near future.
