Common confusions in AI predictions

I have been following all kinds of AI discussions in many places. The confusion and the high degree of inaccuracy have bothered me a lot, so I decided to make my own contribution to the topic.

When people use concepts like robot, artificial intelligence, machine learning and so forth, all of these concepts seem very fuzzy, with no clear indication of what exactly is meant by them. The whole of IT has been created within about 65 years, which is an extremely short time. I have been working with computers since the early 1970s, when I learned my first programming language, FORTRAN, and the basics of the computer. When I compare the computers of that time, late last millennium, with those of today, they differ from each other in so many ways that they are hardly examples of the same concept. One thing I have learned during these 45 years is that in the very early stages of a development, no one has come even close to predicting its structure or behaviour in advance. In fact, we have started to understand the true meaning and role of these technologies only long after their appearance.

I will start with the words and their meanings.

A robot is a device that is capable of autonomous movement and of influencing its environment. A simple example is a robot vacuum cleaner. The most important aspect of this term is that it says nothing about the device's intelligence. In fact, most robots today have none! They are fixed industrial robots, usually with one powerful arm and fully programmed behaviour.

Artificial Intelligence (AI) is a term that has been in use at least since the late 1970s, but its meaning has changed over the years. During the last millennium the overwhelming majority thought that AI would take the form of expert systems, where the 'intelligence' was programmed in detail by a programming team. Eventually nearly everyone understood that the task was far too complex ever to be solved this way. There were some pioneers, such as Teuvo Kohonen in Finland, who started to experiment with traditional computers on which they had programmed very primitive neural nets that were able to learn simple classification tasks. Some in the industry laughed at them; the rest thought these nets were, and would always remain, toys.
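To make the idea concrete, here is a minimal sketch of the kind of toy classifier those early experiments revolved around: a single artificial neuron trained with the classic perceptron rule. The data and parameter values are my own illustration, not taken from any of those actual systems.

```python
import numpy as np

# A single artificial neuron learning the logical AND function
# with the classic perceptron rule (illustrative toy example).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])  # inputs
y = np.array([0, 0, 0, 1])                      # desired outputs

weights = np.zeros(2)
bias = 0.0
learning_rate = 0.1

for epoch in range(20):
    for inputs, target in zip(X, y):
        output = 1 if inputs @ weights + bias > 0 else 0
        error = target - output
        # Nudge the weights toward the correct answer
        weights += learning_rate * error * inputs
        bias += learning_rate * error

print(weights, bias)  # the neuron has "learned" AND
```

Primitive as it is, the net genuinely learns from examples instead of being programmed rule by rule, which is exactly what separated this line of work from the expert systems of the time.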

Then the Google lab decided to increase the number of neuron layers and created a self-learning neural net, and this was the beginning of the modern deep learning approach. Today the number of deep learning solutions is exploding, and the victories in this field have created the lively discussion about automation and technological unemployment.
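The structural difference from the single neuron above is simply depth: layers of neurons feeding into further layers. A minimal sketch, assuming a plain fully connected network in NumPy (my own illustrative code, not Google's):

```python
import numpy as np

def relu(x):
    return np.maximum(0, x)

# Three stacked layers: each layer's output is the next layer's input.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 16))
W2 = rng.normal(size=(16, 16))
W3 = rng.normal(size=(16, 3))

def forward(x):
    h1 = relu(x @ W1)   # first layer extracts simple features
    h2 = relu(h1 @ W2)  # second layer combines them into more abstract ones
    return h2 @ W3      # final layer produces the output scores

print(forward(rng.normal(size=(1, 4))).shape)  # (1, 3)
```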

Now wider and wider groups are reporting on and discussing this. There is, however, one persistent mistake repeated in this discussion over and over again, and it lies in the predictions. Most writers will tell you that the advance in this field is based on speeding up the circuitry of the traditional von Neumann computer. This is a completely wrong conclusion. Traditional von Neumann computers will never reach the performance these predictions require, so the solution does not lie in that direction. The von Neumann architecture is not designed for massive parallel computing, and massive parallelism is a necessary requirement of a big deep learning machine.
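The reason deep learning craves parallelism is visible even in the toy network above: a layer's forward pass is one big matrix multiplication, in which every output element can be computed independently and simultaneously. A sequential machine has to grind through them one at a time, as this rough comparison sketches (the timings will vary by machine; the point is the structural difference):

```python
import numpy as np
import time

A = np.random.rand(512, 512)
B = np.random.rand(512, 512)

# Element by element -- how a single sequential core conceptually
# works through the multiplication, one result at a time.
start = time.perf_counter()
C = np.zeros((512, 512))
for i in range(512):
    for j in range(512):
        C[i, j] = A[i, :] @ B[:, j]
sequential = time.perf_counter() - start

# One vectorised call that a parallel backend (multi-core CPU,
# GPU, or a neural chip) can spread across many units at once.
start = time.perf_counter()
C2 = A @ B
parallel = time.perf_counter() - start

print(f"element-by-element: {sequential:.3f}s, single matmul: {parallel:.3f}s")
```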

New neurosynaptic chips: If this were the end of the story, we should not be so worried about job losses. But IBM made a historic invention and created a circuit that has similarities with biological brain tissue. This happened in IBM's Almaden lab. The first circuit, released four years ago in 2011, contained 256 neurons and 1 million synapses. By 2014 it had grown to one million neurons and 256 million synapses. (See: http://goo.gl/VAvIcO)

This circuit is about a thousand times faster than an equivalent traditional computer, and it needs only a fraction of the size and energy of a traditional computer. IBM has demonstrated that this new machine can be taught in the same way as deep learning algorithms. (See: http://goo.gl/FSMAoX)
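The building block of such a chip differs from the artificial neurons sketched earlier: it is a spiking neuron that integrates incoming pulses over time and only fires when a threshold is crossed, which is a large part of why the hardware is so frugal with energy. Here is a minimal simulation of a leaky integrate-and-fire neuron, the standard textbook model of this behaviour (the parameter values are illustrative, not IBM's actual circuit constants):

```python
import numpy as np

# Leaky integrate-and-fire neuron: the membrane potential leaks
# toward rest, accumulates incoming spikes, and emits a spike of
# its own when it crosses the threshold.
leak, threshold, rest = 0.9, 1.0, 0.0
potential = rest
rng = np.random.default_rng(1)

for step in range(50):
    incoming = rng.random() < 0.3          # random input spike train
    potential = leak * potential + (0.4 if incoming else 0.0)
    if potential >= threshold:
        print(f"step {step}: spike!")
        potential = rest                   # reset after firing
```

Between spikes the neuron does essentially nothing, so an array of a million such units consumes power only where activity actually happens, unlike a von Neumann processor that clocks every cycle.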

These machines are NOT on the commercial market yet, but they will be soon, and when they arrive they will revolutionise the IT market completely. Their effect will be larger than that of computers and mobile devices together.

These machines can be taught very complex tasks in a very short time. It is most probable that traditional programming will be done solely by these machines within five years of their commercialisation.

(See: IBM Deep Dive 1–4 on YouTube; the first video: https://goo.gl/JAdpJe)
