Slow technological evolution

From time to time a technology reaches a hype state, as smartphones did a decade ago and AI does today. For a while there is an omnipotent euphoria, and it feels as if world history is changing in the blink of an eye.

Normally after this comes a hangover, and at least the enlightened individuals feel embarrassed. In most cases it is after this drop that the real development, the real changing of the world, begins. There is still another aspect to this: the organisation that produces the fundamental change is seldom capable of building any business out of it.


Here is a revealing example. Eastman Kodak was one of the largest players in the film and photography industry. It was able to develop a digital camera, in which the old celluloid film was replaced by an image sensor, but the company was not ready to build a new business model on the new technology. That was done by Japanese companies like Sony and Canon. Eastman Kodak went bankrupt in 2012 after a steep decline in its businesses from the year 2000 onwards. This shows clearly that companies – big and small – are shaped by their technologies. Organisational structures, people's skills and work processes are all tuned to the technology, so the whole business logic leans heavily on the chosen technologies. This is quite a universal phenomenon: almost the same thing happened to the Japanese camera companies when pocket cameras moved into smartphones.


At the turn of the millennium the ideas of object-oriented design and programming were hyped in the IT community. There was explosive growth in the use of OO languages, especially Sun Microsystems' Java. Sun successfully promoted its 3-tier client-server architecture with a clear logical division of duties. The world looked very bright from our – OO enthusiasts' – point of view.

Grady Booch said at an OO conference in the early 2000s that "in five years' time there will not be any OO conferences, because OO will have become mainstream". The prediction was correct, but the reasoning behind it was not. The OO languages became mainstream, but OO analysis and design did not. Had they done so, life would have become much easier for many, but the step was too big and too difficult for most of the industry, and as so many times in history, big short-term interests defeated long-term benefits. Here again big companies with huge economic power were facing the threat of destruction, because the size of the change was far too big for them to make. This is why big database companies like Oracle and IBM introduced SOA (Service-Oriented Architecture), which was actually a rather crude Trojan horse to turn history backward at least two steps. The whole concept is so complicated and fuzzy that it was clear from the beginning it was there just to muddy the water, and at the same time it appealed to masses of older developers who had great difficulty understanding the OO concept. This is a big loss of opportunities that are now gone, at least for a long time. Perhaps we will get our revenge when neural networks take over application design and implementation in the future.


This fact is good to keep in mind when following new hypes. The current hype is AI; it can be seen everywhere. When everyone is trying to get their share of it, the boundaries and interpretation of what actually counts as AI are heavily stretched. The current deep learning implementations on current computers are poor, because the computer architecture does not match neural networks at all. The real breakthrough is yet to be seen, because it requires neurons-on-chip designs. IBM's TrueNorth neurochips have demonstrated that the trick can be done. IBM has made the first workable version, but as it was an army project, it is likely that we will not hear about it any more. Other organisations striving toward that goal are so far behind that we are not able to predict the timetable. Current achievements with computers are good and we can benefit from them in many ways in everyday life. At the same time we should understand that, even if not today then just around the corner, the neuromachines are waiting to be used. They will be 100 to 1000 times faster than the current solutions, and the size of their neural networks will be magnitudes bigger than that of the current solutions. All this makes it totally impossible to predict their capabilities and understanding; that remains to be seen as time goes by.

Quick introduction to neural network applications and smart machines

Neural networks will, in a relatively short time, revolutionize the development of all mankind. The English-speaking world also uses the term cognitive computing. This change will be more significant than all the achievements of information technology to date, modern mobile technology included.
There are three perspectives through which this phenomenon can be perceived:

  1. Neural network theory and its algorithmic implementations on conventional von Neumann computers, including the current deep learning implementations. The core of this technological innovation is self-learning software applications.
  2. The discovery and development of neurosynaptic circuits to this day, and the future of this technology.
  3. The socio-economic implications of all the above.

Neural networks theory and its evolution

Mathematical models of neural networks have been around since the 1950s. Some of the first computer-simulated theoretical neural networks were implemented in the 1980s; in Finland they were developed by Professor Teuvo Kohonen. At that point computer processing capacity was not sufficient to present those solutions in full force. Thus this research was only restarted around 2006, with Google among those driving it.

At the moment these neural networks are achieving really spectacular things. The most important development is self-learning systems (also called deep learning).
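To make "self-learning" concrete, here is a minimal sketch, in plain Python, of the principle behind these systems: a tiny two-layer network that is never told the rule it must compute (here, XOR), only shown examples, and that adjusts its own weights by backpropagation until it has learned the rule. Real deep learning systems work on the same principle, just with many more layers and neurons.

```python
import math
import random

random.seed(1)

# Training examples: the XOR truth table. The network is never told
# the rule, only these input/output pairs.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

H = 6  # hidden neurons
w1 = [[random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0

def sig(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(x):
    h = [sig(w1[j][0] * x[0] + w1[j][1] * x[1] + b1[j]) for j in range(H)]
    return h, sig(sum(w2[j] * h[j] for j in range(H)) + b2)

lr = 0.5
for _ in range(15000):                 # the learning loop
    for x, target in data:
        h, out = forward(x)
        d_out = (out - target) * out * (1 - out)   # output error signal
        for j in range(H):                         # backpropagate it
            d_h = d_out * w2[j] * h[j] * (1 - h[j])
            w2[j] -= lr * d_out * h[j]
            w1[j][0] -= lr * d_h * x[0]
            w1[j][1] -= lr * d_h * x[1]
            b1[j] -= lr * d_h
        b2 -= lr * d_out

learned = [round(forward(x)[1]) for x, _ in data]
print(learned)
```

After training, `learned` should reproduce the XOR outputs 0, 1, 1, 0 even though that rule was never programmed in; that is the essence of self-learning software.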

The situation is described in a very clear and easily understandable way in Jeremy Howard's TED talk:

The wonderful and terrifying implications of computers that can learn (a TED talk)


Neurosynaptic circuits and machines

Traditional computer equipment and algorithms have achieved a lot, but the game that will blow the bank is neurosynaptic circuit technology. In this development IBM is clearly ahead of the others, and thus leads the way. IBM founded the SyNAPSE project with partners in May 2008. The following is project leader Dr. Modha's presentation of the situation in the year 2011:

Dr. Dharmendra S. Modha

KEYNOTE: Cognitive Computing: Neuroscience, Super Computing, Nanotechnology

The current state of the project was presented in IBM's Deep Dive seminar:

Synapse Deep Dive 1 06/11/2015

Synapse Deep Dive 2 06/11/2015

Synapse Deep Dive 3 06/11/2015

Synapse Deep Dive 4 06/11/2015

The socio-economic impacts

All of the above will dramatically accelerate the technological unemployment already ongoing in this millennium. This conversation was actually started by MIT researchers in their book Race Against the Machine.

MIT professor Erik Brynjolfsson and researcher Andrew McAfee's book:

Race Against the Machine: How the Digital Revolution is Accelerating Innovation, Driving Productivity, and Irreversibly Transforming Employment and the Economy

The following two talks will broaden the perspective:

Chrystia Freeland: The rise of the new global super-rich

Humans Need Not Apply

Finally, my own texts:


Common confusion in AI predictions

I have been following all kinds of AI discussions in many places. The confusion and high degree of inaccuracy have bothered me a lot, so I decided to make my own contribution to the subject.

When people use concepts like robot, artificial intelligence and machine learning, the concepts seem very fuzzy, with no clear indication of what exactly is meant by them. The whole of IT has been created within 65 years, which is an extremely short time. I have been working with computers since the early 1970s, when I learned my first programming language, FORTRAN, and the basics of the computer. When I compare the computers of that time, late in the last millennium, with those of today, they differ from each other in so many ways that they are hardly examples of the same concept. One thing I have learned during these 45 years is that in the very early stages of a development no one has come even near to predicting its structure or behaviour in advance. Actually, we have started to understand their true meaning and role only long after their appearance.

I will start from the words and their meanings.

A robot is a device that is capable of autonomous movement and of influencing its environment. A simple example is a robot vacuum cleaner. The most important aspect of this term is that it says nothing about the device's intelligence. Actually, most robots today have none! They are fixed industrial robots, usually with one powerful arm and programmed behaviour.

Artificial intelligence (AI) is a term that has been in use at least since the late 1970s, but its meaning has changed over the years. During the last millennium the overwhelming majority thought that AI would consist of expert systems, where the 'intelligence' was programmed in detail by a programming team. Eventually nearly everyone understood that the task was far too complex ever to be solved this way. There were some pioneers, like Teuvo Kohonen in Finland, who started to experiment with traditional computers on which they had programmed very primitive neural nets that were able to learn simple classification. Some in the industry laughed at them; the rest thought they were, and would always remain, toys.
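For a sense of what those early "toy" nets could do, here is a sketch of one of the most primitive learning machines, a single perceptron, taught the simple classification task of the logical AND. (This is an illustrative example of that era's simple classifiers, not Kohonen's self-organizing map.)

```python
# A single perceptron: one "neuron" with two weights and a bias,
# trained by the classic perceptron learning rule until it
# classifies its four inputs the way AND does.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]

w = [0.0, 0.0]
b = 0.0

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Repeat until the perceptron makes no mistakes; for linearly
# separable data like AND, termination is guaranteed.
for _ in range(100):
    errors = 0
    for x, target in data:
        delta = target - predict(x)
        if delta != 0:
            errors += 1
            w[0] += delta * x[0]
            w[1] += delta * x[1]
            b += delta
    if errors == 0:
        break

print([predict(x) for x, _ in data])  # [0, 0, 0, 1]
```

A single perceptron can only learn linearly separable tasks like this one, which is exactly why these nets looked like toys at the time.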

Then the Google lab decided to increase the number of neuron layers and created a self-learning neural net, and this was the beginning of the modern deep learning construct. Today the number of deep learning solutions is exploding, and the victories in this field have created lively discussions about automation and technological unemployment.

Wider and wider groups are now reporting on and discussing this. There is, however, one persistent mistake repeated in this discussion over and over again, and it lies in the predictions. Most writers will tell you that advances in this field are based on speeding up the circuitry of the traditional von Neumann computer. This is a completely wrong conclusion: traditional von Neumann computers will never reach the performance these predictions require, and the solution does not lie in that direction. The von Neumann architecture is not designed for the massive parallel computing that a big deep learning machine necessarily requires.

New neurosynaptic chips: If this were the end of the story, we shouldn't worry so much about job losses, but IBM made a historic invention and created a circuit that has similarities with biological brain tissue. This happened at IBM's Almaden lab. The first circuit, released in 2011, contained 256 neurons and 262,144 synapses. By 2014 this had grown to one million neurons and 256 million synapses.

This circuit is about a thousand times faster than an equivalent traditional computer, and it needs only a fraction of the size and energy of a traditional computer. IBM has demonstrated that this new machine can be taught in the same way as deep learning algorithms.

These machines are NOT on the commercial market yet, but they will be soon, and when they come they will revolutionise the IT market totally. The effect of this will be larger than that of computers and mobile devices together.

These machines can be taught very complex tasks in a very short time. It is quite probable that traditional programming will be done solely by these machines within five years of their commercialization.

(see: IBM Synapse Deep Dive 1–4 on YouTube)



IBM's 'brain machine', a new kind of computer built around a neurochip, is reaching maturity


At IBM's Almaden laboratory, IBM is developing a new kind of computer based on a neural network simulating that of the human brain.

This development began in 2008, and its first result, in 2011, was the TrueNorth chip, with 256 programmable nerve cells and 262,144 synapses. In 2014 IBM published a chip consisting of 4096 cores, with 1 million nerve cells and 256 million synapses. The demos driven by this circuit are already quite spectacular.

In addition, IBM has developed a "programming environment" for this new machine. As described, the architecture of this new machine is completely different from that of today's computers. Thus the machine cannot be programmed using traditional programming languages, and its programming does not resemble the programming of current computers. Programming takes place in the same way as programming neural networks. In fact, the whole term "programming" is quite misleading; rather we could talk – and we do talk – about teaching.

I have believed in this development ever since 2011, when I first heard about it. Now IBM has released four videos (11 June 2015) on YouTube titled IBM Synapse Deep Dive (IBM Research Colloquium on Cognitive Systems: Brain-Inspired Computing at IBM Research – Almaden in San Jose, CA).

These provide a very comprehensive picture of the current status of the device's development. Everything seems to be ready for a commercial launch of a set of products, and so does the whole technology development trend. IBM has developed TrueNorth both for mobile devices and for its planned supercomputer. Currently IBM plans to create a rack of 4096 chips with about 1.06 × 10^12 = 1 060 000 000 000, i.e. one trillion, synapses. The next step is a machine 100 times that size, reaching about 1.06 × 10^14, a hundred trillion synapses.
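The scaling is worth checking with plain arithmetic. A quick sketch using the per-chip TrueNorth figures quoted earlier (1 million neurons, 256 million synapses per chip):

```python
# Per-chip figures for the 2014 TrueNorth chip (from IBM's publications).
neurons_per_chip = 1_000_000
synapses_per_chip = 256_000_000

# A rack of 4096 chips, as in the planned configuration.
rack_synapses = 4096 * synapses_per_chip
print(f"rack: {rack_synapses:.2e} synapses")      # about 1e12, one trillion

# The next step: a machine 100 times that size.
big_synapses = 100 * rack_synapses
print(f"100 racks: {big_synapses:.2e} synapses")  # about 1e14

# Common estimates put the human brain at 1e14-1e15 synapses, so the
# larger machine indeed approaches brain scale, by synapse count at least.
```

This lands close to, though not exactly at, the ~1.06 × 10^12 figure quoted above; the small difference presumably comes from rounding in the original sources.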

This last device already approaches the human brain in size.

The four new YouTube videos:

No 1 describes the theoretical starting point for a device imitating the human brain, and its basic structure and mode of operation.

No 2 starts with a number of deep learning applications for visual recognition, followed by a small example of how to train a TrueNorth circuit to recognize handwritten numbers with just 5 TrueNorth cores. This makes it clear that applications for the production device are quite easy to train.

At the same time the team has developed a set of tooling based on a conventional computer, which the team calls the Compass system. Externally this works in the same way as TrueNorth, but the neural network is carried out on a traditional computer and can be used in the same way as other traditional deep learning networks.

The two most important results for TrueNorth are performance and power consumption. When it is compared with neural networks implemented on traditional von Neumann computers, the findings are that the new device is 1000 times more powerful than a neural network implemented by conventional means, with a power consumption of only 1/400,000 of the conventional one's.

This means that if the deep learning systems implemented on current devices perform like bicycles, the new equipment performs like a supersonic jet fighter.

This equipment is almost ready for mass production, which means a total socio-economic revolution within five years!

This circuit technology will enable the self-driving cars and intelligent robots.

IBM SyNAPSE Deep Dive Part 1

IBM SyNAPSE Deep Dive Part 2

IBM SyNAPSE Deep Dive Part 3

IBM SyNAPSE Deep Dive Part 4


2014 in review

The stats helper monkeys prepared a 2014 annual report for this blog.

Here’s an excerpt:

A New York City subway train holds 1,200 people. This blog was viewed about 6,000 times in 2014. If it were a NYC subway train, it would take about 5 trips to carry that many people.


Effectiveness and high quality as the end result of applying my method

My ADDD application method is a very pure continuation of the OOA–OOI development tradition.

I do not know of any other quite like it. The starting point of my method is of course the 3-tier architecture, but that is nothing special; it is dominant in all OO approaches, and here it is more a result of more fundamental aspects.

The cornerstone of the method is the idea of an abstract domain object model. This fundamental carries with it two dimensions: a special processing sequence and the architecture of the outcome.

The time dimension of the process is very important. It is essential to create the domain model with domain experts, and the whole model creation process is a very delicate matter. Ideally the domain OO model should be created from scratch. This, however, requires a very experienced modeler, because in my long experience it is quite difficult to do well. For this reason I have created a few very general domain models for different business fields, so that people with less experience can do better by studying them first, or by taking one as a base and modifying it rather than starting from nothing.

In any case, this domain creation process should consist of half-day to full-day sessions, at least two sessions per week. The modeling group should remain the same during the whole modeling phase; it is very difficult to bring a new member on board later in the process! This activity shouldn't take longer than 20 working sessions in total. If it takes considerably more, that is a sign of a very bad mistake: creating too detailed a model. The most important feature of the final abstract model is to capture the structure, or topology, of the reality, so that the fundamental (= abstract) structure of reality is reflected in the class model of the domain.

The domain OO model consists of n + 1 elements: one class model diagram with 30–60 domain classes, and n collaboration diagrams describing the most important domain processes. One important point of this model is that it stops too much detail from surfacing too early. The most daunting enemy and evil of software development is too much detail too early! These diagrams are everything that comes out of the modeling.

Use cases I also consider useless and even harmful! They have at least two vicious features: first, they force you into design before analysis is completed, and second, they tend to push details up to the surface.

After the domain model is created, it can be implemented and tested. Oh yes, well before any application layer is designed! The nice thing about domain implementation is that quite a lot of the code can be generated from the model itself.

An important aspect of this cornerstone is that in the implementation the middle domain object layer is completely and totally isolated from the rest of the implementation. This means that this layer is completely unaware of the other layers and does not know who is using it or why!

After all this is completed we can proceed to design the application layer. Here the emphasis is on the word design: this is where we create something new and design the work-flows in the application to make the work as easy and straightforward as possible. Of course, this layer has to know all about the domain class structure and all the associations it needs, as well as the services the objects at hand can provide.
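As a sketch of the two layers and the one-way dependency described above (the class and function names here are my own hypothetical illustrations, not taken from the ADDD method itself): the domain layer models only the business reality and imports nothing from the layers around it, while the application layer knows the domain in detail.

```python
# --- Domain object layer ---------------------------------------------
# Pure business concepts. No imports from UI, application or
# persistence code: this layer has no idea who uses it, or why.

class Product:
    def __init__(self, name, price):
        self.name = name
        self.price = price

class Order:
    def __init__(self, customer_name):
        self.customer_name = customer_name
        self.lines = []                      # (product, quantity) pairs

    def add_line(self, product, quantity):
        self.lines.append((product, quantity))

    def total(self):
        return sum(p.price * q for p, q in self.lines)

# --- Application layer -----------------------------------------------
# Knows the domain classes and their services in detail, and arranges
# a work-flow on top of them. The dependency points only downward,
# never back up into this layer.

def place_order(customer_name, chosen_items):
    order = Order(customer_name)
    for product, quantity in chosen_items:
        order.add_line(product, quantity)
    return order

coffee = Product("Coffee", 4.0)
tea = Product("Tea", 3.0)
order = place_order("Acme Oy", [(coffee, 2), (tea, 1)])
print(order.total())  # 11.0
```

Note that the domain classes can be implemented and tested exactly as written, with no application layer in existence at all, which is the point made above.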

Here the use of use cases is not as harmful as in the previous step, but it is not necessary either, far from it. Actually one can easily do this part with collaboration diagrams too.

The third layer is of course object persistence. I advocate OO databases, but it depends on circumstances, and one can do it with an ORM as well.

Finally, why is this true? Here is the theoretical foundation: it is the balance between the amount of coherence and the number of entities and their relations. Here is a graph that gives you the overall complexity minimum:
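The shape of that trade-off can also be sketched numerically. The cost formulas below are my own illustrative assumptions, not the method's actual metric: splitting a fixed amount of behaviour over more classes makes each class simpler, but the number of potential relations between classes grows roughly quadratically, so total complexity has a minimum at a moderate number of entities.

```python
# Toy model of the complexity trade-off (illustrative numbers only).
WORK = 1000.0     # fixed total behaviour to implement
REL_COST = 0.05   # assumed cost per potential pairwise relation

def total_complexity(n):
    internal = WORK / n                        # each entity gets simpler
    relational = REL_COST * n * (n - 1) / 2    # pairwise relations grow
    return internal + relational

best = min(range(2, 200), key=total_complexity)
print(best)   # a moderate optimum: neither 2 giant classes nor 200 tiny ones
```

With these toy numbers the minimum lands at a few dozen entities; the exact value depends entirely on the assumed cost constants, but the existence of a minimum between the two extremes does not.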


A real CRM system

I know very little about commercial CRM systems, and I have been very doubtful about them ever since they appeared on the marketplace. The main reason is that customership is not a set of attributes of a company or a person: it is, in its simplest form, a relationship between a company or a person and a set of our (my) products.

When a relationship gets a bit complex, it needs an abstraction, an event object, to give it flesh. In this way we can associate more things with it.

This is why a real CRM system always contains the whole product portfolio, and in a fully integrated domain-driven model, to some extent the whole production system as well. In this way CRM is always part of ERP!
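A minimal Python sketch of this idea (the class names are my own hypothetical illustrations, not taken from the actual model): customership is reified as an object of its own that relates a party to products and carries events, instead of being a set of attributes on a person or a company.

```python
class Party:
    """A person or a company."""
    def __init__(self, name):
        self.name = name

class Product:
    def __init__(self, name):
        self.name = name

class Customership:
    """The relationship itself, made into an object so that events
    and further associations can be attached to it."""
    def __init__(self, party):
        self.party = party
        self.products = []   # the products the relationship is built on
        self.events = []     # orders, complaints, renewals, ...

    def add_product(self, product):
        self.products.append(product)

    def record_event(self, description):
        self.events.append(description)

    def exists(self):
        # There is no customership without a product.
        return len(self.products) > 0

acme = Party("Acme Oy")
rel = Customership(acme)
print(rel.exists())            # False: no product, no customership
rel.add_product(Product("Maintenance contract"))
rel.record_event("Contract signed")
print(rel.exists())            # True
```

Because `Customership` refers to products directly, any system built on it necessarily contains the product portfolio, which is exactly the argument above for CRM being part of ERP.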

I created a quite abstract model around the customership event object. Here it is:



The motto is: there is no customership without a product!