Slow technological evolution

From time to time some technology reaches a hype state, like smartphones a decade ago or AI today. For a while there is an omnipresent euphoria, and it feels like world history is changing in the blink of an eye.

Normally a hangover follows, and at least the enlightened individuals feel embarrassed. In most cases the real development, and the real changing of the world, begins after this drop. There is still another aspect to this: the organisation that produces the fundamental change is seldom capable of creating any business out of it.

Here is a revealing example. Eastman Kodak was one of the largest players in the film and photography industry. It was able to develop a digital camera, where the old celluloid film was replaced by an image sensor. But the company was not ready to start developing a new business model based on the new technology. This was done by Japanese companies like Sony and Canon. Eastman Kodak went bankrupt in 2012 after a steep decline in its business from the year 2000 onwards. This shows clearly that companies, big and small, are shaped by their technologies. Organisational structures, people's skills and work processes are all tuned to the technology, so the whole business logic leans heavily on the chosen technologies. This is quite a universal phenomenon. Almost the same happened to the Japanese camera companies as pocket cameras were absorbed into smartphones.

At the turn of the millennium the ideas of object-oriented design and programming were hyped in the IT community. The usage of OO languages grew explosively, especially Sun Microsystems' Java. Sun successfully promoted its 3-tier client-server architecture with a clear logical division of duties. The world looked very bright from our point of view, the OO enthusiasts'.

Grady Booch said at some OO conference in the beginning of the 2000s that "in 5 years' time there will not be any OO conferences, because OO will have become mainstream". Well, the prediction was correct, but the reasoning behind it was not. The OO languages became mainstream, but OO analysis and design did not. If they had, life would have been much easier for many, but the step was too big and too difficult for most of the industry, and as so many times in history, big short-term interests defeated long-term benefits. Here once again the big companies with huge economic power were facing the threat of being destroyed, because the size of the change was far too big for them to make. This is why big database companies like Oracle and IBM introduced SOA (Service Oriented Architecture), which was actually a very crude Trojan horse to turn history backward at least two steps. The whole concept is so complicated and fuzzy that it was clear from the beginning it was there just to muddy the water, and at the same time it appealed to the masses of older developers who had big difficulties understanding the OO concept. This is a big loss of opportunities, gone at least for a long time. Perhaps we get our revenge when neural networks take over application design and implementation in the future.

This fact is good to keep in mind when following new hypes. The current hype is AI; it can be seen everywhere. When everyone is trying to get their share of it, the boundaries and interpretation of what actually is AI are heavily stretched. The current deep learning implementations on current computers are poor, because the computer architecture doesn't match neural networks at all. The real breakthrough is yet to be seen, because it requires a neurons-on-chip design. IBM's TrueNorth neurochips have demonstrated that the trick can be done. They have made the first workable version, but as it was a military project, it is likely that we won't hear about it any more. Other organisations striving toward that goal are so far behind that we are not able to predict the timetable. Current achievements with computers are good, and we can benefit from them in many ways in everyday life. At the same time we should understand that, even if not today, the neuromachines are waiting just around the corner. They will be from 100 to 1000 times faster than current solutions, and the size of their neural networks will be magnitudes bigger than in current solutions. All this makes it totally impossible to predict their capabilities and understanding. Those remain to be seen as time goes by.

Echoes from the golden age of OO

OO languages rule the programming language domain sovereignly. The OOA method is quite evidently by far the best way to begin the application development process. However, this fact was somehow lost in the middle of the 2000s.

Here is a short retrospective of application analysis methodologies from 1980 onwards. Behind this development, and the cause of it, is the ultra-rapid development of computers. Computers are the latest category of technical evolution, and their rate of change has been phenomenal. Following this exponential technical development, and understanding the development as a whole, is extremely difficult, because all the development in the past seems quite minimal compared to the recent. So it is very difficult to give real credit to those past steps, as they seem so tiny compared to the later ones. One example of this development: the IBM PC published in 1981 had a processor with a 4.77 MHz clock rate, and today PC clock rates are around 2 GHz (= 2000 MHz). This is about 400 times the original speed. This enormous change has had a very significant impact on the methods of working.

The development took us from relational databases to object orientation. In the 1970s Alan Kay and his colleagues created the first object-oriented language, Smalltalk, at Xerox's research center PARC. The language was commercialized in the early 1980s. This development continued with the rise of Object-Oriented Analysis, an approach in which the world is viewed in terms of objects and their relations. This phase started in the early 1990s. Among the first pioneers were Grady Booch, Peter Coad, Adele Goldberg and Jim Rumbaugh, just to name a few.

I ran into OOA in October 1989, at Peter Coad's two-day training in Stockholm. This occasion changed the course of my professional life. After half a day's lecture I realized that these people had understood something fundamentally important, something that easily solves most of the acute application problems, and this first impression proved to be true.

A couple of years went by, and then we acquired the only solid commercial Smalltalk of the time from ParcPlace. We started to study OO programming with Smalltalk. It wasn't easy, because it was very difficult to get any training, so our advance was based on self-study.

While we developed our Smalltalk skills, I continued to study OOA. Towards the end of the 1990s I started to work in a software house and began giving training on OOA. During the first half of the 2000s I created a lot of OOA models and developed the process in a more abstract and agile direction.

My ADDD (Abstract Domain Driven Development)

OOA/OOD matured during the first half of the 1990s (see: https://bit.ly/2K9THFH ). One of the latest and best books in this field is Grady Booch's Object Solutions (1996). It is more or less a testament of that whole field.

As a whole this was one of the greatest achievements in the history of computing. In spite of this, I saw a significant weakness there, one that was revealed by the efforts to use this brand new methodology: the lack of separation of concerns. The methodology analysed the application as a whole, which increases complexity enormously. The OO application development method, on the other hand, emphasised the 3-tier architecture, where the domain and the application views are considered separate layers. This led me to experiment with doing the separation right at the beginning of the analysis. That is why I called my method Domain Analysis. The difference was considering the domain, the interesting part of reality, as the object of the analysis. This way the application part was completely dropped. The second step is then to create the object model of the application, and this model is connected to the domain model by messages, as objects always are. This allows the domain analysis, and also the implementation, to be done without any knowledge of the application layer. This meant a significant reduction of the overall complexity. It also allowed several application layer designs and implementations, for instance for separate usages or implementation technologies.
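
To make the separation concrete, here is a minimal sketch in Python (my own illustration with hypothetical class names, not code from the original method): the domain layer knows nothing about any application layer, and the application object drives the domain purely by sending messages.

    # Domain layer: a pure model of the interesting part of reality.
    # It imports nothing from, and holds no reference to, any application layer.
    class Account:
        def __init__(self, owner):
            self.owner = owner
            self.balance = 0

        def deposit(self, amount):
            if amount <= 0:
                raise ValueError("deposit must be positive")
            self.balance += amount

    # Application layer: designed later and separately. It talks to the
    # domain only by sending messages (calling methods) on domain objects.
    class DepositScreen:
        def __init__(self, account):
            self.account = account

        def submit(self, amount_text):
            self.account.deposit(int(amount_text))
            return "New balance: %d" % self.account.balance

Several application layers, a GUI, a web service, a batch interface, can then be written against the same untouched domain classes.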

 

Situation today

In my blog, IT-Dinosaurus on WordPress, I wrote a tutorial covering ADDD from 2009 to 2014. I got a considerable number of readers during those years; the peak came between 2010 and 2014. One notable thing is that most visitors came from India, the US and Canada! Even more surprising is the fact that in 2016 the number of visitors was still 4,500, and the ranking of the countries was Canada, US and India.

In the mid 1990s I (and also Grady Booch) thought that OOA/OOD/OOP would be the standard, most common way to develop applications, but it didn't turn out that way. In the early 2000s the big database companies were afraid of losing their business to OO databases, and they started a big campaign to stop OOA usage. They came out with SOA, which was devastating for the development of the software industry and took us several steps backwards in history, but the old dinosaurs, the relational databases, were able to maintain their market shares.

The thing that surprises me today is the number of visitors to my old blog. It shows me that there are still a lot of enlightened persons in this field who have rediscovered the true value and meaning of OO today.

The state of OO development in 2017

I started this blog on OO application development in 2009 and posted the last post on the topic in 2014.

The core of my approach is centered on an abstract domain OO model. To create these models one needs a deep understanding of the underlying theoretical concepts; this cannot be achieved just by following a set of rules. The divider is the abstraction level and thus the model size. If the abstraction level is too high, the model has no value: it does not promote the understanding of the domain. If the abstraction level is too low, it is even worse, because too much detail prevents separating the important from the unimportant, and there just isn't a way back. A too detailed model poisons the whole process.

A just-right model creates a great opportunity not just for a good and flexible domain implementation but also for the user interface part of the application, because early separation of concerns dramatically reduces the complexity of both sides.

I started to write the posts in late 2009, and they have gathered 48,000 hits altogether. I posted my last OO article in early 2014. OO did not conquer the world as I and Grady Booch predicted around 1995. For this reason I am gladly surprised by the current hits on my blog. For most of the time my readers came from the US, Canada and India, but almost the whole world was covered.

I retired 3 years ago, and I don't have a hands-on touch with the current development trends. My current surprise is the activity around my blog: in 2015 and 2016 there were about 4,300 views and 2,000 visitors yearly, and most visitors came from the US, Canada and India. Even this year there have been about 1,000 views and 580 visitors from the same countries. I am both happy and honored by this.

I would be curious to know the motives for these visits and the current state of OO development in the field.

Quick introduction to neural network applications and smart machines

Neural networks will, in a relatively short time, revolutionize the development of all mankind. The English-speaking world also uses the term cognitive computing. This change will be more significant than all the information technology achievements to date, modern mobile technology included.
There are three perspectives through which this phenomenon can be perceived.

  1. Neural network theory and its algorithmic implementations on conventional von Neumann computers, including the current deep learning implementations. The core of this technological innovation is self-learning software applications.
  2. Neurosynaptic circuit discovery and development to this day, and the future of this technology.
  3. The socio-economic implications of all the above.

Neural networks theory and its evolution

Mathematical models of neural networks have been around since the 1950s. Some of the first computer-simulated theoretical neural networks were implemented in the 1980s; in Finland they were developed by Professor Teuvo Kohonen. At that point computer processing capacity was not sufficient to present those solutions in full force. The research was picked up again around 2006, with Google among those involved.

At the moment these neural networks are getting into really spectacular stuff. The most important development is self-learning systems (also called Deep Learning).
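
To give a feel for what "self-learning" means, here is a toy sketch (my own example, not from the talk below): a single artificial neuron adjusts its own weights by gradient descent until it reproduces the AND function. Deep learning stacks many layers of such units, but the principle is the same.

    import numpy as np

    # Training data: the logical AND function.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([0.0, 0.0, 0.0, 1.0])

    rng = np.random.default_rng(0)
    w = rng.normal(size=2)   # weights the neuron will learn
    b = 0.0                  # bias term
    lr = 0.5                 # learning rate

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    for _ in range(5000):
        out = sigmoid(X @ w + b)             # forward pass
        err = (out - y) * out * (1.0 - out)  # error signal
        w -= lr * (X.T @ err)                # the "learning": weight updates
        b -= lr * err.sum()

    print(np.round(sigmoid(X @ w + b), 2))   # approaches [0, 0, 0, 1]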

The situation is described in a very clear and easy-to-understand way in Jeremy Howard's TED talk:

The wonderful and terrifying implications of computers that can learn (a TED talk)

http://goo.gl/V6zOh4

 

Neurosynaptic circuits and machines

Traditional computer equipment and algorithms have achieved a lot, but the game that will blow the bank is neurosynaptic circuit technology. In this development IBM is clearly ahead of the others and thus leads the way. IBM founded the SyNAPSE project with partners in May 2008. The following presentation by project leader Dr. Dharmendra S. Modha describes the situation in 2011:

Dr. Dharmendra S. Modha

KEYNOTE: Cognitive Computing: Neuroscience, Super Computing, Nanotechnology

http://bit.ly/18y66z7

The current state of the project was presented in an IBM Deep Dive seminar:

Synapse Deep Dive 1 11.06.2015

https://goo.gl/zc1LFL

Synapse Deep Dive 2 11.06.2015

https://goo.gl/32F2n

Synapse Deep Dive 3 11.06.2015

https://goo.gl/nat0w8

Synapse Deep Dive 4 11.06.2015

https://goo.gl/6OgVQL

The socio-economic impacts

All the above will dramatically accelerate the technological unemployment already ongoing in this millennium. This conversation was actually started by MIT researchers in their book Race Against the Machine.

MIT professor Erik Brynjolfsson's and researcher Andrew McAfee's book:

Race Against the Machine: How the Digital Revolution is Accelerating Innovation, Driving Productivity, and Irreversibly Transforming Employment and the Economy

http://amzn.to/15aDgjz

The following two presentations will broaden the perspective:

Chrystia Freeland: The rise of the new global super-rich

http://goo.gl/3Po0zX

Humans Need Not Apply

https://goo.gl/8piZSM

Finally, my own texts:

https://itkritiikki.wordpress.com/

 

Common confusion in AI predictions

I have been following all kinds of AI discussions in many places. The confusion and high degree of inaccuracy have bothered me a lot, so I decided to give my own contribution.

When people use concepts like robot, artificial intelligence, machine learning and so forth, all the concepts seem to be very fuzzy, without a clear indication of what exactly is meant by them. The whole of IT has been created within 65 years, which is an extremely short time. I have been working with computers from the early 1970s, when I learned my first programming language, FORTRAN, and the basics of the computer. When I compare the computers of that time, late last millennium, with those of today, they differ from each other in so many ways that they hardly seem examples of the same concept. One thing I have learned during these 45 years is that at the very early stages of a development no one has come even near to predicting its structure or behaviour in advance. Actually, we have only started to understand the true meaning and role of these technologies long after their appearance.

I will start from words and their meaning.

A robot is a device that is capable of autonomous movement and of influencing its environment. A simple example could be a robot vacuum cleaner. The most important aspect of this term is that it does not state anything about the device's intelligence. Actually, most robots today have none! They are fixed industrial robots, usually with one powerful arm and programmed behaviour.

Artificial Intelligence (AI) is a term used at least since the late 1970s, but the meaning of the term has changed over the years. During the last millennium the overwhelming majority thought AI would consist of expert systems, where the 'intelligence' was programmed in detail by a programming team. Finally nearly all understood that the task was far too complex to ever be solved this way. There were some pioneers, like Teuvo Kohonen in Finland, who started to experiment with traditional computers, programming very primitive neural nets that were able to learn to do simple classification. Some in the industry laughed at them. The rest thought they were, and would always be, toys.

Then the Google lab decided to increase the number of neuron layers and created a self-learning neural net; this was the beginning of the modern deep learning construct. Today the number of deep learning solutions is exploding, and the victories in this field have created lively discussions about automation and technological unemployment.

Now wider and wider groups have been reporting on and discussing this. There is, however, one persistent mistake repeated in this discussion over and over again, and it is in the predictions. Most writers will tell you that further advances in this field will come from speeding up the circuitry of the traditional von Neumann computer. This is a completely wrong conclusion: traditional von Neumann computers will never reach the performance these predictions require, and the solution is not in that direction. The von Neumann architecture is not designed for massively parallel computing, which is a necessary requirement of a big deep learning machine.
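
The reasoning is easy to see in code. One layer of a neural network is a single large matrix-vector product, a mass of independent multiply-adds that neural hardware can do simultaneously, while a conventional processor grinds through them essentially in sequence. A small illustration:

    import numpy as np

    # One layer of 1000 neurons over 1000 inputs: a million multiply-adds.
    rng = np.random.default_rng(0)
    weights = rng.normal(size=(1000, 1000))
    inputs = rng.normal(size=1000)

    # Every row (every neuron) is independent of the others, so in
    # principle all 1000 could fire at once; a von Neumann machine
    # must stream the data through a small number of execution units.
    activations = np.tanh(weights @ inputs)
    print(activations.shape)  # (1000,)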

New neurosynaptic chips: if this were the end of the story we shouldn't be so worried about job losses, but IBM made a historic invention and created a circuit that has similarities with biological brain tissue. This happened at IBM's Almaden lab. The first circuit, released in 2011, four years ago, contained 256 neurons and 262,144 synapses. By 2014 it had grown to one million neurons and 256 million synapses. (see: http://goo.gl/VAvIcO )

This circuit is about a thousand times faster than the equivalent on a traditional computer, and it needs only a fraction of the size and energy of a traditional computer. IBM has demonstrated that this new machine can be taught in the same way as deep learning algorithms. (see: http://goo.gl/FSMAoX )

These machines are NOT on the commercial market yet, but they will be soon, and when they come they will revolutionise the IT market totally. The effect will be larger than that of computers and mobile devices together.

These machines can be taught very complex tasks in a very short time. It is quite probable that traditional programming will be done solely by these machines within 5 years of their commercialization.

(see: IBM Deep Dive 1-4 on YouTube; the first video: https://goo.gl/JAdpJe )

IBM's 'brain machine', a new kind of computer built around a neurochip, is reaching maturity

At IBM's Almaden laboratory, IBM is developing a new kind of computer based on a neural network simulating that of the human brain.

This development began in 2008, and its result in 2011 was the TrueNorth chip prototype, with 256 programmable nerve cells and 262,144 synapses. In 2014 IBM published a chip consisting of 4,096 cores, with 1 million nerve cells and 256 million synapses. The demos driven by this circuit are already quite spectacular.

In addition to this, IBM has developed a "programming environment" for the new machine. As described, the architecture of this new machine is completely different from today's computers. Thus the machine cannot be programmed using traditional programming languages, and programming it doesn't resemble the programming of current computers: it takes place in the same way as programming neural networks. In fact the whole term 'programming' is quite misleading; rather we could talk, and do talk, about teaching.

I have believed in this development ever since 2011, when I first heard about it. Now IBM has released four videos (11.06.2015) on YouTube titled IBM Synapse Deep Dive (IBM Research Colloquium on Cognitive Systems: Brain-Inspired Computing at IBM Research – Almaden in San Jose, CA).

These provide a very comprehensive picture of the current status of the device's development. Everything seems to be ready for a commercial launch of a set of products, and so does the whole technology development trend. IBM has developed TrueNorth both for mobile devices and for a planned supercomputer. Currently IBM plans to create a 4,096-chip rack with 4096 × 256 million ≈ 1.05 × 10^12, about one trillion, synapses. The next step is to create a machine 100 times that size, reaching about 1.05 × 10^14, one hundred trillion, synapses.
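
As a back-of-the-envelope check of these figures (assuming 256 million synapses per chip, as above):

    per_chip = 256_000_000        # synapses on one TrueNorth chip
    rack = 4096 * per_chip        # the planned 4096-chip rack
    print("%.3e" % rack)          # ~1.049e+12, about one trillion
    print("%.3e" % (100 * rack))  # ~1.049e+14, a hundred trillion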

This last device is already at the scale of the human brain.

The four new YouTube videos:

No 1 describes the theoretical starting point for a device imitating the human brain, and the device's basic structure and mode of operation.

No 2 starts, as many deep learning applications do, with a visual recognition process, followed by a small example of how to train a TrueNorth circuit to recognize handwritten digits with just 5 TrueNorth cores. From this it becomes clear that applications for the production device are quite easy to train.

At the same time the team has developed a conventional-computer-based counterpart, which the team calls the Compass system. Externally this works in the same way as TrueNorth, but the neural network is carried out on a traditional computer and can be used in the same way as other traditional deep learning networks.
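
The handwritten-digit exercise itself is easy to reproduce on a conventional computer. Here is a hedged sketch with scikit-learn (my own example, not IBM's Compass toolchain), training a small network on the classic 8x8 digit images:

    from sklearn.datasets import load_digits
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    # The classic small handwritten-digit data set (8x8 pixel images).
    X, y = load_digits(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, random_state=0)

    # A modest one-hidden-layer network is enough for this task.
    net = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500,
                        random_state=0)
    net.fit(X_train, y_train)
    print("test accuracy: %.2f" % net.score(X_test, y_test))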

The two most important results of TrueNorth are performance and power consumption. When it is compared with neural networks implemented on traditional von Neumann computers, the findings are that the new device is 1000 times more powerful than a neural network implementation by conventional means, with a power consumption of only 1/400,000 of the conventional one.

This means that if current deep learning systems perform like bicycles, the new equipment performs like a supersonic jet fighter.

Since this equipment is almost ready for mass production, this means a total socio-economic revolution within 5 years!

This circuit technology will enable self-driving cars and intelligent robots.

IBM Synapse Deep Dive Part 1

http://bit.ly/1e9jRsl

IBM Synapse Deep Dive Part 2

http://bit.ly/1MPv3px

IBM SyNAPSE Deep Dive Part 3

http://bit.ly/1IOCoHf

IBM SyNAPSE Deep Dive Part 4

http://bit.ly/1BbPsnd

2014 in review

The WordPress.com stats helper monkeys prepared a 2014 annual report for this blog.

Here’s an excerpt:

A New York City subway train holds 1,200 people. This blog was viewed about 6,000 times in 2014. If it were a NYC subway train, it would take about 5 trips to carry that many people.

Effectiveness and high quality as the end result of applying my method

My ADDD application method is a very pure continuation of the OOA/OOD development tradition.

I do not know of any other quite like it. The starting point of my method is of course the 3-tier architecture, but that is nothing special; it is dominant in all OO approaches. Rather, it is a result of more fundamental aspects.

The cornerstone of the method is the idea of an abstract domain object model. This fundamental carries with it two dimensions: a special processing sequence, and the architecture of the outcome.

The time dimension of the process is very important. It is essential to create the domain model together with domain experts; the whole model creation process is a very delicate matter. The domain OO model creation should ideally start from scratch. This however requires a very experienced modeler, because in my long experience it is quite difficult to do well. For this reason I have created here a few very general business-field domain models, so that people with less experience can do better by first studying them and then modifying them as a base, rather than starting from scratch.

Anyway, this domain creation process should consist of half-day to full-day sessions, at least two sessions per week. The modeling group should remain the same during the whole modeling phase; it is very difficult to take a new member on board later in the process! The activity shouldn't take longer than 20 working sessions in total. If it takes considerably more, that is a sign of a very bad mistake: creating a too detailed model. The most important feature of the final abstract model is to give the structure, or topology, of the reality, so that the fundamental (= abstract) structure of reality is reflected in the class model of the domain.

The domain OO model consists of n + 1 elements: one class model diagram with 30-60 domain classes, and n collaboration diagrams describing the most important domain processes. One important point of this model is that it stops too much detail from surfacing too early. The most daunting enemy and evil of software development is too much detail too early! This is everything that comes out of the modeling.

I also consider use cases useless and harmful! They have at least two vicious features: first, they force you into design before analysis is completed, and second, they tend to push details up to the surface.

After the domain model is created, it can be implemented and tested. Oh yes, way before any application layer is designed! The nice thing about domain implementation is that quite a lot of code can be generated from the model itself.
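
In practice this means the domain layer gets its own unit tests long before any screen or service exists. A minimal sketch of the idea, with a hypothetical Order class standing in for a real domain class:

    import unittest

    # Domain class under test: no UI, no database, no application layer.
    class Order:
        def __init__(self):
            self.lines = []

        def add_line(self, product, quantity):
            self.lines.append((product, quantity))

        def total_quantity(self):
            return sum(quantity for _, quantity in self.lines)

    class OrderTest(unittest.TestCase):
        def test_total_quantity(self):
            order = Order()
            order.add_line("saw", 2)
            order.add_line("hammer", 1)
            self.assertEqual(order.total_quantity(), 3)

    if __name__ == "__main__":
        unittest.main()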

An important aspect of this cornerstone is that in the implementation the middle domain object layer is completely and totally isolated from the rest of the implementation. This means that this layer is completely unaware of the other layers and doesn't know who is using it or why!

After all this is completed we can proceed to design the application layer. Here the emphasis is on the word design: this is where we create the new and design the workflows in the application, to make the work as easy and direct as possible. Of course this layer has to know all about the domain class structure and all the associations it needs, as well as the services the objects at hand can provide.

Here the use of use cases is not as harmful as in the previous step, but it is not necessary either, far from it. Actually one can easily do this too with collaboration diagrams.

The third layer is of course object persistence. I advocate OO databases, but it depends on the circumstances, and one can do it with an ORM as well.
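
With an OO database the domain objects are stored as they are, with no mapping onto tables. A minimal ZODB sketch (module and class names as I recall them from the ZODB distribution; verify against the current documentation):

    import ZODB, ZODB.FileStorage
    import transaction
    from persistent import Persistent

    # A domain class becomes storable simply by subclassing Persistent;
    # the object graph is saved as-is, with no relational mapping.
    class Customer(Persistent):
        def __init__(self, name):
            self.name = name

    db = ZODB.DB(ZODB.FileStorage.FileStorage("domain.fs"))
    connection = db.open()
    root = connection.root()
    root["customers"] = [Customer("Acme Oy")]
    transaction.commit()
    db.close()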

Finally, why is this true? Here is the theoretical foundation: it is the balance between the amount of coherence and the number of entities and their relations. Here is a graph that gives you the overall complexity minimum:

[Graph: overall complexity minimum]

A real CRM system

I know very little about commercial CRM systems. I have been very doubtful about them ever since they appeared on the marketplace. The reason is that customership is not a set of attributes on a company or a person. In its simple form it is a relationship between a company or a person and a set of our (my) products.

When a relationship gets a bit complex, it needs an abstract event object to give it flesh. This way we can associate more things with it.

This is why a real CRM system always contains the whole product portfolio, and in a fully integrated domain-driven model also, to some extent, the whole production system. In this way CRM is always part of ERP!

I created a quite abstract model around the customership event object. Here it is:

[Diagram: Customership domain model]
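
In code the point is that customership is reified as an object of its own, linking a party to a set of products, instead of being a bundle of attributes on the customer. A minimal sketch with hypothetical names:

    # Customership is its own object: it connects a party (a person or
    # a company) to a set of our products. No products, no customership.
    class Product:
        def __init__(self, name):
            self.name = name

    class Party:
        def __init__(self, name):
            self.name = name
            self.customerships = []

    class Customership:
        def __init__(self, party):
            self.party = party
            self.products = []
            party.customerships.append(self)

        def add_product(self, product):
            self.products.append(product)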

 

The motto is: there is no customership without a product!

Event Management model

Here is a small and quite practical domain model for organizing different kinds of events, like congresses, fairs or festivals. All these events can have several parallel tracks, and each track consists of consecutive sessions. Sessions have a subject or a title, and from one to several performers. The members of the audience are called participants. A session is an event-moment type of object, and all the different roles are connected to the event through participation events. Each participant has exactly one participation event connecting the person to the event. The participation's state attribute reflects the life cycle of that event.

[Diagram: Event management domain model]
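
A minimal Python rendering of the model described above (my own sketch of the diagram, following the names in the text):

    # Event -> parallel Tracks -> consecutive Sessions; every person is
    # connected to the event through exactly one Participation event.
    class Event:
        def __init__(self, name, capacity):
            self.name = name
            self.capacity = capacity
            self.tracks = []
            self.participations = []

    class Track:
        def __init__(self, event, name):
            self.event = event
            self.name = name
            self.sessions = []
            event.tracks.append(self)

    class Session:
        def __init__(self, track, title, performers):
            self.track = track
            self.title = title
            self.performers = list(performers)
            track.sessions.append(self)

    class Participation:
        def __init__(self, event, person, role="participant"):
            self.event = event
            self.person = person
            self.role = role
            self.state = "registered"  # life-cycle state of the participation
            event.participations.append(self)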

I am trying to implement this model in an event management application. The application will provide the means to register for an event, and it will confirm the registration if there is capacity left.

I am implementing this with Python, and I am currently testing the ZODB OO database with it. I am still looking for a web framework. In my first tests I used a tkinter GUI. I like the lightweight but OO structure of Python; it reminds me a lot of Smalltalk, with its dynamic OO variable binding.
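
The confirmation rule I have in mind could be sketched like this (building on the Event and Participation classes above; the waiting-list policy is my own assumption):

    def register(event, person):
        """Create a participation; confirm it only if capacity remains."""
        confirmed = sum(1 for p in event.participations
                        if p.state == "confirmed")
        participation = Participation(event, person)
        if confirmed < event.capacity:
            participation.state = "confirmed"
        else:
            participation.state = "waiting"
        return participation

    # A two-seat event confirms two registrations and queues the third.
    demo = Event("Congress demo", capacity=2)
    print(register(demo, "Alice").state)  # confirmed
    print(register(demo, "Bob").state)    # confirmed
    print(register(demo, "Carol").state)  # waiting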