Why are use cases evil?

The previous post about databases and OO, with its headline saying "…also evil", actually referred to this text, so their order is wrong. Anyhow, here it is now for you.

Well, the headline is not the whole truth. First, this is not only about use cases but about all procedural abstractions of behavior. These come under many names in addition to use cases: activity models, process models, rule models, etc. Another fact is that they are not intrinsically evil. Actually, they are just one way of abstracting behavior. What makes them evil in the context of modeling for application development is that they yield a solution that is roughly ten times worse.

From the early days of OOA (the late 80s and early 90s) it was crystal clear that procedural and object descriptions of behavior are strikingly different. There is no right or wrong here. They are simply the same thing seen from completely different viewpoints: orthogonal abstractions of the same thing.
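To make the contrast concrete, here is a minimal sketch of my own (a hypothetical order-shipping example, not taken from any real system): the same behavior first as a procedural use-case script driving passive data, then as collaborating objects that carry their own behavior.

    // Procedural view: one use-case script owns every step; the data is passive.
    class OrderData { int quantity; boolean shipped; }
    class StockData { int count; }

    class UseCaseScript {
        static void shipOrder(OrderData order, StockData stock) {
            if (stock.count < order.quantity)
                throw new IllegalStateException("out of stock");
            stock.count -= order.quantity;
            order.shipped = true;
        }
    }

    // Object view: the same behavior as a collaboration; each object is
    // responsible for its own part, and nobody owns the whole script.
    class Stock {
        private int count;
        Stock(int count) { this.count = count; }
        void reserve(int quantity) {
            if (count < quantity) throw new IllegalStateException("out of stock");
            count -= quantity;
        }
    }

    class Order {
        private final int quantity;
        private boolean shipped;
        Order(int quantity) { this.quantity = quantity; }
        void ship(Stock stock) {   // Order asks Stock to reserve; Stock decides how
            stock.reserve(quantity);
            shipped = true;
        }
    }

Both versions do the same thing; they are orthogonal descriptions of it, which is exactly why they don't mix.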

The most significant thing that created the object paradigm was exactly this difference. This is a paradigm issue, and thus the difference has a strongly antagonistic nature: you must choose, and there is no way of mixing the two together.

The big step toward a deeper understanding of simulation modeling was the realization that object collaboration yields a model an order of magnitude simpler than the corresponding procedural one. What is the cause of this difference? The root of the explanation comes from the biologist Stuart Kauffman, and it is a question of how to manage complexity. His finding was that the most complex organisms (and an organism is always both structure and behavior at the same time) live in the middle ground, the gray area between chaos and rigid order. The conclusion is that you need both ingredients to accomplish complexity.

In the case of our models (and also our programs), "wild" or "free" behavior (in programming, a method or its equivalent) represents the flexible, chaotic side, while the class definition, the class boundary with behavioral encapsulation, and object collaborations represent the order ingredient. This mixture is what collects the jackpot: it lets us model simulations roughly ten times more complex than we could in the realm of pure chaos. The fundamental aim of avoiding repetition of the same behavior over and over again in the model is achieved by the class definition, or concept, which ultimately limits an object's possible behaviors. In other words, the number of tricks a single object can perform is limited, and this is naturally reflected in the implementation: the number of methods (if designed right) stays limited. The object model is the only way (at least that I know of) to achieve this.
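To illustrate that limiting effect, a small sketch (a hypothetical Account class, again my example rather than anything from a real model): the class boundary fixes the object's complete repertoire once, and every collaboration in the model reuses those few methods instead of restating the rules.

    // The class definition is the order ingredient: an Account has a small,
    // fixed repertoire of behaviors, each written exactly once.
    public class Account {
        private long balance;

        public void deposit(long amount) { balance += amount; }

        public void withdraw(long amount) {
            if (amount > balance)
                throw new IllegalStateException("insufficient funds");
            balance -= amount;
        }

        // The chaos ingredient lives in the collaborations: they may combine
        // the repertoire freely, but they never duplicate the rules themselves.
        public void transferTo(Account other, long amount) {
            withdraw(amount);
            other.deposit(amount);
        }
    }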

We can (and actually have, several times) come across a subset of reality that we would like to reflect in an application but which is too complex for us. There is of course a limit to what we can do about this. The real limitation at the bottom of it all is our computer: currently it is an implementation of a deterministic Turing machine, and we know the limitations of that from mathematics.
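For clarity, "deterministic" here means that every (state, symbol) pair has exactly one successor, so a run is completely fixed by its starting configuration. A toy sketch (a one-rule machine, purely illustrative):

    import java.util.Map;

    // Toy deterministic Turing machine: delta maps each (state, symbol)
    // pair to exactly one transition, so every run is fully determined.
    public class ToyTuringMachine {
        record Transition(int nextState, char write, int move) {} // move: -1 left, +1 right

        public static void main(String[] args) {
            // Single rule: in state 0 reading '0', write '1', move right, halt (-1).
            Map<String, Transition> delta = Map.of("0:0", new Transition(-1, '1', 1));
            char[] tape = "0".toCharArray();
            int state = 0, head = 0;
            while (state != -1) {
                Transition t = delta.get(state + ":" + tape[head]); // unique successor
                tape[head] = t.write();
                head += t.move();
                state = t.nextState();
            }
            System.out.println(new String(tape)); // prints "1"
        }
    }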

When we want to exceed these limitations and create learning applications, we must move toward nondeterministic Turing machines. The first difficulty here is that we don't have the hardware yet. Only when we humans understand enough of the brain's structure and behavior can we copy it into a machine. One further consequence is, of course, that such systems cannot be programmed; they must be taught! The final, inevitable consequence is that these applications can make mistakes, just like humans!
