Rehabilitation of the Object-Oriented Paradigm

For years I have sadly followed the slow corrosion of OO knowledge in general. What I want to achieve is the full rehabilitation of OO technology. I am utterly convinced that the object paradigm is the highest achievement in the theory of application development, and my long working experience completely supports this conviction. I am referring here to the great work that runs from ParcPlace Smalltalk through the dawn of OOA by Grady Booch, Jim Rumbaugh, Peter Coad, Rebecca Wirfs-Brock and many, many others. The core of this great invention is crystallized in Grady Booch's Object-Oriented Design (1991), pages 15 and 16:

Actually, this is a trick question, because the right answer is that both views are important: the algorithmic view highlights the ordering of events, and the object-oriented view emphasizes the agents that either cause action or are the subjects upon which these operations act. However, the fact remains that we cannot construct a complex system in both ways simultaneously, for they are completely orthogonal views. We must start decomposing a system either by algorithms or by objects, and then use the resulting structure as the framework for expressing the other perspective. Our experience leads us to apply the object-oriented view first because this approach is better at helping us organize the inherent complexity of software systems, just as it helped us to describe the organized complexity of complex systems. Object-oriented decomposition yields smaller systems through the reuse of common mechanisms, thus providing an important economy of expression. Object-oriented systems are also more resilient to change and thus better able to evolve over time, because their design is based upon stable intermediate forms. Indeed, object-oriented decomposition greatly reduces the risk of building complex software systems, because they are designed to evolve incrementally from smaller systems in which we already have confidence. Furthermore, object-oriented decomposition directly addresses the inherent complexity of software by helping us make intelligent decisions regarding the separation of concerns in a large state space.

The following illustration pictures the two views that Booch talks about:

[Figure: procedural vs. collaboration views, with panels "Functional decomposition" and "Agents' collaboration"]

The fundamental reason why the model becomes so much simpler, even though it contains the same semantics, is its close similarity to reality. We humans understand reality as a massive collection of 3D functional entities; I will use the term agent for these entities. Agents are autonomous things that interact with their neighbours. When we analyze the world this way, the functionality, and thus also the responsibilities, of these agents naturally remain quite small and compact, as long as the model stays humble and honours the reality. In fact this is the only way, at least that I know of, in which every function of a large collaborative community of agents gets defined exactly once. This is actually the essence of the goal of all "reuse", even though that goal is often badly formulated.
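To make the contrast concrete, here is a minimal sketch in Java; the class and method names are my own illustration, not from Booch. The first fragment shows the algorithmic view, where one routine drives passive data, and the second the agent view, where each object carries a small responsibility of its own.

    // Algorithmic view: a central routine drives passive data;
    // the ordering of events dominates the design.
    final class ProceduralBilling {
        static double invoiceTotal(double[] linePrices, double taxRate) {
            double sum = 0;
            for (double p : linePrices) sum += p;
            return sum * (1 + taxRate);
        }
    }

    // Agent view: each object owns one small responsibility and
    // collaborates with its neighbours instead of being driven.
    interface LineItem {
        double price();                       // an agent knows its own price
    }

    final class Invoice {
        private final java.util.List<LineItem> items = new java.util.ArrayList<>();
        private final double taxRate;

        Invoice(double taxRate) { this.taxRate = taxRate; }
        void add(LineItem item) { items.add(item); }

        double total() {                      // asks its collaborators, adds its own rule
            return items.stream().mapToDouble(LineItem::price).sum() * (1 + taxRate);
        }
    }

Note how in the second fragment the knowledge of a price is defined exactly once, in the agent that owns it.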

OO application development process

If we try to model a big and complex application implementation this way, which in the end of course has to be very detailed, we end up with unmanageably complex models. This was exactly the mistake we made in the early days of OO. But this growth of complexity can be restricted, and finally managed, with a clever organisation of artefacts. The first step towards solving this problem is to notice the separation of two concerns:

  1. Business logic.
  2. User interaction and the user’s workflow.

These aspects are naturally quite orthogonal: they come from the same world, but at the same time they are rather independent of each other. The solution to this problem has two equally important parts:

  • A logical 3-tier architecture to separate the concerns.

  • Starting the design by first creating the abstract domain model representing the business logic, and keeping it at all times completely unaware of the rest of the world.

If you look at the literature mentioned above, you will find that it does neither, or where it does cover the first point, it starts from application requirements. This is an error that leads to exponential growth of complexity. In my experience, only a very small minority of people deeply understand the content and true meaning of the concepts "abstract" and "abstraction". Very briefly, abstraction is the reduction of the details of a concept. This reduction is not an easy one, however, because we should remove the less important aspects of the concept, and the tricky question is which elements are less important. The selection criterion is the context in which we consider the concept. When we limit the search strictly to the domain, the important concepts emerge relatively easily. It is much more difficult to decide the abstract responsibilities of these concepts; here the stress is on the word abstract. The aim is to distribute the behaviour as evenly across the network of concepts as possible. Here we have two guidelines as well: we have to try to reflect the reality, and at the same time lift the level of abstraction by placing behaviour in the objects that hold most of the needed resources themselves, or very near, to reduce unnecessary communication between objects. After this the reality and the model are no longer one-to-one. The end result is an object model (a class diagram and a set of collaboration diagrams) that describes the whole business behaviour at the chosen level of abstraction and thus gives us a simulation model of the business core. At this point the middle-tier analysis is done; the 3-tier picture below summarises the situation.
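As a minimal sketch of what "completely unaware of the rest of the world" can mean in code (the Account and Entry classes are hypothetical examples of mine, not taken from any of the books cited): the domain tier imports nothing from UI or database technology and expresses only business responsibilities.

    // Hypothetical domain-tier sketch: no UI, no persistence, no framework
    // imports; only business concepts and their responsibilities.
    import java.util.ArrayList;
    import java.util.List;

    record Entry(long amountCents) {}

    class Account {
        private final List<Entry> entries = new ArrayList<>();

        void deposit(long cents) { entries.add(new Entry(cents)); }

        void withdraw(long cents) {
            if (balance() < cents)             // the business rule lives here
                throw new IllegalStateException("insufficient funds");
            entries.add(new Entry(-cents));
        }

        long balance() {
            return entries.stream().mapToLong(Entry::amountCents).sum();
        }
    }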

[Figure: the logical 3-tier architecture]

As I already emphasised, the dependencies here are asymmetric. A very important point is that the middle tier is completely independent of the outer tiers, while these depend heavily on the middle tier. The business logic tier contains only the assimilated aspects of the business rules derived from the business objectives. This leaves a lot of room for the application tier to implement this abstraction. All dependencies on user-interface or environment technologies are handled in the application tier. The third tier is a purely technical isolation of persistence, and one could argue that it should be left out of this picture completely.
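One hedged way to picture these asymmetric dependencies in code, reusing the hypothetical Account from the sketch above (the repository names are likewise mine): the domain tier owns the persistence abstraction and the technical tier implements it, so every arrow points towards the middle.

    // Domain tier: defines what it needs, knows nothing about databases.
    interface AccountRepository {
        void save(Account account);            // Account from the sketch above
    }

    // Persistence tier: depends on the domain, never the other way around.
    class JdbcAccountRepository implements AccountRepository {
        @Override
        public void save(Account account) {
            // SQL, connections and other technicalities stay isolated here
        }
    }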

Development process

[Figure: the iterative and incremental development process]

The sequence of the application development is the following. The first phase is the domain analysis described above. In the second phase we implement the abstract domain model to the required detail; in most cases this means adding all the required attributes and the persistence, which in the case of a new database is quite straightforward. In the third phase we start to look at the whole from the user's point of view: we define the processes and the use cases, and finally implement them utilising the services of our domain. The iterative and incremental (in one word, agile) nature of the implementation of the application layer is illustrated in the diagram above. Each use case can be considered one cycle here. First we have to design the work process, which starts from designing the use cases. Each use case can then be developed as a part of the application quite independently of the others. The implementation uses the domain model from the previous phase as it is. As the work proceeds, additions, refinements and corrections are made to the implementation domain model, and it matures during the application development. It is necessary to question and doubt its content and structure, and to change them whenever a new aspect appears.
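A small, hypothetical sketch of the third phase, building on the Account example above: the application tier implements one use case as a workflow over domain services, while the business rules stay inside the domain objects.

    // Hypothetical application-tier use case: one cycle of the iteration.
    class TransferFundsUseCase {
        private final Account source;
        private final Account target;

        TransferFundsUseCase(Account source, Account target) {
            this.source = source;
            this.target = target;
        }

        // The application tier sequences the user's workflow; the balance
        // rule itself is enforced inside the domain object.
        void execute(long cents) {
            source.withdraw(cents);
            target.deposit(cents);
        }
    }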

Lost & Found OO

The story of OO until today is both depressing and promising at the same time. The key to the whole story is minimizing complexity. The story began somewhere in the 1960s (or a bit earlier) in Norway, where the SIMULA programming language was developed. The language was aimed at creating simulation models. Simula (as far as I know) already had the idea of the class and the object, which had public services and their internal implementation. This two-layer structure reduces complexity and made close and natural models of the real world possible.
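A minimal sketch of that two-layer idea in a modern OO language (my own illustration, not Simula code): the public services form the outer layer, while the hidden state and helpers form the inner one and may change freely.

    // Outer layer: public services, the only surface the world sees.
    class Thermostat {
        private double targetCelsius = 20.0;   // inner layer: hidden state

        void setTarget(double celsius) { targetCelsius = clamp(celsius); }

        boolean shouldHeat(double roomCelsius) {
            return roomCelsius < targetCelsius;
        }

        // Inner layer: implementation detail, free to change without
        // breaking any client of the public services above.
        private double clamp(double c) {
            return Math.max(5.0, Math.min(30.0, c));
        }
    }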

The second step was the Xerox ParcPlace project and the dawn of Smalltalk. Alan Kay and his colleagues were strongly influenced by Simula. They created a language that made it possible to write a program with a very natural mapping to reality: call it simulation. The model behaved like the real world, but as an abstraction of it. At the turn of the decade from the 1980s to the 1990s, OO extended from programming to design and analysis. This meant the breakthrough of real OO. The pioneers, Grady Booch and Peter Coad among them, wrote about the qualitative change from functional decomposition to object collaboration, which meant an order-of-magnitude simplification of the model itself. This is how far we got on this road.

Then the economic climate went through a rapid change: first an explosive boom and then a dramatic collapse. These changes, together with the emergence of the internet, scattered the small mass of expertise before it had grown to critical size. The change from Smalltalk to its fake copy called Java corrupted the implementation possibilities. The explosive spread of Java could not possibly carry the fundamental and complex change in thinking, and a lot of people started to use Java the way they had used 3GLs such as Pascal and especially C. This began to muddy the water of the originally so clear and refreshing spring of OO. These people did not know or understand the crucial OO principles behind the languages, and so they got into trouble by using the languages in the wrong way.

There were attempts to ease their path to OO. One of the saddest examples was Ivar Jacobson's "use cases", which actually misled people further by suggesting that OO is easy after all: "NOW I can understand this." Peter Coad wrote a column about this in JOOP, titled "Use Cases considered harmful", and that article is still completely valid. When OO started to gain momentum, the next hit came, this time from the legacy people. They had a strong need to insist that they were not obsolete and that OO was actually nothing new. This group reintroduced the concept of the component and then claimed that their old C or COBOL applications had components that were just like objects. Of course this was completely untrue, but there were plenty of people in the field whose various interests made them support the fake.

The next blow came from an unexpected direction and was called patterns. Patterns as such are in principle a good thing, but at the same time they are a very tricky one. The sad thing about the Gang of Four book was that it shifted the focus to small details of the GUI instead of the domain implementation, where it should definitely have gone. With that, the idea of the simulation model as the complexity-minimal shortcut from reality to model was lost. The idea of object encapsulation still remained, but it was substantially weakened.

Then came the web bubble, which burst, and a lot of accumulated experience disappeared. The final loss happened when, OO apparently being too complicated to attract the older people with a functional-decomposition and relational-database background, all OO aspects were denied. In this way, even though people use OO languages, they have actually degenerated to two-layer architectures with a "main program", and the behaviour that should have been constructed as a collaboration of objects, with characteristics and behaviour borrowed directly from reality, has been diminished to mere representatives of rows in a relational database. Today's processes and SOA, with their stateless services and "orchestration", are exactly this.
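To illustrate the degeneration described above (both classes are hypothetical examples of mine): the first class is a mere row representative, the second is the same concept carrying its real-world behaviour.

    // Degenerated style: an object that is just a row in a relational table;
    // all behaviour lives somewhere else, in a "main program".
    class CustomerRow {
        public long id;
        public String name;
        public double discountRate;
    }

    // Genuine OO style: the same concept owning its behaviour.
    class Customer {
        private final String name;
        private double discountRate;

        Customer(String name, double discountRate) {
            this.name = name;
            this.discountRate = discountRate;
        }

        // The object itself answers questions about its own state.
        double discountedPrice(double listPrice) {
            return listPrice * (1.0 - discountRate);
        }
    }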

One could not get further from Booch's and Coad's OO than this; actually, nothing is left of it.

Genuine OO, with a logical 3-tier architecture and a clear separation of application (i.e. user workflows) and domain logic, lies at the absolute minimum of the solution space for all behaviour that is equivalent to a deterministic Turing machine. I hope we are near the day when the lost treasure is found again.

See: Grady Booch, Object-Oriented Design, 2nd edition, pp. 16-20(?), and

Peter Coad, Object-Oriented Analysis, chapter 1.3, Analysis Methods (pp. 18-36).