Monday, October 26, 2009

Comments on "The IT Complexity Crisis" by Roger Sessions

Roger Sessions has written a paper on IT Complexity. When I met with Roger in London earlier this year (at the EAC) we had a short conversation, and it was clear we agreed on numerous things. This paper is useful because it provides a basis for more detailed discussion.

Roger’s paper:

Comments:

  1. The cost of complexity figure is a little suspect to me. The primary issue is basing the numbers on government data – government projects have a high probability of failure, at least in the UK. More importantly, I think complexity has more impact on full life cycle costs. Direct IT cost is one issue, but lost business opportunity is probably the major cost, and that is hard to estimate. However we can all agree it’s a big number.

  2. I am always wary of “functional decomposition”. It’s notoriously imprecise. At CBDI we recommend a combination of capabilities and business types (data). The outcome of Roger’s example SIP analysis looks similar to what would emerge from structured business type analysis, but in my experience the latter method is more reliable. More importantly, I am vitally interested in developing an architecture that is a) demonstrably stable where it needs to be (managing business types such as customer, product and so on) and b) agile by design where it needs to be (managing process behaviour, transient data and events). I accept this may appear to be a form of religious debate, and we may have to agree to disagree, but I am interested in having the discussion.

  3. But regardless of how you arrive there, Roger and I are completely in agreement on the need for components. The component must be completely encapsulated, own its own data store(s) and be the sole provider of owned services. Roger’s components appear to be capabilities. No problem there, because in the CBDI reference architecture there are options. But I would be looking to separate out layered behaviour so that business type components (stable) are separate from process and event components (unstable, subject to change). I sketch this separation in the first example below, after this list.

  4. The computation of SCUs (standard complexity units) is interesting. Roger’s methodology will lead you towards larger components with more internal dependencies. What does the internal architecture look like, Roger? The dependencies are still there. I am not concerned if the number of dependencies is very high, PROVIDED the dependencies are all well formed, reusable services and operations. Then it’s simply a question of ensuring you have good management (life cycle and run time) systems, which we do know how to do. I am more concerned by poor reference architecture (bad patterns, convergence of behaviours that should really be separate, bad or non-existent contracts . . . ). The second example below, after this list, gives a rough feel for how the arithmetic behaves.

  5. Finally, I put more faith in principles, patterns and reference architecture than in numbers. Numbers can lie, and they often do, whereas patterns and reference architecture are the distillation of good (and bad) experience and provide intelligent people with good guidance to become even more intelligent.

  6. At CBDI we have developed methodology and process for modernization. We see real demand for rationalization and modernization of existing systems (full life cycle costs). The essence of the approach is to a) create the SOA façade and b) componentize. You might like to take a look at our method for CA Gen; it’s just one method for one environment, but it illustrates that we practice what we preach. There’s a SlideShare there that describes the approach.
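
To make the layering in point 3 a little more concrete, here is a minimal sketch. The names (CustomerService, CustomerComponent, OrderProcessComponent) are invented for illustration and are not drawn from Roger’s paper or the CBDI reference architecture; the point is only the shape – a stable business type component that owns its data and is the sole provider of its services, and a separate, change-prone process component that depends on it only through a contract.

```java
// Illustrative sketch only: hypothetical names, showing the separation discussed in point 3.

// Stable business type component: sole owner of customer data,
// exposing behaviour only through a well-formed service contract.
interface CustomerService {
    String createCustomer(String name);          // returns a customer id
    String getCustomerName(String customerId);
}

final class CustomerComponent implements CustomerService {
    // The data store is private to the component; no other component touches it.
    private final java.util.Map<String, String> customerStore = new java.util.HashMap<String, String>();
    private int nextId = 1;

    public String createCustomer(String name) {
        String id = "CUST-" + nextId++;
        customerStore.put(id, name);
        return id;
    }

    public String getCustomerName(String customerId) {
        return customerStore.get(customerId);
    }
}

// Volatile process component: orchestrates business type services,
// owns only transient state, and is expected to change frequently.
final class OrderProcessComponent {
    private final CustomerService customers;   // dependency on the contract, not the implementation

    OrderProcessComponent(CustomerService customers) {
        this.customers = customers;
    }

    String placeOrder(String customerId, String product) {
        String name = customers.getCustomerName(customerId);
        // Process-level behaviour lives here and can change without
        // disturbing the stable business type component.
        return "Order for " + product + " accepted for " + name;
    }
}
```

The dependency runs one way, from the unstable process layer to the stable business type layer, and always through the contract.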
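And on point 4, a rough feel for how a complexity number of this kind behaves. To be clear, this is not Roger’s SCU formula, just a plausible shape for such a metric: assume complexity grows non-linearly with both the number of functions in a component and the number of connections between components, using an exponent of roughly 3.1 (the figure that follows from Robert Glass’s observation that a 25% increase in functionality roughly doubles complexity). The function and connection counts below are entirely made up.

```java
// Illustrative only: not Roger's SCU formula. Assumes complexity grows as roughly the
// 3.1 power of both function count and connection count (exponent derived from Glass's
// observation that 25% more functionality roughly doubles complexity).
public class ComplexitySketch {

    static final double GLASS_EXPONENT = Math.log(2) / Math.log(1.25); // about 3.1

    // Complexity contributed by one component with the given function and connection counts.
    static double componentComplexity(int functions, int connections) {
        return Math.pow(functions, GLASS_EXPONENT) + Math.pow(connections, GLASS_EXPONENT);
    }

    public static void main(String[] args) {
        // Hypothetical system of 12 atomic business functions.

        // Design 1: one large component holding all 12 functions, no external connections.
        double monolith = componentComplexity(12, 0);

        // Design 2: three components of 4 functions each, each connected to the other two.
        double partitioned = 3 * componentComplexity(4, 2);

        System.out.printf("One component of 12 functions  : %.0f units%n", monolith);
        System.out.printf("Three components of 4 functions: %.0f units%n", partitioned);
        // With these invented numbers the partitioned design scores several times simpler.
        // Where the real optimum lies depends on how functions and connections actually
        // fall out - which is what a method like SIP sets out to decide.
    }
}
```

Whatever the exact formula, the numbers are very sensitive to where the boundaries fall, which is why the quality of the reference architecture and the contracts matters at least as much as the count.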

Good stuff Roger, fully support the component approach. Happy to dialog in more detail.

6 comments:

  1. David,
    Thanks for the excellent discussion. I'm looking forward to many more discussions on this important topic. Let me respond to your specific comments.

    #1: The failure numbers come from the U.S. public sector, but they are consistent with the private sector Standish numbers (not that I am a great fan of their methodology.) Also, private sector CIOs with whom I have discussed this feel the numbers are reasonable from their perspective. In any case, your comment that "lost business opportunity is probably the major cost" is consistent with my analysis.

    #2: I agree that functional decomposition is imprecise. That is why the SIP (Simple Iterative Partitions) methodology uses a balance of decomposition and synergistic recomposition. Decomposition is used to find the atomic business functions, but synergistic recomposition is used to determine where they should live (very important, from a complexity perspective). The advantage of this approach is that the results can be verified. The problem with most structured business type analyses is that they are not verifiable (and thus not reproducible). If you have two different business analysts conducting a structured business analysis, they are likely to come up with quite different results. There is no way to know which, if either, is the best analysis. (The general shape of this grouping step is sketched after my numbered responses below.)

    #3: My components are capabilities. In fact, I call them atomic business capabilities (ABCs). However I also advocate for the SIP architecture projecting straight through to the technical architecture. It is at this level that we see components materialize. I have a whole theory about how capabilities should project onto components, much of which was covered in my book, "Software Fortresses" which was actually written before my "Simple Architectures" book on SIP. I think that we are in agreement on how components should be designed, but we should discuss this more.

    #4: SIP does not generally lead you to larger components with more internal dependencies. It leads you to the optimal components with respect to complexity. It balances the complexity of having more functionality with the complexity of having more connections. Sometimes we reduce complexity by adding more functions into a component. Sometimes we reduce complexity by splitting up the component. SIP is designed to lead us to the best possible solution (from a complexity perspective).

    #5: Reference architectures are good, as far as they go. But they are generally either too coarse grained to be useful (as with FEAF) or too fine grained to be useful (as with patterns). As far as principles go, I think we agree here. My overriding principle is simplicity. Simplicity trumps every requirement except the functional requirements (and sometimes, even some of those). Of two architectures that both solve the same business problem, the simpler one will require fewer resources to build and maintain. The simpler architecture is less likely to fail. For me, these points make it better.

    #6: I look forward to hearing more about your methodology and process.
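
    Just to give a flavour of the recomposition step in #2: one way to picture it is as grouping atomic functions into partitions by a pairwise synergy relation, so that functions which belong together end up living in the same partition. The sketch below is purely illustrative – the function names and synergy pairs are made up, and this is not the SIP algorithm itself, only the general shape of "decompose into atomic functions, then recompose by synergy".

    ```java
    import java.util.*;

    // Purely illustrative: invented functions and synergy pairs, not the SIP algorithm itself.
    public class RecompositionSketch {

        public static void main(String[] args) {
            List<String> functions = Arrays.asList(
                "CheckOutBook", "CheckInBook", "TrackLoan",
                "RegisterBorrower", "UpdateBorrower",
                "CatalogueTitle", "SearchCatalogue");

            // Pairs judged synergistic (neither is useful to the business without the other).
            String[][] synergies = {
                {"CheckOutBook", "CheckInBook"},
                {"CheckOutBook", "TrackLoan"},
                {"RegisterBorrower", "UpdateBorrower"},
                {"CatalogueTitle", "SearchCatalogue"}
            };

            // Union-find: each function starts in its own partition; merge synergistic pairs.
            Map<String, String> parent = new HashMap<String, String>();
            for (String f : functions) parent.put(f, f);
            for (String[] pair : synergies) {
                parent.put(find(parent, pair[0]), find(parent, pair[1]));
            }

            // Collect and print the resulting partitions.
            Map<String, List<String>> partitions = new TreeMap<String, List<String>>();
            for (String f : functions) {
                String root = find(parent, f);
                if (!partitions.containsKey(root)) partitions.put(root, new ArrayList<String>());
                partitions.get(root).add(f);
            }
            for (List<String> p : partitions.values()) System.out.println("Partition: " + p);
        }

        // Follow parent pointers to the representative of a function's partition.
        static String find(Map<String, String> parent, String f) {
            String p = parent.get(f);
            return p.equals(f) ? f : find(parent, p);
        }
    }
    ```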

    I hope your readers will look at the white papers themselves (http://bit.ly/3O3GMp). The topic of complexity is a critical one for all of us.

    - Looking forward to much more dialog!

  2. This complexity question is an interesting discussion that I have been following (and participating in) on Twitter. I agree in principle with Roger's premise that complexity management is core to project success, but we differ in approach. In this matter I agree with your summary - numbers can be highly misleading if the underlying effect is not properly understood. More importantly, numbers often tell you that you have a problem long after you are already aware that it exists. Prevention is far better than cure, and the basic principles of component oriented thinking should ensure that complexity is avoided rather than created and then managed.

    For this, I feel that there are certain people who are skilled in creating simple solutions to complex problems, and by identifying and hiring these people (instead of those prone to complexity) the problem can be solved at source. (I have blogged on this matter here: http://theenterprisingarchitect.blogspot.com/2009/10/simplicity-art-or-architecture.html)

    These are, after all, sound and long lived engineering principles - architecture is simply an approach that grows out of these principles.

    Looking forward to further discussion on this matter.

    Regards
    The Enterprising Architect

  3. Roger’s paper was an excellent stimulus for me to challenge some of my long-held thinking. That’s what numbers do, I guess. And I welcome the metrics approach although, as discussed above, I have some significant caveats. I will mull this over some more and revert.

    However I guess it will be no surprise to many folk that know and use the CBDI research that I remain of the opinion that a strong reference model and architecture, underpinned by policy, is the best way to capture and communicate good practice. I accept that many will judge reference models and architecture on the basis of OASIS and TOGAF. But these are patently not helpful to real world endeavors. In our work at CBDI we have developed reference models and architecture to a level of detail that (we believe) is essential to manage large-scale systems deliveries (of all flavors and hues). The reference artifacts provide detailed mapping across principles, patterns and policy and guide not just architecture but also specification, design and delivery.

    I do not expect architects to reinvent the wheel. A detailed meta class model provides a distillation of experience that allows practitioners of widely varying experience to focus on solving the business problem and deliver appropriate agility and quality characteristics. Roger’s focus in tackling complexity is to reduce dependency. My focus is to ensure that granularity is fit for purpose and that dependencies are (yes, minimized, but more importantly) properly managed with formal contracts, full encapsulation and loose coupling, and implemented with properly formed components. Numbers don’t count (sic) so much as quality architecture and design.

    If you haven’t looked at our work I do encourage you to download the base V2 CBDI Meta Model from http://cbdi.wikispaces.com/SAE+Model. This is a little out of date; we are currently in the process of delivering the SoaML-aligned V3, which will shortly enter a public review process. This is a real world, production-strength example of what’s needed to drive real projects and define UML-based deliverables that have traceability from business to code.

    The issue of complexity has always been with us. Frankly, large-scale systems are complicated beasts. It’s our task to turn inspiration into engineering, reliably and efficiently. That needs foundation stones that avoid reinventing the wheel.

    David

  4. This comment has been removed by the author.

  5. Let me respond to some of the component thinking of David and Jon.

    Jon says, "Prevention is far better than cure, and the basic principles of component oriented thinking should ensure that complexity is avoided rather than created and then managed."

    I strongly disagree with this statement. Component-oriented thinking only ensures that you will come up with components, not that those components will have the minimum complexity needed to solve the problem at hand.

    In my white paper, I give an example of an inter-library loan system and show the component-oriented design that most architects come up with. I then show a second component-oriented design that solves exactly the same problem.

    From the perspective of which is a better component-oriented design, there is no way to choose one over the other. However from the complexity perspective, there is a clear winner. One is three times the complexity of the other.

    Now you might argue that this is an artificial problem, and that if you had hired good people, as Jon suggests, they would come up with the simpler solution.

    I can assure you this is not true. I have taught my complexity workshop with many highly experienced architects using my inter-library loan system as an exercise. None have come up with the simpler solution.

    The only time you come up with the simplest possible solutions is when you
    a. focus on complexity as a driving requirement, and
    b. have a process that drives you to the minimally complex solution

    Now you might argue that I am over focusing on complexity. I disagree. As I show in my white paper, the simpler solution is not only 1/3 the complexity, it is also likely 1/3 the cost.

    The reason we care about complexity is not because of complexity per se. It is because of what complexity does. It drives up failure rates and costs. And it makes the things we want (high security, good performance, agility) much more difficult to achieve.

    Now this is not to say that complexity is the only problem we have. As David rightly points out, we need to ensure we have "formal contracts, full encapsulation and loose coupling and implemented with properly formed components." I fully agree with David here.

    My point is that all of these issues are addressed AFTER we have successfully completed an optimal project organization from a complexity perspective.

    In the SIP methodology (Simple Iterative Partitions), the complexity control part of the project occurs at the very beginning, at what might be called the pre-design phase.

    The issues David discusses are dealt with in the design phase. I suspect that David's methodology is quite good in this area. SIP is largely agnostic about what happens when we get to design, as long as what happens is good design. Of course, in the design phase you need good methodologies (and good people, as Jon suggests).

    So our ideas are all compatible. It would be interesting to work together to show in a more formal way how these ideas mesh.

  6. This comment has been removed by a blog administrator.
