It's only a theory?

I am a scientist by training, and in discussing bodies of knowledge both in science and in software measurement I often run up against the "But it's only a theory" argument.  Let's get this clear: there are at least two definitions of the word theory.  One is the everyday sense of an idea that is unproven (hence the cheap but wrong-headed trick often used by deniers of the Theory of Evolution).  The second, scientific, meaning is that a theory "refers to a comprehensive explanation of an important feature of nature supported by facts gathered over time.  Theories also allow [users] to make predictions about as yet unobserved phenomena."  The definition in quotes is from the American National Academy of Sciences.  It is one of the simpler definitions, but it sums things up rather nicely.  I have substituted "users" for "scientists" because I think the definition has a valid application to software performance.

We now have a Theory of Software Performance covering the whole software lifecycle, from thought to closedown.  Over the last thirty years our industry has been under the measurement spotlight, and we now have a considerable body of knowledge which enables us to predict the success or otherwise of our projects and to measure the health of our products.  So sound is this knowledge that predictive models are available.  For estimating there are tools on the open market, including SEER™ and of course CoCoMo, which is available to all.  Backing these up are benchmark databases, including our own, where data are available for comparison against the performance of development and support teams.  Good process-driven risk management, such as that provided by De-Risk™, enables monetary quantification of risk, adding further opportunities to control projects and programmes.  Add to this analysis tools such as CAST™, which provide us with a clear diagnosis of the health of existing software.  So fundamental is measurement to the Theory of Software Performance that metrics are incorporated into standards such as CMMI at the most basic levels of maturity.

Yet every day I come across organisations which subscribe to the "only a theory" argument, describing metrics as an overhead, or simply ignoring the facts and throwing money at problems.  Here are a few examples:
Client process failure.  The death-march project, where the team has been working 60-70 hour weeks for months and the project has missed all its deadlines.  Despite clear metrics showing that the project is undeliverable, the management view is that if "we all pull together" somehow, magically, all will turn out for the best.  There has been a clear failure of process from end to end, and the client hopes that by throwing money at it the problem will go away, yet a glance at the data would show otherwise.  Not only will problems remain and grow during development, but the downstream debt is likely to be enormous as defective code is fixed.  How do we know this?  We (the industry) have the data and we can predict it.

Development process failure.  Established organisations complain that their development projects always over-run on cost and schedule, and yet when asked they collect no effort data for tasks, either weekly or at the end of the project, that would enable them to improve predictability.  Add to this no view of size, and schedules guessed at by developers and project managers, and the results are bound to be chaotic.  How do we know this?  We have the data and we can predict it.

Joint process failure.  A third group always delivers on time and on budget.  "We are the good guys," I hear them say.  "We monitor our spend and we never overrun on cost or time."  They puff out their chests, and many management teams pat them on the head and give them bonuses, but a quick look under the bonnet indicates that things may not be so rosy.  I have had a programme manager tell me that for years all of a programme's projects were delivered to within three per cent of the detailed estimates - estimates produced a year before delivery.  Looking at the data, these roughly 1,000 function point projects were delivered with up to 80 change requests submitted after the end of design, some arriving during user acceptance testing.  Funnily enough, these change requests seldom impacted the total budget, unless client senior management introduced major changes or cut budgets mid-year.  Clearly, potential overspends resulted in de-scoping and underspends resulted in added functionality to mop up the budget.  This is great if you want to spend a client's budget, and they may want you to do just that, but as we know, late change drives up defects, increases complexity, threatens schedules and decreases productivity.  How do we know this?  You've guessed it... We have the data and we can predict it.
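To make "we can predict it" a little more concrete, here is a minimal sketch of the kind of model that has been in the public domain for decades: the Basic CoCoMo effort and schedule equations mentioned above.  The coefficients are the published Basic CoCoMo constants; the project sizes and the "organic" project class are purely illustrative assumptions of mine, not data from any of the examples.

```python
# Minimal sketch: Basic COCOMO effort and schedule prediction.
# The coefficients are the published Basic COCOMO constants (Boehm);
# the project sizes below are illustrative assumptions, not real data.

# (a, b) give effort in person-months, (c, d) give schedule in months.
COEFFICIENTS = {
    "organic":       (2.4, 1.05, 2.5, 0.38),
    "semi-detached": (3.0, 1.12, 2.5, 0.35),
    "embedded":      (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc: float, mode: str = "organic"):
    """Return (effort in person-months, schedule in months, average staff)."""
    a, b, c, d = COEFFICIENTS[mode]
    effort = a * kloc ** b      # person-months
    schedule = c * effort ** d  # elapsed months
    return effort, schedule, effort / schedule

if __name__ == "__main__":
    for kloc in (10, 50, 200):  # hypothetical project sizes in KLOC
        effort, months, staff = basic_cocomo(kloc, "organic")
        print(f"{kloc:>4} KLOC: ~{effort:5.0f} person-months "
              f"over ~{months:4.1f} months (~{staff:4.1f} average staff)")
```

The point is not the particular numbers but that a repeatable, arguable prediction exists at all - which is exactly what the "only a theory" organisations above choose to do without.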
With all of these examples it is impossible to measure effectively the value of the software development organisation, and all the behaviours I illustrate tend to increase what has been called the technical debt of the organisation.  This builds up as a result of poor requirements definition, multiple changes and incomplete testing, resulting in poor-quality code which must be fixed in the future.  The resultant total cost of ownership is higher than necessary, but worse still, it is unpredictable.  Development projects may deliver to a cost, which may not be known until after go-live, but the downstream technical debt built up as a result of defective code can have measurable economic consequences for the organisation and beyond.  Failure to deliver banking systems on time leads to regulatory fines; holiday booking system failures affect holidaymakers, hoteliers and the holiday firms themselves.  Government failings are well known.  For example, the replacement Child Support Agency system in the UK was in the headlines for a number of years as a failure.  The system was fundamentally unstable.  The Parliamentary Public Accounts Committee investigated, and it was shown that the developers had been forced to accept 2,500 change requests in a three-year project.  It was clear that, when asked, the Agency head had no idea just how out of control his requirements team had been.  The result was a debt visible to the public, but what was not visible was the cost of fixing the resultant mess.  That, I suspect, took several years and a lot of money.  The head of the CSA resigned.

So why do parts of our industry still persist in seeing non-financial measurement as an awkward overhead?  Answers on a postcard please.
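A footnote for the numerically inclined: none of the figures below come from the cases above (apart from the 80 late change requests); they are placeholder assumptions.  The arithmetic simply shows how quickly downstream debt can be put into money terms once an organisation collects even basic data on delivered size, defect density and cost to fix.

```python
# Deliberately crude sketch of quantifying downstream technical debt.
# Every parameter here is an assumption chosen for illustration only;
# a real exercise would use the organisation's own benchmark data.

delivered_size_fp = 1000      # delivered size in function points (assumed)
defects_per_fp = 0.05         # residual defects per function point (assumed)
late_change_requests = 80     # changes accepted after design (from the example above)
extra_defects_per_cr = 0.5    # additional defects injected per late change (assumed)
cost_to_fix_pre_release = 500    # cost per defect fixed before release (assumed, currency units)
cost_to_fix_in_production = 5000 # cost per defect fixed in production (assumed, currency units)

baseline_defects = delivered_size_fp * defects_per_fp
late_change_defects = late_change_requests * extra_defects_per_cr
total_residual = baseline_defects + late_change_defects

debt_if_fixed_now = total_residual * cost_to_fix_pre_release
debt_if_deferred = total_residual * cost_to_fix_in_production

print(f"Residual defects shipped:        {total_residual:.0f}")
print(f"Debt if fixed before release:    {debt_if_fixed_now:,.0f}")
print(f"Debt if deferred to production:  {debt_if_deferred:,.0f}")
```

The numbers are invented, but the shape of the calculation is not: once size, change and defect data are collected, technical debt stops being a vague worry and becomes a figure a finance director can argue about.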
Written by Alan Cameron at 10:15
"It's frustrating that there are so many failed software projects when I know from personal experience that it's possible to do so much better - and we can help." 
- Mike Harris, DCG Owner

Subscribe to Our Newsletter
Join over 30,000 other subscribers. Subscribe to our newsletter today!