I am a scientist by training, and in discussing bodies of knowledge both in science and in software measurement I often run up against the "But it's only a theory" argument. Let's get this clear: there are at least two definitions of the word theory. One is based on the concept of an idea that is unproven (hence the cheap but wrong-headed trick often used by deniers of the Theory of Evolution), but the second, scientific, meaning is that a "theory refers to a comprehensive explanation of an important feature of nature supported by facts gathered over time. Theories also allow [users] to make predictions about as yet unobserved phenomena." The definition in quotes is from the American National Academy of Sciences. It is one of the simpler definitions, but it sums things up rather nicely. I have substituted "users" for "scientists" in the definition above because I think it has a valid application in considering software performance.
We now have a Theory of Software Performance covering the whole software lifecycle, from thought to closedown. Over the last thirty years our industry has been under the measurement spotlight, and we now have a considerable body of knowledge which enables us to predict the success or otherwise of our projects, and to measure the health of our products.
So sound is this knowledge that predictive models are available. For estimating there are tools on the open market, including SEER™ and of course COCOMO, which is available to all. Backing this up are benchmark databases, including our own, where data are available for comparison against the performance of development and support teams. Good process-driven risk management, such as that provided by De-Risk™, enables monetary quantification of risk, adding further opportunities to control projects and programmes. Add to this analysis tools such as CAST™, which provide us with a clear diagnosis of the health of existing software.
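To make the point concrete, the basic COCOMO model mentioned above really is simple enough to sketch in a few lines. The coefficients below are Boehm's published values for the three classic project classes; treat this as an illustration of size-driven estimation, not a substitute for the calibrated tools the article names.

```python
# Basic COCOMO (Boehm, 1981): effort and schedule predicted from size alone.
# Coefficients (a, b, c, d) are the published values for the three
# classic project classes.
COEFFICIENTS = {
    "organic":       (2.4, 1.05, 2.5, 0.38),
    "semi-detached": (3.0, 1.12, 2.5, 0.35),
    "embedded":      (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc, mode="organic"):
    """Return (effort in person-months, schedule in elapsed months)
    for a project of `kloc` thousand delivered source lines."""
    a, b, c, d = COEFFICIENTS[mode]
    effort = a * kloc ** b       # person-months
    schedule = c * effort ** d   # elapsed calendar months
    return effort, schedule

effort, schedule = basic_cocomo(32, "organic")
print(f"{effort:.0f} person-months over {schedule:.0f} months")
```

The point is not the particular coefficients but that, given a size measure and a little historical data, effort and schedule become predictable quantities rather than guesses.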
So fundamental is measurement to the Software Performance Theory that metrics are incorporated into standards such as CMMI at the most basic levels of maturity. Yet every day I come across organisations which subscribe to the "only a theory" argument, describing metrics as an overhead, or simply ignoring the facts and throwing money at problems. Here are a few examples:
Client process failure
The death march project, where the team has been working 60-70 hour weeks for months and the project has missed all its deadlines. Despite clear metrics showing that the project is undeliverable, the management view is that if "we all pull together" somehow, magically, all will turn out for the best. There has been a clear failure of process from end to end, and the client hopes that by throwing money at it the problem will go away, yet a glance at the data would show otherwise. Not only will problems remain and grow during development, but the downstream debt is likely to be enormous as defective code is fixed. How do we know this? We (the industry) have the data and we can predict it.
Development process failure
Established organisations complain that their development projects always over-run on cost and schedule, and yet when asked they collect no effort data for tasks, either weekly or even at the end of the project, that would enable them to improve predictability. Add to this no view of size, and schedules guessed at by developers and project managers, and the results are bound to be chaotic. How do we know this? We have the data and we can predict it.
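Even a minimal estimate-versus-actual record is enough to start quantifying predictability. One standard measure from the estimation literature (my illustration, not something the organisations above use) is the mean magnitude of relative error (MMRE); the project figures below are entirely hypothetical.

```python
# Mean Magnitude of Relative Error (MMRE): a standard measure of
# estimation accuracy over a portfolio of completed projects.
def mmre(records):
    """records: (estimated effort, actual effort) pairs, e.g. person-days.
    Returns the mean of |actual - estimate| / actual across projects."""
    return sum(abs(actual - est) / actual for est, actual in records) / len(records)

# Hypothetical history of (estimate, actual) effort for three projects:
history = [(100, 120), (200, 180), (150, 210)]
print(f"MMRE = {mmre(history):.0%}")
```

An organisation that records nothing cannot even compute this single number, which is precisely why its over-runs come as a surprise every time.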
Joint process failure
A third group always delivers on time and on budget. "We are the good guys," I hear them say. "We monitor our spend and we never overrun on cost or time." They puff out their chests and many management teams pat them on the head and give them bonuses, but a quick look under the bonnet indicates that things may not be so rosy. I have had a programme manager tell me that for years all a programme's projects were delivered to within three per cent of the detailed estimates - produced a year before delivery. Looking at the data, these roughly 1,000 function point projects were delivered with up to 80 change requests submitted after the end of design, with some arriving during user acceptance tests. Funnily enough, these change requests seldom impacted the total budget, unless client senior management introduced major changes or cut budgets mid-year. Clearly, potential overspends resulted in de-scoping and underspends resulted in added functionality to mop up the budget. This is great if you want to spend a client's budget, and they may want you to do this, but as we know, late change drives up defects, increases complexity, threatens schedules and decreases productivity. How do we know this? You've guessed it... We have the data and we can predict it.
With all of these examples it is impossible to measure the value of the software development organisation effectively, and all the behaviours I illustrate tend to increase what has been called the Technical Debt of the organisation. This builds up as a result of poor requirements definition, multiple changes and incomplete testing, resulting in poor quality code which must be fixed in the future. The resultant total cost of ownership is higher than necessary, but worse still, it is unpredictable.
Development projects may deliver to a cost, which may not be known until after go-live, but the downstream technical debt built up as a result of defective code can have measurable economic consequences for the organisation and beyond. Failure to deliver banking systems on time leads to regulatory fines; holiday booking system failures affect holidaymakers, hoteliers and the holiday firms themselves. Government failings are well known. For example, the replacement Child Support Agency system in the UK was in the headlines for a number of years as a failure. The system was fundamentally unstable. The Parliamentary Public Accounts Committee investigated, and it was shown that the developers were forced to accept 2,500 change requests in a three-year project. It was clear that, when asked, the Agency head had no idea just how out of control his requirements team had been. The result was a debt visible to the public, but what was not visible was the cost of fixing the resultant mess. That, I suspect, took several years and a lot of money. The head of the CSA resigned.
So why do parts of our industry still persist in seeing non-financial measurement as an awkward overhead? Answers on a postcard please.