Beyond Benchmarks 101

I usually find that the benchmarking process is treated much too simplistically. Suppliers are often forced to sign up to black-and-white productivity targets by naïve, and sometimes overly aggressive, clients. A project is delivered, productivity is measured in terms of cost or effort per output, and then the fighting starts.

The thing is, life is complicated and single data points don’t tell a complete story.

Never Let the Facts Get in the Way of a Good Story

In today’s news-driven, overly simplistic world, single data points are taken to be representative of trends, or worse, are presented without context. In late 2013, there was a spate of fatalities amongst cyclists in London – about seven in a month – whereas in 2012 there were 14 in total. Cue sensation in the press and questions from politicians about the safety of cycling.

Cooler heads started to dissect the results, and sad though this spate was, the annual rate in 2013 was exactly the same as in 2012, while cycle journeys in London are increasing by more than 20 percent annually. Sure, the causes of these fatalities need to be analysed and lessons learned, but there was no need to panic.

In other words, hot spots do happen in random distributions, and hot spots don’t indicate long-term trends.

I really like the approach taken by the BBC programme, “More or Less.” There, professional statisticians dissect news stories and put the results in proper context – do go and download the podcasts. It is so refreshing to listen to, and it is the approach I have had to fight for throughout my career in software processes.

Using the Facts to Shape Thinking

But, back to software, let me give you an example of intelligent use of data.

We have a client who is keen to reduce software production costs – which client isn’t, right? Some time ago, this client had asked us, “Where will the next big savings come from?” Our response was, “Collect the data and we can discuss it with you.”

Our preliminary work focused on assessing the accuracy of estimates and measuring the results against those estimates.

As one might expect, the experienced teams were pretty good at estimating, and when benchmarked, the balance between defined speed of delivery and cost was close to the industry average.

Then we started to assess the data. Speed of delivery was faster than optimal for the size of delivery, a good thing in some eyes, but the cost per function point was higher because the team size had to be larger to create the software in the time available.

Comparing the sizes of completed projects, delivered over a year, against industry trends indicated that two options were open: reduce scope or increase duration. Reducing scope by 25 percent, from 200 function points to 150, suggested that savings of up to 30 percent in cost per function point were available. Increasing the development part of the lifecycle by three weeks indicated that savings of up to a startling 60 percent were available.
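To make the mechanics behind these options concrete, here is a minimal sketch of the underlying arithmetic: cost per function point is total delivery cost divided by delivered size, so shrinking scope (and the team with it) or stretching duration (allowing a smaller team) both drive the ratio down. All team sizes, rates, and durations below are hypothetical illustrations, not the client figures from the article.

```python
# Illustrative sketch only: hypothetical numbers showing how cost per
# function point can fall when scope shrinks or duration grows.
# None of these figures come from the client data discussed above.

def cost_per_fp(team_size, weekly_cost, weeks, function_points):
    """Total delivery cost divided by delivered function points."""
    return team_size * weekly_cost * weeks / function_points

# Baseline: 200 fp delivered quickly, which forces a large team.
baseline = cost_per_fp(team_size=10, weekly_cost=5000, weeks=12,
                       function_points=200)

# Option 1: cut scope by 25% (200 fp -> 150 fp); a smaller team suffices.
reduced_scope = cost_per_fp(team_size=5, weekly_cost=5000, weeks=12,
                            function_points=150)

# Option 2: keep scope but add three weeks; team size can drop further.
longer_duration = cost_per_fp(team_size=3, weekly_cost=5000, weeks=15,
                              function_points=200)

print(f"baseline:        {baseline:.0f} per fp")
print(f"reduced scope:   {reduced_scope:.0f} per fp "
      f"({1 - reduced_scope / baseline:.0%} saving)")
print(f"longer duration: {longer_duration:.0f} per fp "
      f"({1 - longer_duration / baseline:.0%} saving)")
```

The point of the sketch is the shape of the trade-off, not the exact percentages: compressing delivery inflates team size faster than it saves calendar time, so relaxing either scope or schedule lets the cost ratio fall sharply.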

The facts tell a good story and the reaction when we presented the results was truly wonderful to see. 

This is preliminary work, and life will get in the way, but it shows that intelligent use of benchmarks can be a positive experience for all concerned. Try it, it works.

 

Alan Cameron
Managing Director, DCG-SMS
