Talking Down to the Client
I’ve recently been talking to some clients about their experience of software development benchmarks, either as customer or supplier, and overwhelmingly they tell me it is rarely a good experience.
“Oh, I’m fed up with the whole process,” said one outsourcer. “Clients engage the blue chip benchmark suppliers, and, after a tussle and a lot of ill-will, a report lands on a desk. The results are presented as a simple answer, which is ‘the right answer,’ and we are left to scrap with our client over the results. The method of comparison and the dataset used – size and characteristics – aren’t made clear, and the result is unsatisfactory. In the worst cases, it can be contract threatening.”
I have some sympathy. Poorly created benchmarks can be misleading and in the worst cases can lead to court action. I encountered a situation once where a major outsourcer was producing a complex financial regulatory system, and the client decided to benchmark the programme. The results indicated that the software development was vastly inefficient and much too costly.
There was a dispute between client and outsourcer, which nearly went to court, until someone on the client side asked the question, “How many data points are we looking at, and what sort of applications were included in the sample?” There was a metaphorical shuffling of feet and a sheepish reply. Basically, there was a single data point in the same industry, and it referred to a CRM system. I don’t know who was more embarrassed, the client or the benchmarker.
Trying to commoditise the outputs may seem sensible, but the resultant model can be a gross simplification, and that is “slap in the face” benchmarking. Essentially, it’s a combative, adversarial process and it doesn’t work.
A more rational approach to benchmarking should involve a three-way discussion involving the client, outsourcer and benchmarker. Developing and enhancing applications is a skilled task, and it’s multi-dimensional. Benchmarks should reflect the things that matter to the client’s business; the benchmarker needs to be open and honest about the data used and must offer a range of answers to facilitate discussion.
We often hear about time, cost and quality, and when we benchmark, all three aspects have to be taken into account. I would add agility and flexibility to that list. Benchmark reports should balance software development business drivers against what’s being delivered. In a waterfall or similar process, where time is at a premium, either costs or defects tend to go up. Where quality is the driver, unit costs may rise because higher-skilled staff are used, but there may be better effort productivity. If cost is key, the speed of delivery and quality may be less than optimal.
Agile should be a game changer, which is where client agility and flexibility become as important as that of the supplier. Suppliers may be well versed in delivering working software of high quality in a short time, but if the client doesn’t understand the business goals and can’t adequately groom and prioritise the product backlog, then all the benefits can be lost. But that’s a story for another time.
Go in With Your Eyes Open
The result of a benchmark should be to identify any process and cost inefficiencies in both client and supplier processes against an informed backdrop.
When you ask for a benchmark:
- be sure what you’re asking for;
- demand clarity and transparency from the benchmarker;
- be prepared for rational discussion with the benchmarker and supplier;
- collaborate and understand – there is no right answer;
- be prepared to look inward – client processes can be as much the cause of sub-optimal performance as those of the supplier;
- don’t trust the man (or woman) with the simple answer to the complex question.
Unless, of course, you want a slap in the face.
Managing Director, DCG-SMS