As some of you may know, here at the David Consulting Group we are acknowledged to be the world's leading independent provider of function point analysis services. I say that by way of acknowledging a potentially vested interest in what follows.

One of our clients is taking an honest and critical look at its software development metrics and processes. It has bitten the bullet and is using DCG to help it review all of its software development governance and measurement. For them, as for many of you, it's that budget time of year. Accordingly, they have asked us to prepare "top-down" and "bottom-up" benchmarks (see explanations of these below) for their software development. They are comfortable with the quality of their team, but they want to be sure they are maximizing the amount of their budget that goes to new development (as opposed to maintenance). They are also interested in validating such things as their location strategy and the span of control in the development management teams. All good stuff - this is a client that takes software development measurement seriously.

Enough background; my main point here is that this client once used function points but gave them up because they did not seem to be worth the effort. The business never looked at them and couldn't see that the small incremental effort to generate them was justified.
They are now regretting this decision. Our top-down benchmark shows that the cost per FTE of the development group is excellent, but now we need to know: how good is the output per FTE? Are they getting what they pay for - less output than their competitors - or are they truly getting comparable output for less cost? They have asked me how they can measure output.

While we are function point analysis experts, we are not function point bigots, so we have been working to try to find other units of output. It's not easy. Function points are not perfect, but for measuring the size of software they are the best that is available. The best proxy for a non-FP output metric is often "number of projects in releases," but everyone knows how weak that is. Of course, the client suggested effort hours but quickly agreed with me that effort hours are an input metric, not an output metric. We are now considering whether function points need to be a part of the client's future as well as its past!

*********************************************************************************************

For reference purposes, here at DCG we describe "top-down" benchmarks as benchmarks that look at high-level aggregate data for a company, e.g.
revenue, number of development staff, number of IT staff, number of locations, etc., to calculate metrics that can be used to compare companies. This is the sort of information that is often available in companies' annual reports or other published sources. It can also be found in the survey-based reports available from various companies. It is an interesting but, in our experience, not very precise form of benchmark. At this high level, and when survey data is involved, it is almost impossible to be sure that you are comparing like for like. That said, it has its place.

"Bottom-up" benchmarking is our preference. For this, we take or calculate data from a representative set of projects in the target organization and compare it to data from other organizations' projects, either in our own extensive database or in other publicly available project databases such as the ISBSG data. While there is still the risk of some ambiguity, it is much easier to test for and control in "bottom-up" benchmarking.

If you are looking for a software development benchmark, please make sure that the consultant you choose does not do a top-down analysis and then use it with some broad conversion factors (e.g., the number of function points that one FTE can or should produce per day/month/year) to extrapolate "bottom-up" results. That can produce wildly misleading results.
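To make that warning concrete, here is a minimal sketch of the two approaches side by side. Every figure in it is invented for illustration - the conversion factor, headcount, and project sizes are assumptions, not DCG or ISBSG data - but it shows how a broad function-points-per-FTE factor can drift well away from a rate actually measured from a representative set of projects.

```python
# Illustrative only: all figures below are invented for this example,
# not DCG or ISBSG benchmark data.

# A broad "industry" conversion factor a top-down analysis might assume:
assumed_fp_per_fte_per_month = 10

# Top-down extrapolation from headcount alone:
fte_count = 100
months = 12
extrapolated_output_fp = assumed_fp_per_fte_per_month * fte_count * months

# Bottom-up: size a representative set of projects directly.
# Hypothetical project sizes (function points) and effort (person-months):
projects = [
    {"fp": 450, "person_months": 60},
    {"fp": 300, "person_months": 50},
    {"fp": 700, "person_months": 80},
]
total_fp = sum(p["fp"] for p in projects)
total_pm = sum(p["person_months"] for p in projects)
measured_fp_per_pm = total_fp / total_pm  # rate actually observed

# Scale the measured rate to the same headcount and period:
bottom_up_output_fp = measured_fp_per_pm * fte_count * months

print(f"Top-down extrapolation: {extrapolated_output_fp:,.0f} FP")
print(f"Bottom-up estimate:     {bottom_up_output_fp:,.0f} FP")
```

With these made-up numbers the top-down figure overstates annual output by roughly a quarter; with a different assumed factor it could just as easily understate it. The point is not the specific gap but that the extrapolated number is only as good as the conversion factor, while the bottom-up rate is grounded in measured projects.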