Automated function point counting can work for applications and releases, but manual counting is better, and necessary, for projects. This post covers some of the considerations that DCG has gone through for one of our clients.

Companies may have several goals in measuring their software using function points. For our client, it has been assumed to date that the single most important goal is to use the measurements:

A. To improve AT&T software development practices.

However, in seeking to recommend the best sizing strategy going forward, we have asked them what weighting they would put on this goal (out of 100%) alongside the following equally valid goals:

B. To baseline AT&T productivity for a future outsourcing program;
C. To set productivity objectives for managers as part of their compensation plan;
D. To compare the productivity of different development groups;
E. To establish portfolio size for maintenance effort planning;
F. To improve project estimating;
G. Another goal.

Our client's current plan for sizing project-level activities is to use an automated sizing tool and to collect sizing data at the application release level, by comparing the application size in proprietary automated function points before and after the release. Clearly, this approach is useful for achieving goal E and, to a limited extent, goals B, C and possibly D. However, it is not clear that goals B, C and D can be fully achieved with this approach, or that goal A can be achieved at all.

DCG has suggested that sizing at the project level (rather than the release level) is the more effective and accurate way to obtain the data needed to attain goal A, because measuring performance at the project level provides a greater opportunity to analyze process strengths and weaknesses within the SDLC.
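The release-level sizing plan can be sketched as a simple before/after difference. The numbers below are illustrative assumptions, not client data:

```python
# A minimal sketch of release-level sizing: the application is counted
# (by the automated tool) before and after a release, and the difference
# is attributed to the release. All figures here are invented for illustration.

size_before_fp = 1200   # automated function point count before the release
size_after_fp = 1350    # automated function point count after the release

release_size_fp = size_after_fp - size_before_fp
print(release_size_fp)  # size attributed to the whole release, in FP

# The limitation for goal A: if the release delivered several projects,
# this single delta cannot show how much each project contributed, nor
# which project's practices drove good or poor performance.
```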
At face value, it might seem that project-level (manual) counting is more costly than automated release counting because:

- There are more projects than releases. This is true, but DCG has found over the years that, with appropriate resource planning and management, the overhead typically associated with each count can be significantly reduced for regular counts on the same applications. Once that component is minimized, the effort (and hence cost) is proportional to the number of function points in the count, so breaking the counts up from releases into projects makes minimal difference to the effort.
- Data collection by client staff to support manual counts for applications has proved to be expensive in client resource time and interruptions to workflow. This is not well founded, since our experience to date on client project counts has been very positive, with minimal SME involvement.
- Automated counting of releases is quicker than manual counting of releases after the initial calibration has been done. Of course, automated counting will still be quicker and cheaper for releases, but it is not yet a reliable option for projects.

DCG's experience suggests that size data at the project level is of greater value than size data at the release level because:

- Measuring productivity at the project level gives a more accurate and realistic perspective of the performance of the SDLC. Specifically, every project is unique, in that there are any number of factors that can influence performance. When measuring at the release level, individual project performance is not easily distinguishable.
Consequently, it is more difficult to determine what has contributed to high or low performance levels and to draw lessons for improvement from good or bad practices (goal A).
- Measuring performance at the project level is in line with other client initiatives and therefore provides a basis of comparison (goals B, C, D).
- Historical data at the project level provides the basis for improved project estimating (goal F).
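The cost argument above can be made concrete with a simple effort model: each count carries a fixed per-count overhead plus effort proportional to the function points counted. The overhead hours, counting rate and FP figures below are illustrative assumptions, not DCG benchmarks:

```python
# Illustrative effort model: count effort = per-count overhead + rate * FP.
# One release of 600 FP is compared with the same work counted as three
# projects. All numbers are invented for illustration.

def count_effort(fp: int, overhead_hours: float, hours_per_fp: float) -> float:
    """Effort (hours) for one function point count."""
    return overhead_hours + hours_per_fp * fp

release_fp = 600                  # one release...
project_fps = [250, 200, 150]     # ...comprising three projects (same total FP)
HOURS_PER_FP = 0.05               # assumed counting rate

# With high per-count overhead, splitting a release into project counts costs more.
high_overhead = 8.0
release_cost = count_effort(release_fp, high_overhead, HOURS_PER_FP)
project_cost = sum(count_effort(fp, high_overhead, HOURS_PER_FP) for fp in project_fps)

# With overhead minimized (regular counts on the same applications), the gap
# between one release count and several project counts becomes marginal.
low_overhead = 1.0
release_cost_low = count_effort(release_fp, low_overhead, HOURS_PER_FP)
project_cost_low = sum(count_effort(fp, low_overhead, HOURS_PER_FP) for fp in project_fps)

print(release_cost, project_cost)          # high overhead: noticeable gap
print(release_cost_low, project_cost_low)  # low overhead: marginal gap
```

Under the high-overhead assumption the three project counts cost noticeably more than the single release count; once the overhead is driven down, the remaining effort is dominated by the FP-proportional term, which is identical either way.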