Believable Estimates

Any level-of-effort estimate for a software product, no matter how well developed, is very unlikely to be 100% accurate. However, that doesn't mean the estimate shouldn't be believable.

In this case, believable means that the customer believes that the estimate is as close to accurate as possible, based on the current information, with an understanding that the outcome may change.

When an estimate is believable, it's easier to communicate with the client and to manage the project. Download my article, "Believable Estimates," to learn how to create such an estimate using Function Point Analysis.



David Herron
VP, Software Performance Management


Written by David Herron at 05:00

Finding the Right Service-Level Measure

There is a recent trend in outsourcing toward smaller, shorter-term deals and an increase in the use of multiple vendors (usually referred to as multi-sourcing). But while outsourcing arrangements may be changing and customers continue to look for greater efficiencies, much remains the same with regard to contract governance challenges. Proper service-level measures are a necessity in outsourcing contracts (and especially in multi-sourcing contracts) to mitigate such challenges.

Typically, these contracts are priced on the basis of labor cost. But an effective contract takes other dimensions into consideration as well. A successful outsourcing arrangement is one that delivers a high quality product that meets the needs of the business (value) for a reasonable price. Thus price, value and quality should serve as the focus.

If you're interested in learning more about these measures and how to create a successful outsourcing arrangement, download "Finding the Right Service-Level Measure in a Changing Outsourcing Landscape" here.


David Herron
VP, Software Performance Management

Written by David Herron at 05:00

ISMA10 - Another Great Conference!

IFPUG's recent ISMA10 conference, held in Charlotte, North Carolina, was, by most any measure (even function points!), a success. The theme of the conference, "Creating Value from Measurement," was certainly realized during the day-long series of presentations delivered by the many professionals and experts in the field.

Prior to the presentations, two workshops were held to kick off the conference. A two-day SNAP training session introduced the principles of the Software Non-Functional Assessment Process (SNAP) and prepared the attendees for the upcoming SNAP certification exam. The second workshop, "Applying Function Points to Emerging Business Technologies," provided insights and examples into some of the more advanced function point sizing scenarios. (World-renowned function point guru David Herron made a guest appearance and lectured the class on how much better life can be when you have a steady diet of fun and function points. Haha!)

After the workshops, the conference focused on committee meetings, and the SNAP certification exam was made available to those daring individuals who attended the two-day SNAP workshop. 

The final day of the conference was devoted to the eight presentations on the agenda, starting with a fascinating opening keynote, "Digital Forensic: The Evidence Left Behind," presented by local lawyer Clark Walton.

There were several talks that discussed how organizations were using function points and story points to better manage their Agile projects. Of course, measurement data collection, analysis and reporting were front-and-center in three of the presentations, as presenters shared their data and insights regarding productivity gains and improved estimating practices using functional measures.  And a tip of the hat goes to George Mitwasi from United Health, who shared his truly pioneering work in the area of advancing the use of SNAP with his presentation, "Integrating SNAP into an Established FP based Estimation and Measurements Program."

As always, it was a great event and we are already looking forward to returning next year.

For further information on future IFPUG events and conferences, visit www.ifpug.org.


David Herron
Vice President, Software Performance Management

Written by David Herron at 05:00

Estimating – Have the Discussion

Doesn't it seem like the phrase "estimating accuracy" is something of an oxymoron? If an estimate is an approximation, then how accurate should we expect an estimate to be? And what should your customer expect with regard to an estimate he or she is given?

A proper estimating procedure considers three primary input parameters: the size of the project, the complexity of the project and the delivery team's ability to produce the desired deliverable. In order to produce an "accurate" estimate, all three parameters need to be accurately represented. The entire estimating process depends on having detailed information available upon which to base the estimate. Typically, during the lifecycle of a software project, additional information becomes available as the project progresses. Quite often, early estimates are understood to be accurate only to within +/- 100%. This is usually understood and accepted within the organization.
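The arithmetic behind combining those three inputs can be sketched as follows. This is an illustrative model only, not a prescribed formula; the function name, the complexity factor, and the sample numbers are all invented for the example:

```python
def estimate_effort_hours(size_fp, complexity_factor, team_hours_per_fp):
    """Illustrative effort estimate from the three primary inputs:
    project size (in function points), a complexity multiplier, and
    the delivery team's historical hours per function point."""
    return size_fp * team_hours_per_fp * complexity_factor

# Example: a 200 FP project, a 20% complexity uplift, and a team
# that historically delivers at 8 hours per function point.
effort = estimate_effort_hours(200, 1.2, 8)  # 1920 hours
```

As the text notes, the output is only as good as the three inputs; refining any of them as better information arrives changes the estimate.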

Based on this common occurrence, it is easy to make the argument that most estimating practices do not result in accurate predictions. So what is the value in investing time to estimate a project?  Perhaps we should change our perspective and look not at the end result, but instead at the process of estimating. We have already said that accuracy is dependent upon getting or having the right information. Well, the source of that information varies. It can come from documentation or from the local subject matter expert. And, some of the information will be incomplete or subject to interpretation. Further, some necessary information may be missing altogether, thereby increasing the risk of an inaccurate estimate. 

But, it's important to remember that the key player in the estimating process is your customer or end user. The proper perspective to take for a successful estimating practice is to make sure that you have properly set expectations. Yes, the estimate is not going to be 100% accurate. But, by engaging the customer, several things occur. First, the customer has an increased level of awareness with regard to what goes into an estimate. Their expectations are now properly set in terms of potential risks and impacts. And, ideally, they have some accountability in the accuracy of the estimate.

Having a formal discussion with your customer about the risk factors inherent to an estimate and the potential impact they may have is likely to be as valuable as (if not more valuable than) the so-called accuracy of the estimate.


David Herron
VP, Software Performance Management

Written by David Herron at 05:00

How to: Manage Vendor Performance

There are a variety of reasons why we outsource our application development and maintenance activities to third-party vendors. There are an equal number of ways that the contractual arrangements for these engagements are drawn up and executed. In the majority of these engagements, the service-level measures most often present focus on vendor performance. How effectively we apply these measures usually makes a difference in how well we govern the engagement and how accurately we measure the value received.

Vendor performance may include measures such as on-time delivery, quality, productivity, and cost. In practical terms, all measures of vendor performance can be summed up in three dimensions: cost, quality, and value. A successful outsourcing arrangement is one that is competitively priced (cost), delivers a working software product (quality), and meets the needs of the business (value). Service-level agreements and measures need to be established to take these three dimensions into account.

As you might suspect, the challenge is to be able to properly measure the value dimension. One could successfully argue that value equates to delivering within budget and/or producing a quality deliverable. Cost and quality measures are easily defined and measured. But what about the measure of value delivered to the business? Furthermore, how do you equate cost to value? How can we ensure that we are getting a good price for the value being delivered?

A Metric to Measure Cost and Value

An ideal scenario would be to have a metric that would measure both cost and value. For example, in manufacturing, output is often measured as a cost per unit of work (a unit of work representing a high quality delivered product that brings value to the business). So, what is our cost per unit of work for software?

If we can agree that software delivers value to the customer in the form of business functionality, then our “unit of work” measure for software can equate to a measurement of functionality being delivered to the business. Fortunately, there is an industry standard measure called function points that does exactly that. Function points are a measure of the features and functions being developed and delivered to an end user. Function Point Analysis defines features and functions as they relate to things such as input transactions, output displays and reports, inquiries on data values, groupings of maintained data, and interfaces to other applications.
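As a sketch, an unadjusted function point count sums those five component types multiplied by their complexity weights. The weights below are IFPUG's commonly published average-complexity weights; the sample application counts are invented for illustration:

```python
# IFPUG average-complexity weights for the five function types:
# EI = external input, EO = external output, EQ = external inquiry,
# ILF = internal logical file, EIF = external interface file.
AVG_WEIGHTS = {"EI": 4, "EO": 5, "EQ": 4, "ILF": 10, "EIF": 7}

def unadjusted_fp(counts):
    """Sum each function type's count times its average weight."""
    return sum(AVG_WEIGHTS[ftype] * n for ftype, n in counts.items())

# Hypothetical application: 10 inputs, 6 outputs, 4 inquiries,
# 5 internal files, 2 external interfaces.
app = {"EI": 10, "EO": 6, "EQ": 4, "ILF": 5, "EIF": 2}
size = unadjusted_fp(app)  # 150 unadjusted function points
```

A full count classifies each item as low, average, or high complexity rather than applying the average weight across the board, but the structure of the calculation is the same.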

Using the function point measure, coupled with a measure of cost, we can easily produce a cost per function point, which serves as our cost per unit of work. Using some historical performance measures, we can develop a standard cost per unit of work and then use that standard as a benchmark to measure third-party vendor bids and deliverables.
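The benchmark comparison described above reduces to simple division. The dollar amounts and sizes below are invented purely to illustrate the mechanics:

```python
def cost_per_fp(total_cost, function_points):
    """Cost per unit of work: total cost divided by delivered size."""
    return total_cost / function_points

# Hypothetical internal baseline from historical projects:
# $1.2M spent delivering 1,500 FP.
baseline = cost_per_fp(1_200_000, 1500)   # $800 per FP

# Hypothetical vendor bid: $450K for an estimated 500 FP deliverable.
bid = cost_per_fp(450_000, 500)           # $900 per FP

# Premium relative to the benchmark.
premium = (bid - baseline) / baseline     # 0.125, i.e. 12.5% above
```

The same ratio can be tracked release over release to see whether the vendor's cost per unit of work is trending toward or away from the organization's historical standard.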

A Sound Negotiating Position With Vendors

With this information available, an organization now has the basis for a sound negotiating position with an outsourcing vendor, with deliverables based on this cost per unit of work. The function point size metric can also be used to effectively measure the quality of the software deliverable. There are several common function-point-based quality metrics, the most notable one being defect density. This is often calculated as the number of delivered defects per 1000 FPs.   
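The defect density metric mentioned above is a straightforward ratio; the sample numbers are illustrative:

```python
def defect_density(delivered_defects, function_points):
    """Delivered defects per 1,000 function points."""
    return delivered_defects * 1000 / function_points

# Hypothetical release: 18 defects found in production across a
# 4,500 FP application portfolio.
density = defect_density(18, 4500)  # 4.0 defects per 1000 FP
```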

In conclusion, an ADM outsourcing arrangement that includes service-level agreements measuring both quality and value (cost per unit of work) is more likely to be successful and well governed. It is to both parties’ advantage to consider these metrics. The value for the client is an assurance that deliverables are priced right. The value for the provider is the ability to demonstrate that what was promised was delivered within budget.


David Herron
VP, Software Performance Management

Written by David Herron at 05:00

"It's frustrating that there are so many failed software projects when I know from personal experience that it's possible to do so much better - and we can help." 
- Mike Harris, DCG President

Subscribe to Our Newsletter
Join over 30,000 other subscribers. Subscribe to our newsletter today!