An Introduction to Functional Size and Function Points: Part 1

Function points are a measure of the functional size provided to the user by an application. The user is any entity (either a human or another application), outside of the application being measured, that considers the function to be important. Size is determined by applying one of several rule sets, including those outlined by the International Function Point Users Group (IFPUG) or the Common Software Measurement International Consortium (COSMIC). It is important to note that these methods do not measure size in terms of the number of lines of code, but rather in terms of what the user can ultimately see and use.

Sizing software helps to establish an appropriate timeline for completing the work, as well as to appropriately allocate resources, including both team members and budget. But how does this relate to the testing process? The adage, “What you can’t measure, you can’t manage,” applies here. 

Because testing is a preventative activity, it can be difficult to determine its importance or how much effort should be applied to it. Coding, by contrast, reveals its value more readily, as it provides clear-cut value by fulfilling requirements. This difference in visibility can cause testing to be less emphasized than other steps, when it should ideally be emphasized throughout the development process. What is needed is a method of making the importance and effectiveness of testing more visible and measurable. Function points can help with this, providing measures that can be applied in test planning and in measuring the effectiveness of testing.

First Step: Estimating Testing Requirements

One of the measures that is of particular use in testing is test case coverage[i], which is defined as:

Number of test cases / Total number of function points

This provides a measure of how thorough the testing process is for a given project. If the test case coverage from a similar project that was determined to have been successful (perhaps using a measure like defect density, described in the next section) is available, this can be used to give an estimate of software test cases needed on a new project. Once the number of function points is known, simply use the number of test cases that gives similar coverage. Since function points can be measured based on requirements, rather than waiting until coding, a good idea of testing needs becomes apparent early in the process. This can be especially helpful when using defect prevention techniques (done throughout development), rather than defect removal (traditionally done at the end of development).
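
As a rough sketch (the figures and function names below are invented purely for illustration), the arithmetic for deriving a test case target from a comparable baseline project might look like this:

```python
def test_case_coverage(test_cases: int, function_points: int) -> float:
    """Test case coverage = number of test cases / total function points."""
    return test_cases / function_points


def estimate_test_cases(new_project_fp: int, baseline_coverage: float) -> int:
    """Apply a baseline coverage ratio to the new project's functional size."""
    return round(new_project_fp * baseline_coverage)


# Baseline: a similar project judged successful (e.g. acceptable defect density).
baseline = test_case_coverage(test_cases=450, function_points=300)   # 1.5 per FP

# New project sized at 200 FP from its requirements.
print(estimate_test_cases(new_project_fp=200, baseline_coverage=baseline))  # 300
```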

Initially, a software development organization may not have the metrics necessary to determine exactly how much testing is necessary for a given project. This does not have to be an impediment to using software measurement to get an initial estimate. There are a number of sites that publish benchmarking statistics, such as the International Software Benchmarking Standards Group (www.isbsg.org), which can be used in creating initial plans. If industry data is not available for a similar project, then it is still possible for experienced testers to give an estimate based on the expected size of the final product. Once the project is finished, continued measurement can be used to help in test planning for future projects, as well as to determine if the project met expectations.

Measuring Quality

Another good measure used in testing, and an indicator of both testing effectiveness and software quality, is defect density[i], defined as:

Total number of defects / Total function points

Defect density can also be compared to industry data or to previously completed projects to determine if the software meets standards. It does so in a neutral fashion, unlike some measures, such as cost per defect, which can penalize efforts at software quality[ii]. Ultimately, testing should be part of the process of improving quality, and measuring its effects on defect density is one way to track changes in quality.
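
A minimal sketch of the same idea, again with made-up figures, showing defect density tracked against an assumed benchmark:

```python
def defect_density(defects: int, function_points: int) -> float:
    """Defect density = total defects / total function points."""
    return defects / function_points


benchmark = 0.7   # assumed benchmark (defects per FP), organizational or industry
observed = defect_density(defects=96, function_points=160)   # 0.6 defects per FP

status = "within" if observed <= benchmark else "above"
print(f"Observed: {observed:.2f} defects/FP ({status} the benchmark of {benchmark})")
```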

Maintenance and Process Improvement

Once the software development project is finished, there is still the process of maintaining it, and at some point the software may require re-engineering. The choice to re-engineer, like most business decisions, should be made at the point when it is likely to be most cost effective. Function points can be used in this decision as well. The amount of maintenance an application requires per function point can give an idea of when re-engineering should take place: the more maintenance effort spent on an application that provides a given amount of benefit (measured in function points), the more likely it is that the application needs to be re-worked[iii]. This judgment should be based on company standards or industry standards, as appropriate for the project.

Measures are also necessary for the continuous improvement of processes. While the data may not be available at the beginning of the measurement effort, perhaps because industry data from a similar project does not exist, as data is collected and analyzed it can be used to improve future projects. This data collection is a requirement of certain testing models, such as the Testing Maturity Model integration (TMMi)[iv]. Level Four of the TMMi model, Measured, requires test measurement, so function points and the analysis that can be based on them are one way to fulfill that need.

Conclusion

Software testing can be a challenge to track in a rigorous fashion. It does not itself create easily seen benefits, but instead usually prevents issues down the line. By combining function point sizings and other information, the very real benefits of testing can be more easily seen, analyzed and compared with other projects. This makes the sizing potentially valuable, allowing testing to be planned earlier in the process, giving a measure of effectiveness and allowing maintenance planning. By making the test process measurable, function points can facilitate process improvement, such as in the TMMi model. Perhaps most importantly, function points can be used to demonstrate the importance of testing to non-testers, which often includes managers. That, by itself, may be a sufficient reason to use them.

Further Reading

 

i. Function Point Analysis, David Garmus and David Herron, pages 37-39

ii. Function Points as a Universal Software Metric, Capers Jones

iii. Using Function Points, http://www.softwaremetrics.com/Articles/using.htm

iv. Test Maturity Model integration, Erik van Veenendaal and Brian Wells

 

Written by Default at 11:56

Can Function Points Be Used to Estimate Code Complexity?

Software code can get really complex, and we all agree that complex code is harder to maintain and enhance. This leads to lower business value for the application, as the inevitable changes required for the application to keep pace with a changing environment cost more and more to implement for a given functional size. Can function points be used to estimate code complexity?

Consequently, I was a little surprised to see the title, “Cozying up to Complexity,” at the head of a book review by Robert W. Lucky in the January 2017 edition of the IEEE Spectrum.  Lucky reviewed the new book by Samuel Arbesman, “Overcomplicated.” Lucky identifies Arbesman’s three key drivers of increasing technological complexity: “accretion”, “interconnection”, and “edge cases”.  Accretion is the result of the continual addition of functionality on top of and connecting existing systems.  Connecting ever more systems leads to interconnection complexity.  Edge cases are the rare but possible use cases or failure modes that have to be accounted for in the initial design or incorporated when they are discovered.  Over time, these edge cases add a lot of complexity that is not apparent from majority uses of the system.  Increased software complexity can be a problem for outsourcing software development because more complex code is more difficult to maintain and more difficult to enhance.  This becomes a problem for software vendor management as costs go up due to reduced vendor productivity.

There are measurements and metrics for software complexity, but Lucky reports that Arbesman's suggested responses to complexity include the novel idea that we should not take a physicist's mathematical view and try to build a model. Instead, we should take a biologist's view: record the complexity we find (e.g. in nature) and look for patterns that might repeat elsewhere. Arbesman does not necessarily see increased complexity as a bad thing.

If we accept that some level of complexity is a good and necessary thing to achieve the "magic" of current and future software capabilities, I wonder if there is a way to identify the point of maximum useful complexity. Perhaps "useful complexity" could be measured in function points per line of code? Too much complexity would be indicated by a high "useful complexity" value – trying to shoehorn too much functionality into too few lines of code. At the other end of the spectrum – what Arbesman might refer to as his edge cases – we might see too little functionality being delivered by too many lines of code.

My train of thought was as follows:

  • A program with zero functionality (and zero function points) may have complexity but I’m going to exclude it.
  • A program with 1 function point must have some lines of code and some small complexity.
  • For a program with a reasonable number of function points, I (as a former ‘C’ programmer) could make the program more complex by reducing the number of lines of code.
  • Adding lines of code could make the program less complex and easier to maintain or enhance by spreading out the functionality (and adding explanatory comments, although these don't usually count as lines of code) – up to a certain point, after which diminishing returns would apply. The question is where that point is.
  • There must also be a certain complexity inherent in coding a given number of function points. This implies a lower limit on complexity for a fixed number of function points.
  • This suggests that, for a given number of function points, there might be a non-linear inverse relationship between complexity and lines of code.
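
To make the "function points per line of code" idea above concrete, here is a minimal sketch; the threshold values are invented purely for illustration and would need calibration against real data:

```python
def useful_complexity(function_points: float, lines_of_code: int) -> float:
    """'Useful complexity' as proposed above: function points per line of code."""
    return function_points / lines_of_code


def interpret(fp_per_loc: float, low: float = 0.005, high: float = 0.05) -> str:
    """The low/high thresholds are invented for illustration, not calibrated."""
    if fp_per_loc > high:
        return "possibly too dense: much functionality squeezed into few lines"
    if fp_per_loc < low:
        return "possibly too dilute: many lines delivering little functionality"
    return "within the (hypothetical) useful range"


ratio = useful_complexity(function_points=100, lines_of_code=8000)   # 0.0125 FP/LOC
print(f"{ratio:.4f} FP/LOC -> {interpret(ratio)}")
```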

I’d welcome people’s ideas on this topic.  Thoughts?

Written by Michael D. Harris at 10:03

How Software Estimation Impacts Business Value

Software estimation, in simple terms, is the prediction of the cost, effort, and/or duration of a software development project based on some foundation of knowledge. Once an estimate is created, a budget is generated from the estimate, and the flow of activity (the planning process) runs from the budget.

Software estimation can significantly impact business value because it impacts business planning and budgeting. 

One challenge is that most organizations have a portfolio of software development work that is larger than they can accomplish, so they need a mechanism to prioritize projects based on the value they deliver to the business. This is where estimation can help: estimates predict the future value of a project to the business and the cost of the project in resources and time. Unfortunately, estimates are often created by the people performing the actual day-to-day work, not by estimation experts. Worse, new estimates from the people doing the work are typically based on their recall of previous estimates, not on previous project actuals – very few organizations take the time to record actuals after a project is completed. To estimate a software development project's future business value most accurately, it is best to generate the estimate from the actuals of similar past projects and statistical modelling of the parameters that are different for the next project.
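
As a sketch of what estimating from actuals can look like in practice (the data points and the simple linear model are assumptions chosen for illustration, not a recommended parametric model):

```python
# Hypothetical sketch: predict effort for a new project from past project actuals
# using a simple least-squares fit of effort (hours) against size (function points).
past_sizes = [120, 250, 400, 600]        # function points of completed projects
past_efforts = [900, 1700, 2900, 4300]   # recorded actual effort hours for each

n = len(past_sizes)
mean_x = sum(past_sizes) / n
mean_y = sum(past_efforts) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(past_sizes, past_efforts))
         / sum((x - mean_x) ** 2 for x in past_sizes))
intercept = mean_y - slope * mean_x

new_size = 320   # estimated function points for the next project
print(f"Estimated effort: {slope * new_size + intercept:.0f} hours")
```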

Of course, an estimate is only an estimate no matter who develops it.  You can’t predict all the factors that may require modifications to the plan.  This is where the estimation cone of uncertainty comes in.  The cone starts wide because there is quite a bit of uncertainty at the beginning around the requirements of a project.  As decisions are made and the team discovers some of the unknown challenges that a project presents, then the cone of uncertainty starts to get smaller towards the final estimate. 

With regard to business value, the cone of uncertainty is significant because of the impact that the rigid adoption of early estimates can have on the budgeting and planning processes, especially if the software development effort is outsourced.

 

I see software estimation as both a form of planning and an input to the business planning process. However, there is a significant cross-section of the development community that believes #NoEstimates is the wave of the future. This is a movement within the Agile community based on the premise that software development is a learning process that will always involve discovery and be influenced by rapid external change. They believe that this dynamic environment of ongoing change makes detailed, up-front plans a waste of time, since software estimates can never be accurate. Using #NoEstimates techniques requires breaking down stories into manageable, predictable chunks so that teams can predictably deliver value. The ability to predictably deliver value gives organizations a tool for forecasting delivery. In my view, the #NoEstimates philosophy isn't really "not estimating" – it is just estimating differently.

Whether you use classic estimation methodologies that leverage plans, and performance against those plans, to generate feedback and guidance, or follow the #NoEstimates mindset that uses functional software and throughput measures as feedback and guidance, the goal is usually the same. Both are forms of planning and inputs to the business planning processes, aimed at driving the business value of each software development initiative.

Written by Michael D. Harris at 11:16

How can I use SNAP to improve my estimation practices?

Scope of Report

This month's report will focus on how to improve estimation practices by incorporating the Software Non-functional Assessment Process (SNAP), developed by the International Function Point Users Group (IFPUG), into the estimation process.

Software Estimation

The Issue

Software development estimation is not an easy or straightforward activity. Software development is not like making widgets, where every deliverable is identical and the process is the same every time it is executed. Software development varies from project to project in requirements definition and in what needs to be delivered. Projects can also vary in the processes and methodologies used, as well as in the technology itself. Given these variations, it can be difficult to come up with a standard, efficient, and accurate way of estimating all software projects.

The Partial Solution

Software estimation approaches have improved, but the improvements have not been widely adopted. For many years, development organizations have relied on a bottom-up approach to estimation based on expert knowledge. This technique involves looking at all of the tasks that need to be performed and using Subject Matter Experts (SMEs) to determine how much time will be required for each activity. Organizations sometimes gather each expert's input separately, but often a Delphi method is used. The Delphi method was developed in the 1950s by the RAND Corporation. Per RAND, "The Delphi method solicits the opinions of experts through a series of carefully designed questionnaires interspersed with information and feedback in order to establish a convergence of opinion." The theory is that as the group converges, the estimate range will narrow and become more accurate. This technique, and similarly Agile planning poker, is still used, but it often relies on expert opinion rather than data.

As software estimation became more critical, other techniques began to emerge. In addition to the bottom-up method, organizations began to utilize a top-down approach, which involves identifying the total cost and apportioning it across the various activities that need to be completed. Initially this approach, too, was based more on opinion than fact.

In both of the above cases the estimates were based on tasks and costs rather than on the deliverable. Most industries quantify what needs to be built/created and then based on historical data determine how long it will take to reproduce. For example, it took one day to build a desk yesterday so the estimate for building the same desk today will also be one day.

The software industry needed a way to quantify deliverables in a consistent manner across different types of projects that could be used along with historical data to obtain more accurate estimates. The invention of Function Points (FPs) made this possible. Per the International Function Point User Group (IFPUG), FPs are defined as a unit of measure that quantifies the functional work product of software development. It is expressed in terms of functionality seen by the user and is measured independently of technology. That means that FPs can be used to quantify software deliverables independently of the tools, methods, and personnel used on the project. It provides for a consistent measure allowing data to be collected, analyzed, and used for estimating future projects.

With FPs available, top-down methodologies improved. The technique involves quantifying the FPs for the intended project, then looking at historical data from projects of similar size to identify the average productivity rate (FP/Hour) and derive the estimate for the new project. However, as mentioned above, not every software development project is the same, so additional information is required to produce the most accurate estimate.
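
A minimal sketch of this top-down calculation, with assumed figures rather than real historical data:

```python
project_fp = 350          # functional size of the intended project
historical_rate = 0.12    # assumed productivity from similar past projects (FP per hour)

effort_hours = project_fp / historical_rate
print(f"Estimated effort: {effort_hours:.0f} hours")   # ~2917 hours
```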

Although FPs provide an important missing piece of data to assist in estimation, they do not magically make estimation simple. In addition to FP size, the type of project (Enhancement or New Development) and the technology (Web, Client Server, etc.) have a strong influence on productivity. It is important to segment historical productivity data by FP size, type, and technology to ensure that the correct comparisons are being made. Beyond the deliverable itself, the methodology (waterfall, agile), the experience of personnel, the tools used, and the organizational environment can all influence the effort estimate. Most estimation tools have developed a series of questions around these 'soft' attributes that raise or lower the estimate based on the answers. For example, if highly productive tools and reuse are available, then the productivity rate should be higher than average and thus require less effort. However, if the staff are new to the tools, the full benefit may not be realized. Most estimation tools adjust for these variances, which are intrinsic to the organization's historical data.
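
One simple way such adjustments can be modelled is as multipliers applied to a baseline productivity rate; the attribute names and multiplier values below are invented examples, and real estimation tools use their own calibrated schemes:

```python
# Hypothetical sketch: adjust a baseline productivity rate with 'soft' attribute
# multipliers. The attributes and values are illustrative assumptions only.
baseline_rate = 0.12   # FP per hour from the organization's historical data

adjustments = {
    "highly productive tools and reuse available": 1.15,   # raises productivity
    "staff new to the tools": 0.90,                        # lowers productivity
}

adjusted_rate = baseline_rate
for factor in adjustments.values():
    adjusted_rate *= factor

print(f"Adjusted productivity: {adjusted_rate:.3f} FP/hour")   # 0.124 FP/hour
```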

At this point we have accounted for the functional deliverables and the tools, methods, and personnel involved. So what else is needed?

The Rest of the Story

Although FPs are a good measure of the functionality that is added, changed, or removed in a software development or enhancement project, there is often project work, separate from the functionality measured by FPs, that cannot be counted under the IFPUG rules. These are typically items defined as non-functional requirements. As stated in the IFPUG SNAP Assessment Practices Manual (APM), ISO/IEC 24765, Systems and Software Engineering Vocabulary, defines non-functional requirements as "a software requirement that describes not what the software will do but how the software will do it. Examples include software performance requirements, software external interface requirements, software design constraints, and software quality constraints. Non-functional requirements are sometimes difficult to test, so they are usually evaluated subjectively."

IFPUG saw an opportunity to fill this estimation gap and developed the Software Non-functional Assessment Process (SNAP) as a method to quantify non-functional requirements.

SNAP

History

IFPUG began the SNAP project in 2008 by initially developing an overall framework for measuring non-functional requirements. Beginning in 2009, a team began to define rules for counting SNAP, and in 2011 it published the first release of the APM. Various organizations beta tested the methodology and provided data and feedback to the IFPUG team to begin statistical analysis. The current version, APM 2.3, includes definitions, rules, and examples. As with the initial development of FPs, as more SNAP data is provided, adjustments will need to be made to the rules to improve accuracy and consistency.

SNAP Methodology

The SNAP methodology is a standalone process; however, rather than re-invent the wheel, the IFPUG team utilized common definitions and terminology from the IFPUG FP Counting Practices Manual within the SNAP process. This also allows for an easier understanding of SNAP for those that are already familiar with FPs.

The SNAP framework is comprised of non-functional categories that are divided into sub-categories and evaluated using specific criteria. Although SNAP is a standalone process it can be used in conjunction with FPs to enhance a software project estimate.

The categories and subcategories to be assessed are listed in full in the APM.

Each subcategory has its own definition and assessment calculation. That means each subcategory should be assessed independently of the others to determine the SNAP points for that subcategory. After all relevant subcategories have been assessed, the SNAP points are added together to obtain the total SNAP points for the project.

Keep in mind that a non-functional requirement may be implemented using one or more subcategories and a subcategory can be used for many types of non-functional requirements. So the first step in the process is to examine the non-functional requirements and determine which categories/subcategories apply. Then only those categories/subcategories are assessed for the project.

With different assessment criteria for each subcategory it is impossible to review them all in this report; however, the following is an example of how to assess subcategory 3.3 Batch Processes:

Definition: Batch jobs that are not considered functional requirements (they do not qualify as transactional functions) can be considered in SNAP. This subcategory allows for the sizing of batch processes that are triggered within the boundary of the application and do not result in any data crossing the boundary.

SNAP Counting Unit (SCU): User-identified batch job

Complexity Parameters:
  • The number of Data Elements (DETs) processed by the job
  • The number of Logical Files (FTRs) referenced or updated by the job

SNAP Points calculation: the SNAP points (SP) for the job are the number of DETs multiplied by a rate determined by the complexity band, which depends on the number of FTRs.

Result: The scheduling batch job references 2 FTRs, placing it in the High complexity band, so 10 SP per DET × 25 DETs = 250 SP.

Each non-functional requirement is assessed in this manner for the applicable subcategories and the SP results are added together for the total project SNAP points.
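
As a sketch of the mechanics only – the mapping from FTR count to complexity is omitted, and most of the SP-per-DET rates below are placeholders rather than the official APM values – totalling SNAP points for a set of assessed batch jobs might look like this:

```python
# The "high" rate of 10 SP per DET matches the worked example above; the other
# rates are illustrative placeholders, not the official APM values.
SP_PER_DET = {"low": 4, "average": 6, "high": 10}


def batch_process_snap(dets: int, complexity: str) -> int:
    """SNAP points for one batch job: DETs multiplied by the rate for its band."""
    return dets * SP_PER_DET[complexity]


jobs = [
    {"name": "scheduling job", "dets": 25, "complexity": "high"},   # 250 SP (as above)
    {"name": "archive job",    "dets": 12, "complexity": "low"},    # 48 SP (illustrative)
]

total = sum(batch_process_snap(job["dets"], job["complexity"]) for job in jobs)
print(f"Total SNAP points for this subcategory: {total}")   # 298
```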

SNAP and Estimation

Once the SNAP points have been determined, they are ready to be used in the software project estimation model. SNAP is used in the historical, top-down method of estimating, similar to FPs. The estimator should take the total SNAP points for the project and look at historical organizational data, if available, or industry data for projects with similar SNAP points to determine the average productivity rate for non-functional requirements (SNAP/Hour). Once the SNAP/Hour rate is selected, the effort can be calculated by dividing the SNAP points by the SNAP/Hour productivity rate. It is important to note that this figure is just the effort for developing/implementing the non-functional requirements. The estimator will still need to develop an effort estimate for the functional requirements, which can be done by dividing the FPs by the selected FP/Hour productivity rate. Once these two figures are calculated, they can be added together to get the total effort estimate for the project.

Estimate example:
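
The figures below are assumed purely for illustration; the sketch simply shows how the two effort figures are derived and combined:

```python
project_fp = 300        # functional size (function points)
project_snap = 250      # non-functional size (SNAP points)

fp_rate = 0.10          # assumed productivity: FP delivered per hour
snap_rate = 0.25        # assumed productivity: SNAP points delivered per hour

functional_effort = project_fp / fp_rate            # 3,000 hours
non_functional_effort = project_snap / snap_rate    # 1,000 hours

# Only the effort hours are added; FP and SNAP points themselves are never combined.
total_effort = functional_effort + non_functional_effort
print(f"Total estimated effort: {total_effort:.0f} hours")   # 4000 hours
```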

Note that the SNAP points and the FPs are not added together, just the effort hours. SNAP and FP are two separate metrics and should never be added together. It is also important to make sure that the same functionality is not counted multiple times between SNAP and FPs as that would be ‘double counting’. So, for example, if multiple input/output methods are counted in FPs they should not be counted in SNAP.

This initial estimate is a good place to start; however, it is also good to understand the details behind the SNAP points and FPs to determine if the productivity rate should be adjusted. For instance, with FPs, an enhancement project that is mostly adding functionality would be more productive than a project that is mostly changing existing functionality. Similarly, with SNAP, different categories/subcategories may achieve higher or lower productivity rates. For example, a non-functional requirement for adding Multiple Input Methods would probably be more productive than non-functional requirements related to Data Entry Validations. These are the types of analyses that an organization should conduct with their historical data so that it can be used in future project estimations.

FPs have been around for over 30 years so there has been plenty of time for data collection and analysis by organizations and consultants to develop industry trends; but it had to start somewhere. SNAP is a relatively new methodology and therefore has limited industry data that can be used by organizations. As more companies implement SNAP more data will become available to the industry to develop trends. However, that doesn’t mean that an organization needs to wait for industry data. An individual company can start implementing SNAP today and collecting their own historical data, conducting their own analyses, and improving their estimates. Organizational historical data is typically more useful for estimating projects anyway.

Conclusion:

An estimate is only as good as the information and data available at the time of the estimate. Given this, it is always recommended to use multiple estimation methods (e.g. bottom-up, top-down, Delphi, Historical/Industry data based) to find a consensus for a reasonable estimate. Having historical and/or industry data to base an estimate upon is a huge advantage as opposed to ‘guessing’ what a result may be. Both FP/Hour and SNAP/Hour productivity rates can be used in this fashion to enhance the estimation process. Although the estimation process still isn’t automatic and requires some analysis, having data is always better than not having data. Also, being able to document an estimate with supporting data is always useful when managing projects throughout the life cycle and assessing results after implementation.

Sources:

  • RAND Corporation, The Delphi Method, http://www.rand.org/topics/delphi-method

  • Counting Practices Manual (CPM), Release 4.3.1; International Function Point User Group (IFPUG), https://www.ifpug.org/

  • Assessment Practices Manual (APM), Release 2.3 (SNAP); International Function Point User Group (IFPUG), https://www.ifpug.org/

Written by Default at 05:00

How to Count Function Points from User Stories

I was recently involved in a consulting engagement where Agile methodologies were being implemented with User Stories as the documentation standard. The organization had used function points (FPs) for years on its waterfall projects and wondered if it could use them for its Agile methodology – and if User Stories would be a good input into the FP counting process. The answer I provided was a resounding "YES." Having User Stories is actually a huge advantage when counting FPs, especially early in the lifecycle, because User Stories are typically focused on the user perspective, just like FPs.

The only difficulty in using FPs in Agile methodologies is determining what to count and when to count. As with any metric, this always goes back to the purpose. For example, if you want to know the size of the final delivered product, then you count the FPs at the end of the project. If you want to estimate effort for a Sprint or Program Increment (PI), then you need to count at the beginning of the Sprint or PI.  The key is defining the purpose early in order to have access to what you need at the time of data collection.

When actually counting FPs from User Stories, there are a few tips that help with the process. Depending on the level of the User Stories, more questions or assumptions may be needed to get to an accurate FP count. There are also key words used in User Stories that may help identify FP components (e.g. Maintain, Report, Enter, Select). Often User Stories equate to transactional functions in FPs, so it is important for the FP analyst to identify data functions as they go along.
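
As a toy illustration of using key words as a first-pass screen (the stories and the hints attached to each word are made up; the key words themselves are those mentioned above, and an analyst still has to confirm every component):

```python
# Hypothetical screening aid: flag likely FP components in User Stories by key word.
# This is a screening aid only, not a counting rule.
KEYWORDS = {
    "maintain": "likely add/change/delete transactions plus a data function",
    "report":   "likely an external output or external inquiry",
    "enter":    "likely an external input",
    "select":   "likely an external inquiry, or data sourced from a logical file",
}

stories = [
    "As a clerk, I want to enter a customer order so that it can be fulfilled.",
    "As a manager, I want to report monthly sales by region.",
]

for story in stories:
    hits = [hint for word, hint in KEYWORDS.items() if word in story.lower()]
    print(story, "->", hits or ["no key words found; ask follow-up questions"])
```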

More tips and advice, including real-life examples, will be provided in my upcoming webinar, “Counting Function Points from User Stories,” taking place on Wednesday September 28, 2016 at 12:00 pm EST. Please register here. If you have any questions before the webinar, just leave a comment and I’ll be sure to address them during the presentation.

Lori Limbacher
Estimation Specialist; Certified Function Point Specialist (CFPS)

Written by Lori Limbacher at 05:00

"It's frustrating that there are so many failed software projects when I know from personal experience that it's possible to do so much better - and we can help." 
- Mike Harris, DCG President
