Successful Software Deployment Strategies

On October 25, DCG Software Value will co-host the webinar “Successful Deployment Strategies – From Software Sizing to Productivity Measurement” with CAST. Presenters Philippe Guerin of CAST and Mike Harris of DCG Software Value will examine how effective quality benchmarking and productivity measurement translate into successful transformation initiatives that cost less and de-risk your IT organization.

Philippe Guerin & Mike Harris

As a senior consultant with a broad expertise in all phases of the SDLC, Philippe Guerin has significant experience leading both technology and organizational transformation initiatives in complex global environments, especially around productivity measurement and improvement programs.

Mike Harris, CEO of DCG, has more than 30 years of broad management experience in the IT field, including periods in R&D, development, production, business, and academia. He is an internationally recognized author and speaker on a range of topics related to the Value Visualization of IT and is considered a thought leader in the international software development industry.

Successful Deployment Strategies

Attendees will walk away from this webinar with broadened knowledge around successful deployment processes, including how portfolio visibility can help manage risk, complexity, and architectural quality.

Learn how to introduce proactive measurements that assess structural quality, risk, and vendor/ADM team output before a transformation, monitor key performance indicators during it, and continue to optimize applications afterward by establishing performance improvement and cost reduction goals.

Philippe and Mike will also address how to:

  • Monitor, track, and compare ADM teams’ utilization, delivery efficiency, throughput, and quality of outputs
  • Detect portfolio outliers, compare against competitors, identify improvement opportunities, and track the evolution of size, risk, complexity, and quality
  • Increase management's visibility of risk, quality, and throughput through enhanced Service Level Agreements 

Register Now 

IT leaders across all industries are invited to attend this 30-minute webinar exploring best practices in software sizing and measurement. Register now to join Philippe and Mike on October 25 at 11:00am EST.

Written by Default at 05:00

How can I use SNAP to improve my estimation practices?

Scope of Report

This month’s report will focus on how to improve estimation practices by incorporating the Software Non-Functional Assessment Process (SNAP), developed by the International Function Point Users Group (IFPUG), into the estimation process.

Software Estimation

The Issue

Software development estimation is not an easy or straightforward activity. Software development is not like making widgets, where every deliverable is identical and the process is executed the same way every time. Software projects vary in how requirements are defined and in what needs to be delivered, and they can also vary in the processes, methodologies, and technology used. Given these variations, it can be difficult to devise a standard, efficient, and accurate way of estimating all software projects.

The Partial Solution

Software estimation approaches have improved over the years, but the improvements have not been widely adopted. For many years, development organizations have relied on a bottom-up approach to estimation based on expert knowledge. This technique involves identifying all of the tasks that need to be completed and asking Subject Matter Experts (SMEs) how much time each activity will require. Organizations often gather this input from each expert separately, but a Delphi method is also common. The Delphi method was developed in the 1950s by the RAND Corporation. Per RAND, “The Delphi method solicits the opinions of experts through a series of carefully designed questionnaires interspersed with information and feedback in order to establish a convergence of opinion.” As the group converges, the theory goes, the estimate range narrows and becomes more accurate. This technique, like Agile planning poker, is still widely used, but it often relies on expert opinion rather than data.

As software estimation became more critical, other techniques began to emerge. In addition to the bottom-up method, organizations began to utilize a top-down approach, which involves identifying the total cost and dividing it across the various activities that need to be completed. Initially this approach, too, was based more on opinion than fact.

In both of the above cases the estimates were based on tasks and costs rather than on the deliverable. Most industries quantify what needs to be built/created and then based on historical data determine how long it will take to reproduce. For example, it took one day to build a desk yesterday so the estimate for building the same desk today will also be one day.

The software industry needed a way to quantify deliverables in a consistent manner across different types of projects that could be used along with historical data to obtain more accurate estimates. The invention of Function Points (FPs) made this possible. Per the International Function Point Users Group (IFPUG), FPs are defined as a unit of measure that quantifies the functional work product of software development. FP size is expressed in terms of the functionality seen by the user and is measured independently of technology. That means FPs can quantify software deliverables independently of the tools, methods, and personnel used on a project, providing a consistent measure that allows data to be collected, analyzed, and used to estimate future projects.

With FPs available the top-down methodologies were improved. This technique involves quantifying the FPs for the intended project and then looking at historical data for projects of similar size to identify the average productivity rate (FP/Hour) and determine the estimate for the new project. However, as mentioned above, not every software development project is the same, so additional information is required to determine the most accurate estimate.

Although FPs provide an important missing piece of data to assist in estimation, they do not magically make estimation simple. In addition to FP size, the type of project (Enhancement or New Development) and the technology (Web, Client Server, etc.) have a strong influence on productivity. It is important to segment historical productivity data by FP size, type, and technology to ensure that the correct comparisons are being made. Beyond the deliverable itself, the methodology (waterfall, agile), the experience of personnel, the tools used, and the organizational environment can all influence the effort estimate. Most estimation tools pose a series of questions about these ‘soft’ attributes and raise or lower the estimate based on the answers. For example, if highly productive tools and reuse are available, then the productivity rate should be higher than average, requiring less effort. However, if the staff are new to the tools, the full benefit may not be realized. Most estimation tools adjust for these variances, which are also reflected in an organization’s historical data.
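Segmenting historical data before computing a productivity rate might look like the following sketch; the project records and field layout here are hypothetical:

```python
from collections import defaultdict

# Hypothetical historical records: (project type, technology, FPs delivered, effort hours)
history = [
    ("Enhancement", "Web", 120, 1500),
    ("Enhancement", "Web", 200, 2400),
    ("New Development", "Client Server", 300, 5000),
]

def productivity_by_segment(projects):
    """Average productivity (FP per hour) for each (type, technology) segment."""
    totals = defaultdict(lambda: [0.0, 0.0])  # segment -> [total FPs, total hours]
    for ptype, tech, fps, hours in projects:
        totals[(ptype, tech)][0] += fps
        totals[(ptype, tech)][1] += hours
    return {seg: fp / hrs for seg, (fp, hrs) in totals.items()}

rates = productivity_by_segment(history)
# Enhancement/Web: (120 + 200) FPs over (1500 + 2400) hours = 320 / 3900 FP/hour
```

An estimator would then pick the rate from the segment that best matches the new project, rather than a blended rate across all project types.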

At this point we have accounted for the functional deliverables and the tools, methods, and personnel involved. So what else is needed?

The Rest of the Story

Although FPs are a good measure of the functionality that is added, changed, or removed in a software development or enhancement project, there is often project work separate from the FP measurement functionality that cannot be counted under the IFPUG rules. These are typically items that are defined as Non-Functional requirements. As stated in the IFPUG SNAP Assessment Practices Manual (APM), ISO/IEC 24765, Systems and Software Engineering Vocabulary defines non-functional requirements as “a software requirement that describes not what the software will do but how the software will do it. Examples include software performance requirements, software external interface requirements, software design constraints, and software quality constraints. Non-functional requirements are sometimes difficult to test, so they are usually evaluated subjectively.”

IFPUG saw an opportunity to fill this estimation gap and developed the Software Non-Functional Assessment Process (SNAP) as a method to quantify non-functional requirements.

SNAP

History

IFPUG began the SNAP project in 2008 by developing an overall framework for measuring non-functional requirements. Beginning in 2009, a team began to define rules for counting SNAP, and in 2011 it published the first release of the APM. Various organizations beta tested the methodology and provided data and feedback to the IFPUG team to begin statistical analysis. The current version, APM 2.3, includes definitions, rules, and examples. As with the initial development of FPs, adjustments to the rules will need to be made as more SNAP data is provided, in order to improve accuracy and consistency.

SNAP Methodology

The SNAP methodology is a standalone process; however, rather than re-invent the wheel, the IFPUG team reused common definitions and terminology from the IFPUG FP Counting Practices Manual within the SNAP process. This also makes SNAP easier to understand for those who are already familiar with FPs.

The SNAP framework comprises non-functional categories that are divided into subcategories and evaluated using specific criteria. Although SNAP is a standalone process, it can be used in conjunction with FPs to enhance a software project estimate.

The following are the SNAP categories and subcategories assessed:


Each subcategory has its own definition and assessment calculation. That means each subcategory should be assessed independently of the others to determine the SNAP points for that subcategory. After all relevant subcategories have been assessed, the SNAP points are added together to obtain the total SNAP points for the project.

Keep in mind that a non-functional requirement may be implemented using one or more subcategories and a subcategory can be used for many types of non-functional requirements. So the first step in the process is to examine the non-functional requirements and determine which categories/subcategories apply. Then only those categories/subcategories are assessed for the project.

With different assessment criteria for each subcategory it is impossible to review them all in this report; however, the following is an example of how to assess subcategory 3.3 Batch Processes:

Definition: Batch jobs that are not considered as functional requirements (they do not qualify as transactional functions) can be considered in SNAP. This subcategory allows for the sizing of batch processes which are triggered within the boundary of the application, not resulting in any data crossing the boundary.

SNAP Counting Unit (SCU): User-identified batch job

Complexity Parameters:
1. The number of Data Elements (DETs) processed by the job
2. The number of Logical Files (FTRs) referenced or updated by the job

SNAP Points calculation: The SCU is rated Low, Average, or High complexity based on the complexity parameters, and the SNAP points are the complexity weighting multiplied by the number of DETs. (The APM’s complexity table is not reproduced here.)

Result: The scheduling batch job references 2 FTRs, so it is rated High complexity: 10 × 25 DETs = 250 SP.

Each non-functional requirement is assessed in this manner for the applicable subcategories and the SP results are added together for the total project SNAP points.
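As a concrete sketch, the batch-process assessment can be expressed in code. The High-complexity weighting of 10 comes from the example above; the FTR thresholds and the Low/Average weightings below are illustrative placeholders, not the official APM values:

```python
def batch_process_snap_points(dets: int, ftrs: int) -> int:
    """SNAP points for one user-identified batch job (subcategory 3.3).

    Complexity is rated from the number of FTRs, and the SNAP points
    are the complexity weighting multiplied by the number of DETs.
    """
    if ftrs >= 2:
        weight = 10   # High complexity -- weighting taken from the example above
    elif ftrs == 1:
        weight = 6    # Average complexity -- assumed weighting
    else:
        weight = 4    # Low complexity -- assumed weighting
    return weight * dets

# The scheduling batch job from the example: 25 DETs, 2 FTRs
print(batch_process_snap_points(25, 2))  # 250 SP
```

In practice the thresholds and weightings should be taken from the APM itself; the point of the sketch is that each SCU is rated and scored independently, then summed.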

SNAP and Estimation

Once the SNAP points have been determined, they are ready to be used in the software project estimation model. SNAP is used in the historical top-down method of estimating, similar to FPs. The estimator should take the total SNAP points for the project and look at historical organizational data if available, or industry data, for projects with similar SNAP points to determine the average productivity rate for non-functional requirements (SNAP/Hour). Once the SNAP/Hour rate is selected, effort can be calculated by dividing the SNAP points by the SNAP/Hour productivity rate. It is important to note that this figure covers only the effort for developing and implementing the non-functional requirements. The estimator still needs an effort estimate for the functional requirements, which can be calculated by dividing the FPs by the selected FP/Hour productivity rate. These two figures are then added together for the total effort estimate for the project.

Estimate example:
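A worked example can be sketched as follows; the project sizes and productivity rates are illustrative figures only, and real rates should come from your own historical or industry data:

```python
def total_effort_hours(fps, fp_per_hour, snap_points, snap_per_hour):
    """Combine functional and non-functional effort estimates.

    Effort hours are added together; the FPs and SNAP points
    themselves are never added to each other.
    """
    return fps / fp_per_hour + snap_points / snap_per_hour

# Hypothetical project: 300 FPs at 0.5 FP/hour, 250 SNAP points at 0.25 SNAP/hour
effort = total_effort_hours(fps=300, fp_per_hour=0.5,
                            snap_points=250, snap_per_hour=0.25)
print(effort)  # 300 / 0.5 + 250 / 0.25 = 600 + 1000 = 1600.0 hours
```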

Note that the SNAP points and the FPs are not added together, just the effort hours. SNAP and FP are two separate metrics and should never be added together. It is also important to make sure that the same functionality is not counted multiple times between SNAP and FPs as that would be ‘double counting’. So, for example, if multiple input/output methods are counted in FPs they should not be counted in SNAP.

This initial estimate is a good place to start; however, it is also good to understand the details behind the SNAP points and FPs to determine if the productivity rate should be adjusted. For instance, with FPs, an enhancement project that is mostly adding functionality would be more productive than a project that is mostly changing existing functionality. Similarly, with SNAP, different categories/subcategories may achieve higher or lower productivity rates. For example, a non-functional requirement for adding Multiple Input Methods would probably be more productive than non-functional requirements related to Data Entry Validations. These are the types of analyses that an organization should conduct with their historical data so that it can be used in future project estimations.

FPs have been around for over 30 years so there has been plenty of time for data collection and analysis by organizations and consultants to develop industry trends; but it had to start somewhere. SNAP is a relatively new methodology and therefore has limited industry data that can be used by organizations. As more companies implement SNAP more data will become available to the industry to develop trends. However, that doesn’t mean that an organization needs to wait for industry data. An individual company can start implementing SNAP today and collecting their own historical data, conducting their own analyses, and improving their estimates. Organizational historical data is typically more useful for estimating projects anyway.

Conclusion

An estimate is only as good as the information and data available at the time of the estimate. Given this, it is always recommended to use multiple estimation methods (e.g. bottom-up, top-down, Delphi, Historical/Industry data based) to find a consensus for a reasonable estimate. Having historical and/or industry data to base an estimate upon is a huge advantage as opposed to ‘guessing’ what a result may be. Both FP/Hour and SNAP/Hour productivity rates can be used in this fashion to enhance the estimation process. Although the estimation process still isn’t automatic and requires some analysis, having data is always better than not having data. Also, being able to document an estimate with supporting data is always useful when managing projects throughout the life cycle and assessing results after implementation.

Sources:

  • RAND Corporation, http://www.rand.org/topics/delphi-method

  • Counting Practices Manual (CPM), Release 4.3.1; International Function Point Users Group (IFPUG), https://www.ifpug.org/

  • Assessment Practices Manual (APM), Release 2.3 (SNAP); International Function Point Users Group (IFPUG), https://www.ifpug.org/

Written by Default at 05:00

The Magic Quadrant for Software Test Automation

One of the most fundamental questions test engineers ask before starting a new project is which tools they should use to help create their automated tests. Luckily, Gartner issues a yearly report to address this issue. The report, “Magic Quadrant for Software Test Automation,” focuses specifically on functional software test automation and the UI automation facilities of tools. The use cases the report considers for each tool include:

  • They must support mobile applications
  • They must feature responsive design
  • They must support packaged applications

With those use cases as evaluation criteria, Gartner evaluated 12 major vendors:

1. Automation Anywhere
2. Borland
3. Hewlett Packard Enterprise
4. IBM
5. Oracle
6. Original Software
7. Progress
8. Ranorex
9. SmartBear
10. TestPlant
11. Tricentis
12. Worksoft

As part of its analysis, Gartner placed each vendor in one of four categories:

1. Leaders – Those who support all three use cases.
2. Challengers – Those who have strong execution but typically only support two of the use cases.
3. Visionaries – Those who generally focus on a particular test automation problem or class of user.
4. Niche Players – Those who provide unique functions to a specific market or use case.

Beyond that, the vendors were assessed by their ability to execute and their completeness of vision. In short, ability to execute is ultimately the ability of the organization to meet its goals and commitments. Completeness of vision is the ability of the vendor to understand buyers’ wants and needs and successfully deliver against them.

The Magic Quadrant

The result is the quadrant placement of each vendor. It’s important to mention that Gartner notes that most organizations typically have more than one automation tool provider. In addition, many of the solutions are still maturing – and will continue to mature over time.

Gartner updates the report on an annual basis, and it’s valuable to any organization that does testing. Testing, as we often say at DCG, is a key part of the development process, but it’s one that is often overlooked. The information in this report can enable organizations to make educated choices about software vendors, resulting in improved software quality and execution.

Read the article: “Magic Quadrant for Software Test Automation.”


Mike Harris
CEO

Written by Michael D. Harris at 05:00

Process Improvement and Small Organizations

The article in the March/April 2016 edition of IEEE Software, “Software Process Improvement in Very Small Organizations,” focuses on a topic that any reader with a small organization will find interesting: very small entities (VSEs) – those with 25 or fewer employees – occupy a large part of the software business. Not only do many VSEs offer software services directly to clients, they also often serve as outsourced providers for larger organizations, making them a crucial factor in those organizations’ success.

However, there is no software process framework in place for VSEs. Those commonly used in the industry, such as CMMI and SPICE, are difficult to apply in smaller organizations due to cost, time, or other factors.

As a result, the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) collaborated to publish the ISO/IEC 29110 series of standards and guides (available for free at http://standards.iso.org/ittf/PubliclyAvailableStandards/index.html), which has since been widely adopted. This series introduced the term VSE and includes process guidelines based on VSE characteristics.

If you read the IEEE article, you’ll find tables summarizing the most common improvement hurdles that VSEs face and the opportunities SPI offers them, based on decades of field experience in multiple countries.

In addition, the authors of the article are in the process of building an “experience factory,” helping VSEs to start process initiatives. VSEs can join the effort or benefit from the findings to-date (for free!).  

Why is this so interesting? Because in any organization – software or not – constant improvement is a necessity. In order to thrive and succeed, an organization must be looking for areas of improvement, best practices to follow, and an increase in quality. The easiest way to achieve that is via a framework that serves as a roadmap for change (like CMMI or the Scaled Agile Framework (SAFe)). Smaller software organizations have long been at a disadvantage because so many of the available frameworks were not created with them in mind.

This new set of standards and guides allows smaller organizations to reap the benefits of process improvement, while also contributing to the body of knowledge. Check it out and let us know what you think!

Read the article: “Software Process Improvement in Very Small Organizations,” IEEE Software.


Mike Harris
CEO

Written by Michael D. Harris at 05:00

Effective Queue Management Can Drive Software Business Value

Sticky Notes

Proper prioritization is essential to driving the business value of software. Those working in the trenches need a clear understanding of the end goal in order to prioritize their projects appropriately. When all team members focus on the same mission of maximizing business value by optimizing the flow of value through software development, better decisions can be made throughout the software development lifecycle.

The key element in properly prioritizing projects is effective queue management. On a daily basis, tactical decisions about task prioritization are made at the team level. In his book “The Principles of Product Development Flow: Second Generation Lean Product Development,” Donald Reinertsen offers six principles for creating a value management capability for queue management at the team level:

1. Software development inventory is physically and financially invisible
2. Queues are the root cause of the majority of economic waste in software development
3. Increasing resource utilization increases queues exponentially (but variability only increases queues linearly)
4. Optimum queue size is an economic trade-off
5. Don’t control capacity utilization, control queue size
6. Use cumulative flow diagrams to monitor queues
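Principle 6 can be illustrated with a short sketch: in a cumulative flow diagram, the queue at any point in time is the vertical gap between the cumulative arrival and departure curves. The weekly counts below are hypothetical:

```python
def queue_sizes(cumulative_arrivals, cumulative_departures):
    """Queue size over time from cumulative-flow data.

    At each interval, the queue is the gap between work that has
    entered development and work that has been delivered.
    """
    return [a - d for a, d in zip(cumulative_arrivals, cumulative_departures)]

# Hypothetical cumulative weekly counts of work items entering and leaving development
arrived  = [4, 9, 15, 22, 30]
finished = [2, 6, 11, 17, 24]
print(queue_sizes(arrived, finished))  # [2, 3, 4, 5, 6] -- the queue is steadily growing
```

A steadily widening gap like this is the early-warning signal Reinertsen recommends watching, long before deadlines slip.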

Often, decisions made by IT are based not on delivering business value but on the difficulty of the project, the resources required, or who is shouting the loudest to push their project to the top of the queue. This needs to change. IT departments need to prioritize their projects based on the business value they will deliver to the organization. Effective queue management is an essential component of making the right tactical decisions that will maximize the flow of business value in software development efforts.

What drives your decision-making process when determining what project to put in the queue next?


Mike Harris
CEO

Written by Michael D. Harris at 05:00

"It's frustrating that there are so many failed software projects when I know from personal experience that it's possible to do so much better - and we can help." 
- Mike Harris, DCG Owner
