Better Software: Agile Governance: Value Metrics That Work

Spring 2016

In this article from Better Software, Mike Harris discusses how to create top-down Agile metrics that are compatible with and extend traditional waterfall metrics.

Agile implementations, particularly Scrum, are rich in simple, team-level metrics such as story points, velocity, and burn-down charts. Unfortunately, these team-level metrics are not very useful for planning or monitoring across an entire software development organization. There is often a gap when attempting to measure an organization’s efficiency, economy, and effectiveness.

Software value visibility metrics are a better choice for governing your agile software development organization-wide. In this article, you will learn how to create top-down agile metrics that are compatible with and extend traditional waterfall metrics.

Current Practices

The following metric definitions are examples from a real waterfall organization:

Delivered as promised: Defined as whether the delivered software has the functionality promised in the requirements section of the project scope agreement.

Productivity: Defined as the number of function points delivered divided by the total number of work years of effort charged against the project from start to completion.

Timeliness: Defined as a comparison of the agreed-upon baselined implementation date against the current implementation date of record.

Quality: Defined as the number of defects delivered into production.

Accuracy of effort estimate: Defined as a comparison of the originally agreed-upon baselined effort estimate against the current effort estimate.
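To make these definitions concrete, here is a minimal sketch of how three of them reduce to simple calculations. The project figures and function names are illustrative, not taken from any real organization:

```python
# Illustrative calculations for three of the waterfall metrics above.
# All project figures are hypothetical.
from datetime import date

def productivity(function_points: float, work_years: float) -> float:
    """Function points delivered per work year of effort charged."""
    return function_points / work_years

def schedule_slip_days(baseline: date, current: date) -> int:
    """Timeliness: days between the baselined and current implementation dates."""
    return (current - baseline).days

def estimate_variance(baseline_effort: float, current_effort: float) -> float:
    """Accuracy of effort estimate: growth relative to the baseline."""
    return (current_effort - baseline_effort) / baseline_effort

print(productivity(400, 5))                                     # 80.0
print(schedule_slip_days(date(2016, 3, 1), date(2016, 4, 15)))  # 45
print(estimate_variance(10.0, 12.5))                            # 0.25
```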

Although these metrics are captured at the project level, they can be readily aggregated up to the program and organization level for governance purposes, so project progress is easily summarized. Without any significant understanding of software development methodologies, any executive can read and understand trend lines for these metrics on a dashboard.

In comparison, let’s look at how agile teams measure value. As part of sprint or release planning processes, some agile teams ask the product owner to assign relative value sizes, or value points, to individual stories or epics. From a governance perspective, these value points do not aggregate well, and few organizations even attempt to track these metrics.

The Metrics Challenge for Agile

We need agile metrics that can match the waterfall governance metrics, and anyone familiar with the Scrum methodology will recognize that the conversion is not straightforward for all of the metrics just highlighted. The waterfall metrics reflect assumptions built into the waterfall approach; the timeliness metric, for example, assumes the functionality is fixed and that it is difficult to predict when it will be complete and operational. Agile, on the other hand, fixes the date and focuses on how much of the functionality is operational by that date.

At least for the software development community, agile outperforms waterfall on two main fronts: value delivery and customer satisfaction. As agile practitioners and champions, we can smugly review the list of waterfall metrics mentioned earlier and observe that none of them directly measures business value delivery or customer satisfaction. It is a fair observation. But does agile really do any better?

From my experience, agile implementations tend to fall into the trap of delivering to executives the same reports that waterfall delivered. Figure 1 shows a set of default reports from a popular agile lifecycle management tool. It is not the only popular tool on the market, and others may do slightly better, but my point is that the reports listed at the executive, project, and sprint levels include no explicit references to value delivery or customer satisfaction.

Sample Agile Management Tool

Figure 1: Default reports from a sample agile management tool

There are reports that imply business value delivery, just as some of the waterfall metrics (e.g., timeliness) carry an implied value that may be business value. For example, at the sprint report level, work item cycle time is not a bad proxy for value delivery: on the assumption that work items are broadly similar in size, a trend toward delivering them more quickly implies more value delivered over a given period. There are two problems with this view of value delivery. First, work item cycle time is not visible in executive reports. Second, the assumption that all work items carry the same business value is unlikely to hold. Cumulative flow reports are available in both executive and sprint reporting. They are powerful tools for identifying bottlenecks that can be tackled to help maximize value flow, but in the absence of metrics for value, they cannot explicitly report value flow.

Customer satisfaction is entirely absent from executive, project/release, and sprint reports. Maybe that’s understandable in a tool that is focused on agile lifecycle management, but shouldn’t it be reflected in the dashboard at the executive level? Executives need to see summaries in addition to aggregate and detailed information about customer satisfaction. To meet the needs of executives, many organizations have separate groups and tools to conduct customer satisfaction surveys. This is better than nothing, but too often there is no connection between these high-level surveys and the agile teams who need fast, pertinent customer feedback to drive their product and process improvement.

The Solution

Organizations need to implement customer satisfaction measurements that are useful to individual agile teams and scalable for executives. This can be somewhat challenging because the best way to measure customer satisfaction is usually very situationally specific.

Let’s consider how we might measure value delivery. All stakeholders and team members must know the business and economic value of the project and work toward the same goal of maximizing business value flow.

Business units and IT must collaborate to define the value for each initiative, right down to the lowest level at which resourcing decisions are made. For example, there must be an approved business case for every project or program above a certain size—let’s say $10 million in Corporation X. This business case will presumably include some metric such as return on investment (ROI), described in table 1.



ROI                          Value label
Less than or equal to 5%     Very low
6% to 10%                    Low, but worthwhile
11% to 20%                   Medium
21% to 40%                   High
Greater than 40%             Very high

Table 1: A sample project ROI
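Mapping an ROI figure to its coarse value label is a one-line lookup. This sketch uses the band boundaries from table 1; the labels for the bands the table leaves unlabeled are assumptions chosen to fit the ordinal pattern between "Low, but worthwhile" and "Very high":

```python
# Map a project's ROI percentage to the coarse value label used for governance.
# Band boundaries follow table 1; some band labels are assumed.
def roi_value_label(roi_percent: float) -> str:
    if roi_percent <= 5:
        return "Very low"
    if roi_percent <= 10:
        return "Low, but worthwhile"
    if roi_percent <= 20:
        return "Medium"
    if roi_percent <= 40:
        return "High"
    return "Very high"

print(roi_value_label(15))   # "Medium", the band a medium-value ROI falls into
```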

For software development to start a $15 million project with a medium-value ROI, we need to break the project up into epics and stories. It probably does not make sense to do an individual business case for each epic and story, but we need to make sure each epic and story is linked to the master project. In addition, each epic and story should inherit the T-shirt size value label of the master project (in this case, medium).
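The linkage and inheritance described above can be sketched as a pair of small record types. The class and field names here are assumptions for illustration, not part of any tool:

```python
# Sketch: each story links to its master project and inherits the project's
# T-shirt size value label. Names and figures are illustrative.
from dataclasses import dataclass

@dataclass
class Project:
    name: str
    budget_millions: float
    value_label: str   # T-shirt size value from the approved business case

@dataclass
class Story:
    title: str
    project: Project   # link back to the master project

    @property
    def value_label(self) -> str:
        # Stories do not get their own business case; they inherit value.
        return self.project.value_label

portal = Project("Subscriber portal", 15.0, "Medium")
story = Story("Sign-up form", portal)
print(story.value_label)   # "Medium"
```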

When we think about value visualization and projects, it is important to remember that cost is not value any more than price is. So, why do organizations only talk to IT about cost? If organizations can share business value details for their initiatives, they will enable IT to become part of the ROI solution.

Is there a comprehensive and practical way for organizations to communicate business value information for software development projects? We should first acknowledge that even simple ways to communicate and visualize business value to the tactical decision-makers are better than none at all. We are quite happy to start with T-shirt sizes, but we can do so much more with a concept known as the value visualization framework (VVF).

The Value Visualization Framework

The VVF shows a comprehensive set of steps, not all of which need to be implemented from the beginning. It also illustrates ways to implement those steps in a simple manner. The VVF shown in figure 2 is not intended to be prescriptive. Instead, it is designed to address the complaint that although assigning value to software development is worthwhile, it is just too difficult to implement in practice.

Value Visualization Framework 

Figure 2: Value visualization framework

The VVF is a five-step process that runs in parallel with the practice of creating the business case for a project and then breaking the project down into epics and stories or some other form of tasks. Table 2 shows the information needed for each epic and story.




Step 1: Define the units of value delivery (e.g., number of subscribers, hours saved in the process).

Step 2: Define the value of the project in specific units (e.g., 17 new subscribers once deployed).

Step 3: Define the size (e.g., 100 story points).

Step 4: Define the cost of delay of the implementation challenge, including level of complexity, duration, and so on (e.g., $2,000 penalty for a missed deadline).

Step 5: Quantify the economic value once deployed (e.g., $10, $15, and $20 per subscriber at weeks 9, 12, and 15, respectively).

Table 2: Requirements for each of the five steps of the value visualization framework
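The five steps amount to a small record attached to each epic or story. This sketch uses assumed field names; the figures come from the examples in table 2:

```python
# One epic or story annotated with the five VVF steps. Field names are
# illustrative; the example values follow table 2.
from dataclasses import dataclass
from typing import Optional

@dataclass
class VvfEntry:
    value_unit: str                      # step 1: unit of value delivery
    projected_value: float               # step 2: value in those units
    size_story_points: int               # step 3: size
    relative_cost_of_delay: int          # step 4: e.g., modified Fibonacci
    realized_value_per_unit: Optional[float] = None  # step 5: set once deployed

entry = VvfEntry(
    value_unit="new subscribers",
    projected_value=17,
    size_story_points=100,
    relative_cost_of_delay=8,
    # step 5 stays unset until the functionality is actually deployed
)
print(entry.size_story_points)       # 100
print(entry.realized_value_per_unit) # None
```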

The first three steps are extremely important parts of the process and need to be well thought out. Steps four and five may need a little more explanation.

Step four—Define the cost of delay—is of fundamental importance to prioritizing work packets, projects, or stories. Essentially, we should always prioritize projects with the highest cost of delay. Cost of delay has three components: [1]

User/business value (relative value to the customer or business): Do they prefer this over that? What is the revenue impact, and what is the potential penalty or other negative impact?

Time criticality (how user/business value decays over time): Is there a fixed deadline, and will the customer wait for us or move to another solution? What is the current effect on customer satisfaction?

Risk reduction/opportunity enablement (what else this does for our business): Will it reduce the risk of this or future delivery? Is there value in the information we will receive? Will it enable new business opportunities?

Identifying the cost of delay for a particular story is neither intuitively obvious nor easy. There are absolute quantitative approaches that can work when there is a strong quantitative business case or when certain facts are known; for example, a fine will be levied if the software is not operational by December 31. While absolute monetary values are ideal, the opportunities to tie them to software development are few and far between. Instead, I recommend using relative sizing for cost of delay.

This approach can allow cost of delay to be assigned by an informed and representative team with relatively little data. The process is similar to agile story estimation using planning poker. Usually a limited set of numbers (e.g., the Fibonacci-like series popularized by Mike Cohn [2] for use in story points: 0, 0.5, 1, 2, 3, 5, 8, 13, 20, 40, 100, and ?) is used by participants to select the relative cost of delay for each story against the other stories (table 3).


Cost of Delay

Story 1
Story 2
Story 3
Story 4

Table 3: Relative cost of delay using a modified Fibonacci set of numbers

The example in table 3 captures the team’s judgment that story three has a higher cost of delay than the other stories being prioritized, and so, all other things being equal, story three should be done first.
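Turning those relative numbers into a work order is a simple sort. The cost-of-delay values below are invented for illustration (story 3 is given the highest number, matching the table's interpretation):

```python
# Order stories by relative cost of delay, highest first.
# The modified-Fibonacci values below are invented for illustration.
relative_cod = {"story 1": 3, "story 2": 5, "story 3": 13, "story 4": 2}

priority_order = sorted(relative_cod, key=relative_cod.get, reverse=True)
print(priority_order)   # ['story 3', 'story 2', 'story 1', 'story 4']
```

SAFe's weighted shortest job first (WSJF) [1] refines this ordering by dividing each item's cost of delay by its job size, so small, urgent items come first.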

Delaying the assignment of actual market value to functionality until it is deployed, in step five, allows the team to take into account fluctuations in value due to market forces or environment changes. The example in table 4 assumes that the value of subscribers for a website development project increases as the subscriber base grows. As a result, each subscriber is worth $10 in week nine, rising to $15 in week twelve and $20 in week fifteen, based on advertising revenue per subscriber growing with the number of subscribers. Of course, the value could just as easily fall!


                                     Week 9        Week 12       Week 15
New subscribers this billing cycle   23            48            35
Total subscribers to date            23            71            106
Billing cycle revenue                $230          $1,065        $2,120
                                     (=$10 * 23)   (=$15 * 71)   (=$20 * 106)
Fines or payments (fine for missed regulatory deadline)
Billing cycle gross profit
Cost of delay refund (cost of lost subscribers due to unduly delayed stories)
Total gross profit to date

Table 4: Example of quantifying the economic value of stories once deployed

The example in table 4 uses figures from a kanban simulation game. [3] Not surprisingly, the inclusion of a fine for a missed deadline affects profitability. Even without the fine, each unfinished story carries a cost of delay that also impacts profits, so cost of delay should be tallied and tracked even if the numbers are only relative, as in our example in table 3.
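The revenue line of table 4 is just the per-subscriber value multiplied by the subscriber base at each billing cycle, using the figures from the table's own formulas ($10 * 23, $15 * 71, $20 * 106):

```python
# Billing-cycle revenue from table 4: per-subscriber value grows with the
# subscriber base. Figures come from the formulas shown in the table.
value_per_subscriber = {9: 10, 12: 15, 15: 20}   # week -> $ per subscriber
total_subscribers = {9: 23, 12: 71, 15: 106}     # week -> subscribers to date

revenue = {week: value_per_subscriber[week] * total_subscribers[week]
           for week in value_per_subscriber}
print(revenue)   # {9: 230, 12: 1065, 15: 2120}
```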

The granularity of these numbers can be off-putting—how can this level of accuracy be achieved in a practical implementation? The answer is that this level of accuracy is not necessary in a practical implementation, but approximations are worthwhile and very useful if this general structure is used.

Agile Governance Metrics Maturity Evolution

The biggest objection to measuring value delivered by software development is that it is just too difficult. I am often told that software development teams can’t map the details from business cases to their work once details are broken down. This issue becomes even more cumbersome if business cases are not shared with development (or there are no business cases at all). The VVF shows that using value for decision-making can be relatively simple, and the business can be involved in a way that’s simple for them, too. That said, there is no need to attempt all of this at once. Agile governance metrics can and should evolve over time. Figure 3 provides an evolutionary path to full agile governance through the three levels of organizational metrics maturity.

Agile Governance Metrics 

Figure 3: Agile governance metrics maturity evolution

The relative positions and level boundaries of the metrics in figure 3 are certainly debatable and may be different depending on the organization. Also, some of the metrics, such as velocity and size, that are fundamental to team performance do not lend themselves to being aggregated because they are almost always team-specific.

These metrics represent a comprehensive set of measurements for agile governance. The levels are intended to support the increasing maturity of the agile teams, so level one metrics, such as agile adoption (percent of projects using agile) and customer satisfaction, can be measured and used for governance in the early stages of agile project implementations.

Value delivered is certainly a later-stage metric, but it is no less important for agile implementations and agile governance.


Some of these agile governance metrics are consistent with previous waterfall metrics, some are largely redundant, and others are more powerful than those typically used with waterfall.

Just as agile software development must deliver more business value quickly, agile governance metrics must measure that value.


1. Scaled Agile, Inc. “WSJF (Weighted Shortest Job First) Abstract.” SAFe. August 24, 2015.

2. Cohn, Mike. Agile Estimating and Planning. Upper Saddle River, NJ: Prentice Hall. 2005.

3. GetKanban. The getKanban Board Game.
