How Would I Know How Badly We Are Losing Out Through Sub-Optimal Software Development?

Scope of this Report

Every company wants to maximize its profits while meeting its customers' expectations. The primary purpose of software delivery is to provide a product to the customer that will validate a business idea and, ultimately, provide value to the end user. There must be feedback between the customer and the business, and this iterative process must be performed quickly, cheaply, and reliably.1 The real question is: how does an organization know whether its software delivery is performing at an optimal level?

This report considers the following topics:

  • What is sub-optimal software development?
  • How would you know if your performance is sub-optimal?
  • How do we measure for optimal development?

What is Sub-Optimal Software Development?

The purest definition of sub-optimal is “being below an optimal level or standard”. However, in an information technology (IT) organization, the development life cycle is characterized by multiple facets, each having its own ‘optimal level or standard’. Sub-optimal software development delivers less value than possible. Unfortunately, sub-optimal, like beauty, is in the eye of the beholder and can therefore look very different depending on context.

A sub-optimal development life cycle is generally characterized by one or more of the following: cost overruns, poor time to market, excessive and/or critical defects, or low productivity. To any particular organization, one or more of these factors would signal sub-optimal software development.

How Would You Know if Your Performance is Sub-Optimal?

A sub-optimal software delivery process can manifest itself in a variety of ways. The most evident, from an external perspective, is customer satisfaction, whether based upon an emotional response to the success or failure of the delivered software or an objective assessment based on testing or value for money. Mediocre quality or time to delivery will surely provoke a response from the consumer or client, regardless of the reasons for the mediocrity. From an internal perspective, however, at least three scenarios commonly apply.

First, we have the organization that doesn’t recognize that its software delivery process is sub-optimal. An example would be a company enjoying a solid bottom line and reasonable customer satisfaction, or a company leading the current technology wave that sees no immediate decline in its near future. In either case, while these companies may not be ‘sub-optimal’ in the usual sense of the word, they may not be the best that they can be. Since the first step toward improvement is recognizing that there is an improvement to be made, a certain level of awareness must be reached before this type of organization can progress. For these two companies, the “awareness” may simply be the need to gain or keep a competitive advantage.

The second dynamic shows itself, not so subtly, when management doesn’t really want to address the software delivery process at all; they simply want the software delivered when they want it delivered.

How many times have we seen senior management request a software solution with a fixed delivery date already attached, uninterested in any response from the project manager other than what they want to hear? In this type of management environment, the IT organization doesn’t invest much time in its delivery process because it doesn’t recognize the power of good governance as a vehicle for properly managing the project and the customer’s expectations.

The third perspective involves an organization that wants to improve its software delivery capability but is unwilling or unable to invest the resources required to make the necessary improvements.

Experience shows that the contributing lifecycle factors can be numerous: an unrealistic schedule, ambiguous user requirements, the availability of appropriate resources, excessive defects and/or testing, and so on. In this scenario, however, the visible issues may seem overwhelming, or the root causes too obscure, for the organization to arrive at a viable solution.

Regardless of how sub-optimal performance manifests itself, the best way to determine whether it exists, and how pervasive it is, is to execute a benchmark. According to Financial Executive, benchmarking is one of the most widely used and most successful management tools among global senior executives. The purpose of a benchmark is to improve decision making and resource management in order to quantifiably impact a company’s bottom line.

“Benchmarking is the process through which a company measures its products, services, and practices against those companies recognized as leaders in its industry. Benchmarking enables managers to determine what the best practice is, to prioritize opportunities for improvement, to enhance performance relative to customer expectations, and to leapfrog the traditional cycle of change. It also helps managers to understand the most accurate and efficient means of performing an activity, to learn how lower costs are actually achieved, and to take action to improve a company's cost competitiveness.”2

According to C.J. McNair and Kathleen H.J. Leibfried in their book, “Benchmarking: A Tool for Continuous Improvement”, some potential "triggers" for the benchmarking process include:

  • quality programs
  • cost reduction/budget process
  • operations improvement efforts
  • management change
  • new operations/new ventures
  • rethinking existing strategies
  • competitive assaults/crises

Any of the triggers above could certainly be driven by sub-optimal development.

In IT benchmarking there are several options, any of which could be appropriate for addressing sub-optimal performance, depending on the causal analysis desired: horizontal benchmarking (across multiple teams), vertical benchmarking (across certain processes or categories), infrastructure benchmarking (data centers, networks, end-user support), and strategy benchmarking (information technology strategy or business-technology alignment). The most common is the application development and maintenance (ADM) benchmark.

The key factors addressed in an ADM benchmark are cost, time to market, quality, and productivity. While benchmarking will identify good practices and successes, it is most beneficial at highlighting sub-optimal activities: inefficiencies and problems in methodology, staffing, planning, productivity, cost, or capability across project sizes, types, and technologies. In addition, improvement actions are proposed. In many cases, a benchmark can also identify the probability of successful delivery against time, budget, and quality targets, and propose alternative scenarios with a higher likelihood of success. Other measures typically provided are internal rate of return (IRR) or return on investment (ROI) and estimated technical debt, reflected in the ratio of spend on development versus maintenance. Either can be a key indicator of sub-optimal performance.
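
As a rough illustration of those last two indicators, the sketch below computes a development-to-maintenance spend ratio and a simple ROI figure in Python. Every name and number here is hypothetical, not drawn from any actual benchmark.

    # Minimal sketch of two benchmark indicators: the development-to-
    # maintenance spend ratio and a simple ROI figure. All figures are
    # hypothetical, for illustration only.

    def dev_to_maintenance_ratio(dev_spend: float, maintenance_spend: float) -> float:
        """Share of total ADM spend going to new development (0.0 to 1.0)."""
        return dev_spend / (dev_spend + maintenance_spend)

    def simple_roi(value_delivered: float, total_cost: float) -> float:
        """Return on investment expressed as a fraction of cost."""
        return (value_delivered - total_cost) / total_cost

    ratio = dev_to_maintenance_ratio(dev_spend=2_000_000, maintenance_spend=6_000_000)
    roi = simple_roi(value_delivered=10_500_000, total_cost=8_000_000)

    # A low development share suggests maintenance (often technical debt)
    # is consuming the budget.
    print(f"Development share of ADM spend: {ratio:.0%}")  # 25%
    print(f"ROI: {roi:.0%}")                               # 31%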

Benchmarking offers other benefits as well. It:

  • provides independent, measurable verification by an objective third party of a team’s capability to perform against time and cost parameters;
  • signals management's willingness to pursue a philosophy that embraces change in a proactive rather than reactive manner;
  • establishes meaningful goals and performance measures that reflect an external/customer focus, foster innovative thinking, and focus on high-payoff opportunities;
  • creates organizational awareness of competitive disadvantage; and
  • promotes teamwork that is based on competitive need and is driven by concrete data analysis, not intuition or gut feeling.

In summary, benchmarking would:

  • show you how you stack up against others and how you are performing internally;
  • act as a catalyst for change by setting realistic improvement targets; and
  • provide a path for the company toward optimal software practices.

How Do We Measure for Optimal Development?

Ultimately, the purpose of the benchmarking exercise is to enable executive management to improve software development performance, using data-driven decisions to prioritize improvements. While the benchmark provides an evaluation of existing methods and outcomes against industry-standard best practices, it also produces a gap analysis with recommendations for maximizing ROI.
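
A gap analysis of that kind can be reduced to a simple comparison. The sketch below ranks an organization's measured metrics against an assumed industry baseline; the metric names, baseline values, and measured values are all hypothetical.

    # Hypothetical benchmark gap analysis: compare measured metrics
    # against an industry baseline and rank the relative gaps.

    INDUSTRY_BASELINE = {                      # assumed reference points
        "cost_per_function_point": 950.0,      # dollars
        "defects_per_function_point": 0.025,
        "time_to_market_weeks": 12.0,
    }

    MEASURED = {                               # illustrative benchmark results
        "cost_per_function_point": 1400.0,
        "defects_per_function_point": 0.06,
        "time_to_market_weeks": 20.0,
    }

    def gap_analysis(measured, baseline):
        """Return (metric, relative gap) pairs, largest gap first."""
        gaps = {
            name: (measured[name] - baseline[name]) / baseline[name]
            for name in baseline
        }
        return sorted(gaps.items(), key=lambda item: item[1], reverse=True)

    for metric, gap in gap_analysis(MEASURED, INDUSTRY_BASELINE):
        print(f"{metric}: {gap:+.0%} vs. baseline")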

The next step toward software process optimization is a structured road map, which includes the development of strategic goals based upon the benchmark results, along with team analytics that are mapped to and support those goals. From the road map exercise, scorecards and dashboards are developed to provide feedback to management.

The scorecard combines an organization’s overall business strategy with the strategic goals set by the road map. These factors, and the targets associated with them, reconcile to the desired state; alerts can be set up to flag situations in which targets will not be met, enabling proactive action. Generally, the scorecard is used to focus on long-term solutions.
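
A minimal sketch of that alerting logic, assuming hypothetical targets and a simple fixed tolerance, might look like this:

    # Scorecard sketch: strategic targets with alerts raised when a
    # measure drifts outside tolerance. Targets and actuals are made up.

    from dataclasses import dataclass

    @dataclass
    class ScorecardItem:
        name: str
        target: float              # desired state set by the road map
        actual: float              # latest measured value
        higher_is_better: bool = True

        def on_track(self, tolerance: float = 0.10) -> bool:
            """True if the actual is within tolerance of the target, or better."""
            if self.higher_is_better:
                return self.actual >= self.target * (1 - tolerance)
            return self.actual <= self.target * (1 + tolerance)

    scorecard = [
        ScorecardItem("customer_satisfaction", target=4.5, actual=4.4),
        ScorecardItem("release_defect_rate", target=0.02, actual=0.05,
                      higher_is_better=False),
    ]

    for item in scorecard:
        status = "on track" if item.on_track() else "ALERT: target at risk"
        print(f"{item.name}: {status}")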

Formally, a dashboard is the identification and management of metrics within an interactive visual interface that enables continual interaction with, and analysis of, the data. A dashboard suits a shorter-cycle, snapshot approach, providing varying types of visualizations (charts, graphs, maps, and gauges, each with its own metrics) to enable quicker decision making. A dashboard may be the visualization of the scorecard, or a hybrid of the two.
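
As a toy example of that snapshot approach, and assuming the matplotlib library is available, a two-panel dashboard over illustrative data could be sketched as:

    # Toy dashboard snapshot: a trend line beside a current-period bar
    # chart. All data is illustrative.

    import matplotlib.pyplot as plt

    weeks = list(range(1, 9))
    defect_density = [0.08, 0.07, 0.075, 0.06, 0.055, 0.05, 0.052, 0.048]
    throughput_by_team = {"Team A": 32, "Team B": 24, "Team C": 41}

    fig, (trend, bars) = plt.subplots(1, 2, figsize=(9, 3))

    trend.plot(weeks, defect_density, marker="o")
    trend.set_title("Defect density per function point")
    trend.set_xlabel("Week")

    bars.bar(list(throughput_by_team), list(throughput_by_team.values()))
    bars.set_title("Function points delivered this period")

    fig.tight_layout()
    plt.show()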

Conclusion

Benchmarking and metrics modeling are the primary tools for recognizing and addressing sub-optimal development delivery, enabling a company to become or remain competitive. Using the road map approach to measure and present data for management is key to recognizing and supporting optimal development processes.

Sources:

  • 1 Dave Farley on the Rationale for Continuous Delivery, QCon London, March 8, 2015.
  • 2 “Benchmarking,” Reference for Business, http://www.referenceforbusiness.com/management/A-Bud/Benchmarking.html.
  • C.J. McNair and Kathleen H.J. Leibfried, “Benchmarking: A Tool for Continuous Improvement,” 1993.
  • “Measuring Success - Benchmarking as a Tool for Optimizing Software Development Performance,” DCG Software Value (formerly David Consulting Group), 2015.
  • Lyndsay Wise, “A Closer Look at Scorecards and Dashboards,” April 27, 2010.
  • David Herron and Sheila P. Dennis, “Why Can’t We Estimate Better?,” 2013.
"It's frustrating that there are so many failed software projects when I know from personal experience that it's possible to do so much better - and we can help." 
- Mike Harris, DCG President
