Mistrusted metrics, misapplied metrics, multitudinous metrics all result in marginal metrics – measures that are sidelined and not used effectively to drive better performance, better results, better value.
DCG-SMS Managing Director Alan Cameron has distilled the distinct metrics behaviour patterns he has observed into a “Metrics Maturity Model,” showing how the correct use of metrics changes the way people think about managing knowledge work such as software development.
This paper introduces the Value Tetrahedron, a concept developed for the business use of software metrics. It enables a business to understand, from its software metrics, the balance between technical debt and software development performance, where technical debt is the inherent indebtedness an organisation incurs through deviations from technical and architectural standards and through uncleared known and unknown defects. The Value Tetrahedron thus allows an organisation to make informed decisions about the level of technical debt it is willing to carry within its systems and portfolio.
This report identifies evidence that projects are late, over budget or deliver less than promised. It then considers various potential causes of these failures, including culture, process and estimation, and how getting these things right can contribute to success.
In this report we examine what people consider to be excellence in software development, and how they compare the performance of development teams – the process of benchmarking. We will show that concentrating on one aspect of excellence directly influences other possible views, and consider how individual views of excellence may coincide with stages of the business lifecycle. Finally, we will look at how benchmarks tend to be driven to a single conclusion – one that may be optimal for one view of excellence but that generally ignores the other factors.
The use of project metrics is often contentious and depends on the user’s viewpoint. In this report we examine what the so-called “classic” project metrics are, how they might be defined, the consequences of those definitions, and how such measures can be used effectively in assessing the outcome of a software development project.
This report explains the IT Capability Maturity Framework and compares it to other popular industry frameworks.
Benchmarks of software development processes are now commonplace in our industry. Used well, they show how a software development organisation is performing relative to others against certain key metrics, such as productivity, quality and time to market. Unfortunately, abuse of the process is common, devaluing benchmarking and breeding resistance to the use of metrics. In this report, we discuss how benchmarks can be used effectively within a commercial framework.
The average cost to fix a defect at the end of the lifecycle is 400-800 times greater than if it had been addressed earlier. On average, poor requirements practices account for 60 percent of a project’s time and budget. Organisations with well-defined, closely managed, and effectively measured quality activities succeed and continuously improve. Yet, in a recent survey, 77 percent of managers reported that bad decisions had been made due to a lack of accurate information.
This presentation outlines the TMMi model, the de facto international standard for assessing and improving test maturity, which draws on independent best practices from more than 14 quality and test models. It explores how TMMi can be used in conjunction with the GQM model to ensure that upper management is provided with the information it needs to make informed business decisions.