Tom Cagley: The Only TMMi Accredited Assessor in the U.S.


We're happy to announce that we officially have the only Test Maturity Model integration (TMMi) Accredited Assessor in the United States, Tom Cagley, our Vice President of Consulting.

In case you haven't noticed, we're passionate about the TMMi. There is no doubt that it is one of the most effective ways to improve software testing processes, leading to improved software quality and reduced risk. What this news means for you is that we're here and available to help you with a TMMi assessment - or even just to answer your questions.

Only an accredited assessor can perform a Test Maturity Model integration assessment. Assessments include a Gap Analysis to help evaluate how an organization functions against the model, identifying strengths, weaknesses and areas for improvement. To become a TMMi Accredited Assessor, a person must be a certified tester, take the TMMi Foundation training course and pass the examination. 

Tom has actually been an assessor for quite some time, and he's helped a number of companies here in the U.S. utilize the TMMi framework (you can read about one engagement here).

The TMMi framework was developed by the TMMi Foundation, a non-profit organization dedicated to improving test processes and practices. It is the de facto international standard to assess and improve test maturity, featuring independent best practices from more than 14 quality and test models.

DCG Software Value is a TMMi Accredited Supplier. More information about our TMMi services is available here.


Test Maturity Model integration (TMMi): Definition and History

“All models are wrong, but some are useful.” – George E. P. Box

Testing is a mechanism for affecting product quality. The definition of quality varies, ranging from the precise (Crosby: “Conformance to requirements”) to the metaphysical (Juran: “Quality is an attitude or state of mind”). Without a standard model of testing that codifies a definition, it is difficult to determine whether testing is affecting quality in a positive manner. The Test Maturity Model integration (TMMi®) is an independent test maturity model. A model provides a framework of the activities and processes that need to be addressed, rather than merely laying out a set of milestones or events that must be followed explicitly.

The TMMi is a reference model representing an abstract framework of interlinked concepts based on expert opinions. The Wikipedia definition suggests that a reference model can be used as a communication vehicle for ideas and concepts among the members of the model’s community. The use of a model as a tool to define the boundaries of a community also amplifies its usefulness as a communication tool, as it defines the language the community uses to describe itself. Thus, the TMMi is a testing reference model, for the testing community, defining the boundaries of testing, the language of testing and a path for process improvement and assessment.

Many developers (and development managers) think of testing as a group of activities that occur at the end of coding. This flies in the face of software engineering practices that have been in use since the 1980s and the Agile tenet of integrating testing into the entire development process. The TMMi model explicitly details a framework in which testing is not an event or gate that has to be hurdled, but rather a set of activities that stretch across the development lifecycle (waterfall, iterative or Agile). The TMMi model extends the boundary of testing to the entire development process.

The model lays out a set of five maturity levels and sixteen process areas, ranging from test environment to defect prevention. The model has a similar feel to the classic CMMI model. The TMMi, through its framework of maturity levels, process areas, practices and sub-practices, lays out best practices that should be considered when developing an organization's approach to testing. Like other reference models, the TMMi provides a framework but does not prescribe how any project or organization should perform any of the practices or sub-practices. By not prescribing how practices are to be implemented, the TMMi can be used in any organization that includes testing. A framework that is neutral to lean, Agile or waterfall practices is a tool that can be molded by managers and practitioners to make testing more efficient and effective in almost any organization.
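To make the shape of the framework concrete, here is a minimal sketch in Python of the five maturity levels and the sixteen process areas grouped beneath them. The level and process-area names follow the published TMMi model; the practices and sub-practices beneath each process area are omitted for brevity, and the data structure itself is only an illustration, not part of the model.

# Sketch of the TMMi hierarchy: maturity levels contain process areas.
# Practices and sub-practices under each process area are omitted here.
TMMI_LEVELS = {
    1: ("Initial", []),
    2: ("Managed", [
        "Test Policy and Strategy",
        "Test Planning",
        "Test Monitoring and Control",
        "Test Design and Execution",
        "Test Environment",
    ]),
    3: ("Defined", [
        "Test Organization",
        "Test Training Program",
        "Test Lifecycle and Integration",
        "Non-Functional Testing",
        "Peer Reviews",
    ]),
    4: ("Measured", [
        "Test Measurement",
        "Product Quality Evaluation",
        "Advanced Reviews",
    ]),
    5: ("Optimization", [
        "Defect Prevention",
        "Quality Control",
        "Test Process Optimization",
    ]),
}

# The sixteen process areas range from Test Environment (level 2)
# to Defect Prevention (level 5).
assert sum(len(areas) for _, areas in TMMI_LEVELS.values()) == 16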

DCG is a TMMi Accredited Supplier, which means that we can walk you through the model and address all of your questions and concerns, as well as assist with TMMi assessments and appraisals. If you're interested in learning more about how the TMMi works, read this case study on how DCG helped one organization to apply the TMMi and improve its testing processes.


Tom Cagley
VP of Consulting, TMMi Accredited Assessor


Why Do We Never Have Time to Test?

(You can download this report here.)

Scope of this Report

This paper discusses the time constraints of testing, their impact on several testing stakeholders, and possible ways to mitigate this problem. It includes:

  • Statistics on testing length.
  • Who are some of the stakeholders for software testing?
  • What kinds of delays do testers frequently face?
  • Making more time to test.

Testing Length

The following estimate of average testing length is drawn from The Economics of Software Quality, by Capers Jones and Olivier Bonsignour, and is based on the authors’ clients’ average test cases per function point and time per case. The calculation covers the types of tests used by 70% or more of projects and provides the following average for a 100 function point project. It assumes a thorough test of the system, using an appropriate number of test cases for the size of the project.

[Table: Testing Times]

Therefore, between one and two months might be spent testing the sample project. Note that a 100 FP project is relatively small. Projects of ten thousand function points or more, especially for new software, are not uncommon. Testing these larger projects to the same degree could take a year or more, assuming testing time increased in a linear fashion; in practice, testing time increases faster than linearly as the project grows. Completely testing all combinations of possible processes for very large projects would take far more time than is available. For practical purposes, exhaustive testing is impossible for such projects.
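To show the kind of arithmetic behind a figure like this, here is a back-of-the-envelope sketch in Python. The inputs (test cases per function point, hours per case, hours per month) are illustrative assumptions chosen to land in the range above; they are not Jones and Bonsignour's published numbers.

# Rough testing-time estimate for a small project. All inputs are
# illustrative assumptions, not figures from the book.
FUNCTION_POINTS = 100
TEST_TYPES = 4                 # the four most common testing types
CASES_PER_FP_PER_TYPE = 1.25   # assumed average test cases per FP per type
HOURS_PER_CASE = 0.5           # assumed design + execution + reporting time
HOURS_PER_MONTH = 160          # roughly one person-month

cases = FUNCTION_POINTS * TEST_TYPES * CASES_PER_FP_PER_TYPE  # 500 cases
months = cases * HOURS_PER_CASE / HOURS_PER_MONTH             # ~1.6 months
print(f"{cases:.0f} test cases, roughly {months:.1f} person-months of testing")

Scaling these same assumptions linearly to a 10,000 FP project would give on the order of 160 person-months, and, as noted above, real testing time grows faster than linearly.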

The above calculation uses only a tenth of the total number of testing types. For example, integration and system testing were not included in the list of the four most common testing types but are still quite common (used in 60% and 40% of projects, respectively). The more testing methods applied, the better the final software quality, but each additional method requires still more time. Again, the potential testing time starts to exceed practical reasonableness. As a result, risk analysis and other measures must be applied to determine how much testing is enough.

Stakeholders

Any software project will have a number of stakeholders, each with potentially very different needs that can actively conflict with one another. Several stakeholders, with example goals, are presented here.

Testers

A goal of a software tester is to ensure that the application being tested is as free of defects as possible. As noted above, for any large project this is at best a lengthy and expensive goal, at worst an unobtainable one. At a certain point, further testing will not be practical. Some testers may naturally want to be more thorough than is necessary while others may fear reprisal if a defect is missed. Either way, the testing can easily become overlong because of these goals.

Developers

Developers want the project to be done and out the door so they can move on to the next one or, in Agile, to finish the current sprint. Testing, or having to return to the program to remove defects, delays this goal. As with the testers, this can become an even greater problem for developers if penalties are imposed for defects in the code.

Customers

Customers want the application to be perfect out of the box for a low cost. The software must work without defect on all hardware combinations (especially for home users) and fulfill all requirements. Most customers realize that this is not going to happen, but it remains the expectation. Software that falls short of the ideal will be well known in the community very quickly. This can put pressure on the business, which puts pressure on the manager, and finally on the development and testing teams.

Managers

Like the customer, the manager wants the program to be of good quality and low cost, and most likely also wants a short development time. Improving any one of these goals (reducing cost, increasing quality, reducing time) requires a sacrifice in one or both of the other two. To decrease time and cost, testing may be cut or reduced. The process of solving a problem (development) is often, from the manager’s point of view, more clearly important than the process of preventing one (testing). Ultimately, management must decide how much time can be expended on any part of a project, and testing is often sacrificed in favor of the more visible development.

Delays, Delays

There is always the potential for any work to run into delays. Unforeseen problems with the software, changing personnel, and damaged equipment are all possibilities. There are too many to list here, but two will be presented: human factors and changing requirements.

Human Factors

However well-planned the software testing portion of a project might be, there is always the possibility that attitudes and habits of the development team can get in the way of completion. Distractions, attitudes towards testing, and politics can all cause delays.

Software teams, clearly, must work on computers much of their time, and computers are rife with potential distractions: social media, games, e-mail, and so on. These pursuits can sometimes improve efficiency in various ways, but they are still a lure to waste more time than is appropriate.

A number of testing types, including Subroutine and Unit testing (included in the time estimate above), are often most appropriately performed by the developers. Additionally, pre-test defect removal will also involve the developers. Sometimes, developers do not believe that their time is properly spent on such activities. Further, even if the developers do relatively little testing themselves, a separate group of testers sending back work due to defects, especially if this happens multiple times, can cause friction and further delays.

Changing Requirements

Most projects will have requirements evolve over time. Except in very small applications, as the work progresses, new features will be desired, a better view of current features will emerge, and some requirements may actually be removed as unnecessary. Priorities will also shift, even if the requirements remain relatively stable. This increases both development time and testing time as adaptations are made, but in this case testing is more likely to be sacrificed than development.

Making More Time for Testing

Defect Prevention

Traditionally, testing has been done after all development tasks are finished but before deployment. This means that for a small project an extra two months (according to the earlier testing time estimate) would be added to the end of the project, increasing the likelihood that the testing will be cut short. Finding defects throughout the development process (as in Figure 1) may increase the efficiency of removing defects, making the testing needed after the coding phase shorter.

[Figure 1: Finding Defects]

Test Driven Development

Test-driven development (TDD) comes from Agile practices. It is a discipline in which development does not start with code, but with the development of a test case, which in turn is based on a requirement. The code is then written to pass the test case. If the code does not initially pass, it is returned to the backlog and attempted again until it succeeds. This means that testing is spread throughout the development process and the tests are ready at hand. However, studies of the technique show inconsistent benefit relative to cost: the process often costs more than other testing methods.
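As a minimal sketch of the cycle described above (in Python, using the standard unittest module; the discount requirement and all names are hypothetical), the test below would be written and run first, failing until the function is implemented to satisfy it:

import unittest

# In TDD the test is written first, from the requirement "orders over
# $100 receive a 10% discount" (hypothetical), and it fails until the
# function below is written to make it pass.
def discounted_total(amount):
    return amount * 0.9 if amount > 100 else amount

class DiscountTest(unittest.TestCase):
    def test_large_order_gets_discount(self):
        self.assertAlmostEqual(discounted_total(200), 180.0)

    def test_small_order_unchanged(self):
        self.assertAlmostEqual(discounted_total(50), 50.0)

if __name__ == "__main__":
    unittest.main()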

Test Maturity Model integration

TMMi is similar to Capability Maturity Model Integration (CMMI). It is a set of criteria used to determine the fitness of an organization’s testing processes. It does not dictate how the testing is done; rather, it gives guidance in making improvements. It has five levels: Initial, Managed, Defined, Measured, and Optimization. Each level except Initial has a set of requirements an organization must meet to be certified at it. The model is growing in popularity, as it gives a map for continuous improvement of testing processes. While not all organizations will need to reach the highest TMMi level, even the lower levels can lend insight into testing. Indeed, under TMMi, it is perfectly acceptable to be operating at several different levels simultaneously if those levels reflect the goals of the organization.

Conclusions

So, why is there never enough time for testing? Part of this is perception. All stakeholders want, to greater or lesser extents, an error-free application. Unfortunately, the exhaustive testing that this would require takes too much time to be possible in all but the smallest projects. As long as the goal is finding and fixing all defects, there can never be enough time to test. Proper risk assessment and prioritization are necessary before testing to reduce this problem.


Visit DCG at the QUEST Conference!


We're headed back again this year to the QUEST conference - this time in Atlanta! Not familiar? QUEST is the best source for new technologies and proven methods for Quality Engineered Software and Testing. Thought leaders, evangelists, innovative practitioners, and IT professionals from across North America gather together for a week of events.

From April 20-24, you'll be able to find DCG representatives all around the conference.

The easiest place to find us is at booth #12 in the exhibition hall! We urge you to stop by to see what we're giving away at our booth and to hear a little bit about how you can use the TMMi framework to give you more time and more money for other things (including a cup of coffee!).

We'll also be giving an EXPO Talk, "A Cross Section of TMMi Survey Results 2014." So if you're interested in learning more about how other companies have progressed through TMMi Maturity Level Three, don't miss out! We'll provide a profile of the level of capability found in a typical testing organization, the areas of the model that tend to give respondents the most trouble, and a general pattern of progression.

Finally, don't miss Tom Cagley, VP of Consulting & TMMi Accredited Assessor, when he gives his presentation on April 22nd at 11:00am. "Scaling Agile Testing with TMMi" will discuss how to effectively tailor and use the TMMi model in plan-based and Agile environments and how to measure the results.

Have we piqued your interest about TMMi? Take this high-level evaluation of how your organization's testing compares to the TMMi. Tom Cagley will personally follow up with your results - and you can always stop by our booth at QUEST to chat some more!

See you in Atlanta!


Exploratory Testing and Technical Debt

Software testing is a costly – but important – activity, as any software developer knows. While it’s a necessity, there are a number of ways that organizations can carry out testing: automated testing, which requires significant investment but reduces the effort required, and manual testing, which is labor intensive but the more common approach. Both techniques can be effective, but each has its own set of challenges, including technical debt.

Exploratory testing (ET) is a type of manual testing. The IT Pro article, “Exploratory Testing as a Source of Technical Debt,” examined this technique for a better understanding of the technical debt it creates.

Using this technique, testers run tests based on their intuition and knowledge of the system, which requires less test documentation and planning – it’s a flexible approach that is most useful when only limited time is available for testing. As a testing technique, ET is known to be quite cost effective – at least to begin with – but it can also increase the amount of work that has to be redone, creating additional costs.

The diagram below, recreated from IT Pro, illustrates how ET results in technical debt, and the type of technical debt created.

Source: IT Pro, "Exploratory Testing as a Source of Technical Debt"

Technical debt is not necessarily bad, nor is it entirely avoidable. The article notes three methods for dealing with technical debt, as proposed by Frank Buchman:

  • Paying the interest: Deal with the debt and the additional costs associated with it, on a regular basis.
  • Repaying the debt: Rework the system to eliminate the source of technical debt.
  • Converting the debt: Replace the source of technical debt with another solution that results in less debt.

Regardless, management will need a strategy for dealing with technical debt, which may mean adjusting testing techniques. With both structured techniques and unstructured techniques like ET available, it is up to management to choose the solution that best fits the project at hand.

For more information about technical debt, check out our Trusted Advisor report, What is Technical Debt and What Should We Do About It?

Of course, if you’re interested in new ways to improve your testing processes, you may be interested in the Test Maturity Model integration (TMMi) framework. The framework focuses on defect prevention, not detection, to help you find and fix errors more effectively.

Read the IT Pro article: Exploratory Testing as a Source of Technical Debt

 

Mike Harris
DCG President


"It's frustrating that there are so many failed software projects when I know from personal experience that it's possible to do so much better - and we can help." 
- Mike Harris, DCG Owner
