Friday 30 July 2010

The most important bug

At the recent UKTMF quarterly one of the session leaders posed the question "What is the most important piece of information that you want to see in a testing project?".
Number of Bugs was a statistic suggested by a few, with more than one respondent in the room proposing bug cut-offs such as:-

When we hit fewer than 20 P1 bugs

When there are no P1 bugs and fewer than 20 P2s

It appears that the approach taken in the organisations that these people worked in was to assess the release fitness of the product based on the number of known bugs of a given priority remaining in the release software, and to consider it of acceptable quality once the count fell below that threshold.
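To make the pattern concrete, here is a minimal, purely illustrative sketch in Python of such a count-based release gate. The Bug structure, field names and thresholds are my own invention for this example and are not drawn from any particular tracking system:

    # Illustrative only: a release gate driven purely by counts of open bugs
    # per priority, mirroring a "no P1s and fewer than 20 P2s" style cut-off.
    from dataclasses import dataclass

    @dataclass
    class Bug:
        id: str
        priority: int  # 1 = most urgent

    def ready_to_release(open_bugs, max_p1=0, max_p2=19):
        # Count open bugs at each priority and compare against the thresholds.
        p1_count = sum(1 for b in open_bugs if b.priority == 1)
        p2_count = sum(1 for b in open_bugs if b.priority == 2)
        return p1_count <= max_p1 and p2_count <= max_p2

The rest of this post argues that a yes/no answer from a check like this is a poor substitute for actually examining the bugs behind the counts.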

This is not an approach that I choose or recommend, for a number of reasons that I thought I'd outline here:-

1. The cutoff point is arbitrary



I would be pretty confident in asserting that the individual who decided that fewer than 20 P1 bugs constituted acceptable release quality had no empirical evidence to show that this figure provides significantly greater customer quality than 15, or 21, or 25. Despite this they are placing a huge amount of emphasis on the boundary bug that needs to be fixed in order to hit the release target. This may result in a disproportionate amount of effort going into resolving one or two issues to reach the target on one hand, or, on the flipside, a tendency to ease up on fixing the remaining issues for a release once the magic number has been achieved.

2. Assumption that bugs of the same priority are equal



Most folks have 4 or 5 bug priorities in their tracking systems; the more advanced may separate priority from severity. In either case we are grouping a wide variety of issues under very broad priority categories. Can we honestly say that all P2s are equal? What happens if we are in the last stages of a release project and we have 21 P1 bugs outstanding? If it is possible to achieve release quality by resolving just one of these issues then our decision over which one to resolve is simple: we target the simplest to fix, with the least retesting required and the lowest risk of regression. From a customer quality perspective, however, what we should be doing is concentrating on resolving the issues that sit in the core functionality on the critical paths of our application, as these are the issues that pose the greater risk to our customers.
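As a small illustration of this point (again with invented bug data), consider two bugs that carry the same priority but very different customer risk. A gate that only counts priorities cannot tell them apart, so the cheapest fix wins rather than the riskiest bug:

    # Illustrative only: two "P1" bugs that a count-based gate treats identically.
    from dataclasses import dataclass

    @dataclass
    class Bug:
        id: str
        priority: int
        area: str
        on_critical_path: bool

    bugs = [
        Bug("BUG-101", 1, "legacy report export", on_critical_path=False),
        Bug("BUG-102", 1, "order checkout", on_critical_path=True),
    ]

    # The gate sees only "2 open P1s"; fixing either bug improves the count by
    # exactly one, so the incentive is to fix whichever is easier, not the
    # issue on the critical path that poses the greater risk to customers.
    print(sum(1 for b in bugs if b.priority == 1))  # 2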

3. Removes responsibility for quality decisions



Imposing a measure of acceptable quality relieves the decision makers of the responsibility of actually looking at the issues in the system and using their judgement to assess whether or not to release the software. In reality quality is a very difficult thing to define, let alone measure, and the tester's role should be that of an information provider, feeding as much information into the decision making process as possible. By reducing this information to a set of bug priorities you are essentially placing the decision on release quality in the hands of the tester who assigns the priorities. The decision about whether software is fit for release merits more management involvement than reviewing a four column bar chart.

4. Bug severities are subjective and movable



This is a double-edged sword, as it does allow some human judgement to bring flexibility into a very black-and-white process. On the downside, however, it introduces the temptation to re-prioritise issues in order to bring the product in under the quality radar. When we use bug summary statistics rather than the bugs themselves as our quality measure, we introduce the possibility of hiding issues through re-prioritisation.

Summary



Bug priorities are there as a simple guideline, not an absolute measure. We should treat each issue on its own merit rather than masking details behind statistics. A review of individual bugs gives a far greater understanding of the current status of the system than a summary of bug statistics ever could, and it will lead to a far more informed decision making process at release time, particularly if the individuals involved have a good understanding of their customers and the qualities that matter to them.

Copyright (c) Adam Knight 2009-2010
