Tuesday, 24 September 2013

Blaming the Tester

 

It has been my unfortunate experience more than once to have to defend my testing approach to customers. On each occasion this has been deemed necessary in the light of an issue that the customer has encountered in the production use of a piece of software that I have been responsible for testing. I'm open to admitting when an issue that should have been detected was not. What has been particularly frustrating for me in these situations is when the presence of the issue in question, or at least the risk of it, had already been detected and raised by the testing team...

The Dreaded Document

I have a document. I'm not proud of it. It describes details of the rigour of our testing approach in terms that our customers are comfortable with. It talks about the number of tests we have in our regression packs, how many test data sets we have and how often these are run. The reason that the document exists is that, on the rare occasion that a customer encounters a significant problem with our software, a stock reaction seems to be to question our testing. My team and I created this document in discussion with the product management team as a means to explain and justify our testing approach, should this situation arise.

The really interesting element in exchanges of this nature is that no customer has ever questioned any other aspects of our development approach, irrespective of how much impact they may have on the overall software quality.

  • They do not question our coding standards
  • They do not question our requirements gathering and validation techniques
  • They do not question our levels of accepted risk in the business
  • They do not question our list of known issues or backlog items
  • They do not question the understood testing limits of the system

Instead, they question the testing.

This response is not always limited to external customers. In previous roles I've even had people from other internal departments questioning how bugs had made their way into the released software. I've sat in review meetings listening to how individuals '... thought we had tested this software' and how they wanted to find out how an issue 'got through testing'. Luckily, in my current company this has not been the case; however, I have had to face similar questions from external customers, hence the document.

A sacrificial anode

The behaviour of the business when faced with questions from customers over testing has typically been to defend the testing approach. Whilst this is reassuring and a good vote of confidence in our work, it is also very interesting. It seems that there is a preference for maintaining the focus on testing, rather than admitting that there could be other areas of the business at fault. Product owners would apparently rather admit a failure in testing than establish a root cause elsewhere. Whilst frustrating from a testing perspective, I think that on closer examination there are explainable, if not good, reasons for this reluctance.

    • The perception of testing in the industry - Whilst increasing numbers of testers are enjoying more integrated roles within development teams, the most commonly encountered perception of testing within large organisations is still of a separate testing function, which is seen as less critical to software development than product management, analysis and programming. As a consequence, I believe that it is deemed more acceptable to admit a failure in testing than in other functions of the development process which are seen as more fundamental. A reasonable conclusion, then, is that if we don't want testers to receive the blame for bugs in the products, we need to integrate more closely with the development process. See here for a series of posts I wrote last year on more reasons why this is a good idea.
    • Reluctance to admit own mistakes - Often the individuals asked to explain the presence of an issue that was identified by the testing were the ones responsible for the decision not to follow up on that issue. In defending their own position it is easy to use a mistake in testing as a 'sacrificial anode' to draw attention away from risk decisions that they have made. This is not a purely selfish approach. Customer perception is likely to be more heavily impacted by an exposed problem in the decision making process than by one in the testing, as a result of the phenomenon described in the previous point. It therefore makes sense to sacrifice some confidence in testing rather than admit that a problem arose due to the conscious taking of a risk.
    • "Last one to see the victim" effect - A principle in murder investigation is that the last person to see the victim alive is the likeliest culprit. The same phenomenon applies to testing. We're typically the last people to work on the software prior to release, and therefore the first function to blame when things go wrong. This is understandable and something we're probably always going to have to live with, however again the more integrated the testing and programming functions are into a unified development team, the less likely we are to see testing as the ones who shut the door on the software on its way out.

Our own worst enemy

Given the number of testers that I interact with and follow through various channels, I get a very high level of exposure to any public problems with IT systems. It seems that testers love a good bug in other people's software. What I find rather disappointing is that, when testers choose to share news of a public IT failure, they will often bemoan the lack of appropriate testing that would have found the issue. I'm sure we all fall into this mindset; I know that I do. Whenever I perceive a problem with a software system, either first hand or via news reports, I convince myself that it would never have happened in a system that I tested. This is almost certainly not the case, and it demonstrates a really unhealthy attitude.

By adopting this stance all we are doing is reinforcing the idea that it is the responsibility of the tester to find all of the bugs in the software. How do we know that the organisation in question hasn't employed capable testers who fully apprised the managers of the risks of such issues, and the decision was made to ship anyway? Or that the testers recommended that the area be tested and a budgetary constraint prevented them from performing that testing? Or simply that the problem in question was very hard to find and even excellent testing failed to uncover it? We are quick to contradict the managers who have unrealistic expectations of perfect software, citing the infinite combinations of functions in even the simplest systems, yet we seem to have the lowest tolerance of failure in systems that are not our own.

Change begins at home, and if we're to change the 'blame the tester' game then we need to start within our own community. Next time you see news of a data loss or security breach, don't jump to blaming the thoroughness, or even the absence, of testing by that organisation. Instead, question the development process as a whole, including all relevant functions and decision making processes. Maybe if we start to do this then others will follow, and the first response to issues won't be to blame the tester.

 

image: http://www.flickr.com/photos/cyberslayer/2535502341
