Wednesday, 9 April 2014

Knuckling Down

Having attended this year's TestBash conference in Brighton I came away from it with mixed feelings. It was fantastic to see so many testers who were clearly passionate about testing, and the atmosphere was vibrant. On the other hand I felt that there wasn't quite as much variety in the talks as the previous year, in terms of both the gender of presenters and the subject matter, the latter particularly noticeable in the middle of the day. At least three of the speakers opened by admitting to a history of ISTQB before discovering the more enlightened path of a context driven approach. (As anyone who has read this post will appreciate I am not a fan of ISTQB, however this repetition, combined with the intensity of multiple half hour talks with no breaks and no natural light, evoked a mild feeling of being in a recruitment meeting for a religious cult.)

One of the messages that came up in more than one of the talks during the day, most strongly in Huib Schoots' talk on Context Driven in Agile, was the need to stick to the principle of refusing to do bad work. The consequential suggestion was that a tester should leave any position where they are asked to compromise this principle. As anyone who has read my previous post will be well aware, I am a strong believer in taking an approach based on integrity, and I have a lot of sympathy with what was being presented here. That said, I think that we need to be very careful as an industry about the message that we are portraying, as the one interpreted could be very different from the one intended.

Leaving so soon?

On the face of it, the idea of leaving a project on the grounds of principle rather than doing what you perceive to be poor testing is a sentiment that I agree with. If you are working on a project where limitations are placed on your ability to do good testing, with no avenue to circumvent them, then I can totally understand that moving on should be considered.

What was missing for me in the sentiments presented at TestBash was any suggestion that testers should attempt to tackle the challenges faced on a poor or misguided project before leaving. In the examples I noted from the day there was no suggestion of any effort to resolve the situation, or to alter the approach being taken; no implication of leaving only 'if all else fails'. As an employer of testers, though, I'd like to see an attitude more around attempting to tackle a bad situation head on rather than looking at moving on as the only option. Of course we should consider moving if a situation is untenable, but I'd like to think that this decision would be made only after knuckling down and putting your best effort in to make the best of a bad lot.

Inform Decisions

In his talk, Huib highlighted the need for testers to restrict themselves to being providers of information to business stakeholders to inform decisions, not to be part of the decision making process. This is a common sentiment in testing circles and one that I generally agree with (although I do feel that in an empowered team, testers can take on some of the decision making responsibilities in collaboration with the other roles).

I felt that this presented an interesting juxtaposition for testers. On one hand we are saying that we should restrict our activities to providing information to inform decisions. At the same time we are refusing to perform bad work, when a major factor in the quality of the testing work that is possible will be the decisions that are made. Whilst I can see that these principles are not contradictory, I think that there is room for confusion in their interpretation that could lead testers to the conclusion that moving on is the only option when decisions have been made which present challenges for testing.

The root of that ambiguity lies in the phrase 'bad work' and the possible interpretations of what could constitute bad work for testers. Many testing projects have constraints or limitations in place, restricting testing activities in ways that some might see as 'bad work', but I feel that it is still possible to do good testing in these situations. In fact it is on the more challenging and time constrained projects that knuckling down to perform excellent testing to quickly expose information is most valuable.

  • Not having sufficient time to test?
  • One situation which causes concern for testers is when they feel that there is insufficient time for testing. This is typically the result of a business decision to impose time constraints, sometimes based on rigid deadlines, often on arbitrary targets, but outside of the control or influence of the tester. As long as they clarify the risks implied by the limited information that they will be able to uncover, then a skilled tester can still add a great deal of value by uncovering as much information as possible in the time available. These presentation slides from STARWest 2011 by Lynn McKee and Nancy Kelln provide an excellent summary of test estimation issues and predefined project constraints, and also a superb set of references to further reading, including a lot of writings on the subject by Michael Bolton.

  • High risk approaches
  • Testers provide information to inform decisions. If those decisions involve unnecessary risks this can have dire consequences for the project, but it does not mean that the tester has done bad work. I've certainly been involved in projects where I've not been in agreement with the risk decisions that were made regarding the product in question. As I've discussed in this previous post, the levels of risk adoption in a business are unlikely to change; however, there are various approaches that I've adopted in situations where I felt that the decisions had a negative impact on the testing and the project as a whole. These primarily involved exposing and presenting risk information to the business that may not have been available previously. I'll expand on some examples of these below.

  • Inappropriate test approaches
  • Some projects may be characterised by the business decision makers dictating the testing approach. This is a more challenging situation, and one that I've been fortunate to avoid for most of my career. I have been in the situation where I was being told what to test, which involved focussing on the obvious user interface features at the expense of investigating more fundamental server stability issues. I've also been on projects where the development manager was advocating a lightweight manual testing approach where the use of appropriate automation tools would have dramatically improved the testing effort. In these situations I've had to take steps to address the situation, and this has not always been easy.

One of the most difficult skills I've found to learn as a tester is the ability to justify your approach and your reasons for taking it, and being able to argue your case to someone else who has a misguided perspective on what testing does or should involve. Having these discussions, and changing people's minds, is a big part of what good testing is.

Avoiding Bad Work

I certainly don't believe in knowingly doing bad work, but I do believe in putting in every effort to improve a situation. Having worked for a long time as a permanent employee in small, reactive technology companies I've been involved in a variety of projects, some better conceived and delivered than others. In situations where the testing risked being compromised I have had to dig deep on more than one occasion to try to recover a bad situation and deliver a successful result. Here are some of the approaches that I've found useful when trying to change the direction of a project towards better testing.

  • Pointing out the Risks
  • I've found a risk map to be a useful tool in highlighting the problems faced on a testing project. For example, with the utility that I wrote about in this post on Testability, where the testability was limited, I explained the situation to the management with the use of a testability map that broke down the types of exploratory and automated testing that I felt were appropriate on that product, with icons highlighting areas which were inaccessible to testing in the time available. This formed the basis of a discussion around the approach to be taken and an agreement on the improvements needed to progress the testing work.

  • Stories on the backlog
  • In an agile approach we tackle work in the team through the scheduling and prioritisation of user stories. If we feel that the testing of a certain feature is inadequate then an effective approach is to add stories to the backlog targeting the customer value that comes from the extra confidence that the testing can provide. This places into the hands of the business the decision to prioritise that testing work based on the value that is derived from it.

  • Using groups
  • A group discussion or exploratory testing session can help build confidence and cohesion in the testing group before raising concerns with management. In situations where the testing team and I have had concerns over the suitability of a product for customers, I have arranged group exploratory testing sessions in the form of tours from the perspective of the relevant users. Discussing the value of the product openly and critically helped to build a group understanding and gave us confidence in raising our concerns with the product manager at the time.

  • Just doing it
  • If the approach that is imposed on a testing effort is invalid for the context, then I'm a strong believer in taking the time and using a dose of professional flexibility to do what I feel is right anyway. Sometimes it is necessary to go the extra mile to do enough to demonstrate the value of a different approach. I was once being pressured to take a very limited testing approach on an API integration that was highly volatile and I felt that a level of automation around the interface was appropriate. I freed up some time during the working day to create a simple harness and then spent my evenings learning Java to develop a more extensive test suite. This was invaluable in helping to highlight basic regressions in further deliveries of the integration and also as a tool in facilitating further exploratory testing.
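A first cut of that kind of harness need not be elaborate. As a rough illustration only (the endpoint, field names and expected values here are hypothetical, not those of the actual integration), a regression check over a volatile API can start as nothing more than comparing each delivery's response fields against a baseline captured from a delivery known to be good:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// A minimal sketch of a lightweight API regression harness.
// The field names and values below are invented for illustration --
// the point is the shape: record a known-good baseline response,
// then flag any field whose value drifts in a later delivery.
public class ApiRegressionCheck {

    // Return the names of fields whose values differ from the baseline.
    static List<String> findRegressions(Map<String, Object> latest,
                                        Map<String, Object> baseline) {
        List<String> drifted = new ArrayList<>();
        for (Map.Entry<String, Object> field : baseline.entrySet()) {
            if (!field.getValue().equals(latest.get(field.getKey()))) {
                drifted.add(field.getKey());
            }
        }
        return drifted;
    }

    public static void main(String[] args) {
        // Baseline captured from a delivery we knew to be good.
        Map<String, Object> baseline = Map.of("status", "ok", "schemaVersion", 2);
        // Parsed response from the latest delivery of the integration.
        Map<String, Object> latest = Map.of("status", "error", "schemaVersion", 2);
        System.out.println("Drifted fields: " + findRegressions(latest, baseline));
    }
}
```

Even something this crude quickly surfaces the basic regressions between deliveries, and the list of drifted fields then becomes a useful starting point for further exploratory testing.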

A Ray Of Sunshine

This is the kind of thing that I wanted to see more of in the talks at TestBash - practical examples of how people have avoided bad testing by tackling difficult testing challenges. Thank goodness then for Chris George from RedGate in the penultimate talk of the day. Chris recounted a story of himself and a developer getting stuck into a test automation problem that had been estimated as needing 6 months of work and, through a can-do attitude and a healthy dose of intelligent hard work, achieving a successful result in a short time.

The testing community is going through a fantastic period at present with more and more testers passionately promoting testing as a skilled profession. With the best will in the world, not every project that we work on or role that we step into is going to be initiated from an enlightened position of appreciating testing as a highly skilled role. Working on improving the testing effort on poorly conceived projects could be an opportunity to demonstrate the value of testing and the information that we can provide to a new audience. If testers capable of changing people's perceptions about testing, and thereby improving the status of the testing profession, take the approach that they will move on as soon as they are presented with a request that offends their testing sensibilities, then that is an opportunity lost.



  1. Great post, Adam. In addition I think testers need to consider their own position in the team when they are thinking about 'bad testing': if you are an independent consultant then your response will likely be very different from that of a permanent test manager. I would hope that anyone leading a team would stick around and fight for the cause. If however you are being asked to do bad testing by your own test manager then things might be different.

    The only time I have actually had to leave a job because of bad testing was on a project where absolutely no one cared about the outcome of the testing, I was simply required to go through the motions so that a box could be ticked. After catching myself cutting corners I decided it was in all of our interests that I move on.

    1. Thanks for the great comment. I can understand your wanting to move on in the situation where there was absolutely no interest in the results. If no decisions are being made on the basis of the information provided then the incentive to test well is lost.

      Thanks for taking the time to read and comment,


  2. Hi Adam

    I enjoyed reading this, a well considered, well written post. In particular this:

    One of the most difficult skills I've found to learn as a tester is the ability to justify your approach and your reasons for taking it, and being able to argue your case to someone else who has a misguided perspective on what testing does or should involve. Having these discussions, and changing people's minds, is a big part of what good testing is.

    I think this is important for both employees and consultants. Rather than looking at 'am I doing bad work?' I ask myself 'can I still add value?' If the answer is no, then it's time to leave. Again, highly subjective, but the language is slightly different.

    1. Anne Marie,

      I'm glad you picked out that paragraph as that is a key sentiment for me in testing. Your principle

      "Rather than looking at 'am I doing bad work?' I ask myself 'can I still add value?'"

      Is an excellent, concise statement which summarises both the main theme of this post but also an excellent work ethic for testers in general.

      Thanks for taking the time to provide this insightful comment.


  3. On the 'practical' side of things, I think we could do with more as a community, not just at TestBash. We love to talk and debate, but sometimes this doesn't help us progress as fast as we would like. I think sometimes seeing how people are doing things practically helps people understand how they can move forward.

    I've been trying to work towards this with the 99 Things ebook and testing checklists we did a while back. Simple ideas and tips to help testers move forward. The heuristics that other testers have published across the web are another good example.

    I do plan to encourage more practical talks at TestBash. The balance is hard, and perhaps a lot of the practical and hands-on stuff happened at the pre-conference training.

    I wish I had more women speakers. I did try to invite some women, but for various reasons it didn't work out. Perhaps I should have tried harder to encourage more women to submit. There were some, but I felt strongly that I shouldn't 'positively discriminate' by choosing a speaker just because she was a woman over a potentially better talk by a man.

    With respect to your 'can I add value?' - I personally like to use 'how can I add value?' - this, for me, helps me open up my mind to many possibilities rather than confronting myself with a yes or no answer.

    1. Rosie,
      Thanks for taking the time to reply. I don’t for a second imagine that you didn’t try hard to get female speakers!
      On paper the range of talk subjects was diverse (I wouldn’t have paid to attend if it wasn’t). As I said it was more of a ‘mild feeling’, born of the common themes running through the talks, than anything fundamentally wrong with the programme. I didn’t attend the other events around the day that would possibly have provided a more practical focus.
      I know that in terms of the MOT/STC offering there is a wealth of practical based offerings. My perspective here is one of someone attending the conference as a standalone event. I know that many others will have done the same and when writing this post I was thinking of those individuals, particularly those like Chris (who commented here) for whom it was a new experience from which he was looking to gain direction and inspiration.
      'how can I add value?’ – I totally agree. When hiring I’m looking for people who look to step outside of their remit and see where they can add value to help the team and company as a whole.


  4. I appreciate this post. The points you raise need to be emphasized more in our rhetoric.

    1. James

      Thanks for taking the time to read and comment. I reference your work extensively in the testing work within my organisation and your teaching was clearly hugely influential for many who attended and spoke at the event. It is rewarding to think that by putting my thoughts down here I might in some way influence the content of that teaching in the future.


  5. An excellent post Adam. Your thoughts on TestBash resonate with what I wrote in my response to Steve Janaway's blog post on increasing community engagement. TestBash was my first testing event, and I went wondering what a conference of testers would sound like. I too noticed that at times it had the air of Sunday morning at the gospel hall; the creed was read out and we had stories of those who were saved. I sensed that the ISTQB moans didn't so much come from a speaker's personal story of experience as from a feeling of "insert obligatory remarks about ISTQB, as this is what I am expected to say to my peers of the context driven school". I want to be careful not to overemphasise this: I returned home with a very positive impression.

    Regarding your primary topic: I work in defence, which is a domain that is traditionally process and standards driven. The freedom with which you can work can be limited by the compliance needs of prime contractors, auditors etc. The need for test cases and formality that do not add much value could be viewed by some as "bad work". I'd say you can chip away at this: for example, use exploratory testing early in the project, report regularly on risk to management, write your test cases in a style that facilitates learning, etc.

  6. Another important thing to add is that Ministry of Testing / TestBash is not a context driven company/community. We tend to attract a lot of context driven type people, but I'm very conscious of not being linked solely with one way of thinking/being.

    I think it's extremely important that we think more practically about lots of different areas of testing. What specific things can we be doing that are practical, useful and can be applied in the (real) testing world? What challenges are we facing? How can we communicate these challenges better?

    I was under the impression that lots of people got value from the LeanCoffee; perhaps there is room to do more of that kind of stuff? I'm keen and inspired to do more and different for next year.


    1. “Ministry of Testing / TestBash is not a context driven company/community” – I’m aware that is the case. As someone who tries to maintain independence from any one group I felt that the factors I’ve mentioned led to a stronger sense of CDT influence on the day than previous years. I think that the other commenter Chris’s point is appropriate – “I want to be careful not to overemphasise this, I returned home with a very positive impression.” and the event overall was a big boost for the testing community.

      I really like your second paragraph – I too think LeanCoffee is a great way to share ideas and thoughts, to the extent that I’ve started running these at lunchtimes at work. I didn’t make it to the TestBash one so maybe I am talking from a position of not looking at the event as a whole (needless to say I didn’t do the run either!). I got a lot out of the LeanCoffees at Agile Testing Days, as I wrote about in my post on Integrity and Pragmatism.
      Rest assured the overall impression I got from people who attended the event was that it was a great success.


  7. Hi Adam, great post! I wrote a reply on my blog: Refusing to do bad work…



Thanks for taking the time to read this post, I appreciate any comments that you may have:-