Monday, 28 February 2011

Beyond Criteria

One of the commonly identified responsibilities for the tester working as part of an Agile development team is to identify the acceptance criteria for the story/stories they are working on. Here I discuss some issues that we have had in focussing the testing effort on acceptance criteria and an approach that we have adopted in my organisation to tackle some of these.

It took me a while to instigate the agreement of acceptance criteria as a fundamental part of our sprint operations, but for the last couple of years it has been the case that we agree and publish acceptance criteria for each story being worked on. Prior to this we had struggled through the use of mini requirements specifications that were fed into the team at the start of every iteration, with all of the negative implications for the testing effort that come with parallelising the testing and development efforts based on a written specification of behaviour.

The acceptance criteria became an essential aspect of our testing operation. The ongoing discussion and verification of these criteria helped to ensure that the testing effort was in line with the development while staying focussed on testing the target customer value. As a story approached completion the tester would identify whether each criterion had been met and whether there were any limitations on the manner in which it had been achieved. Limitations usually revolved around the scope of operation of the functionality in terms of scalability/performance, or a limit on the level of testing that had been achieved in the time available.

This approach was OK, but through various discussions and retrospectives we identified a common feeling among the team that reporting story status against criteria was too restrictive. Trying to represent the result of a complex piece of work through a series of done/not done statuses seemed insufficient to convey to product management the true status of the story. The acceptance criteria were intended as a basis for discussion on what had been delivered, but instead we found ourselves working to raise awareness of any issues encountered from behind a mask of 'Done' statuses.

Through a targeted team workshop we tackled the limitations of acceptance criteria and what we could do to improve the process of gathering them, both for feeding into the development process and for reporting feature status at the end of the iteration. We identified two significant areas that we wanted to change:-

Criteria, Assumptions and Risks


Instead of focussing the elaboration process for each feature solely on identifying acceptance criteria, the tester is now responsible for identifying three different aspects of the development story:-

  • Acceptance Criteria
  • Still an important aspect of the feature. Each story worked on will have a distinct set of criteria which we aim to meet through the implementation of that feature. Where possible the criteria focus on value delivered through externally visible behaviour rather than internal structures and implementation details.
  • Assumptions
  • Sometimes it is necessary to make assumptions in order to get started on a feature; however, hidden assumptions can be feature killers (I discuss this further here). If we are making any assumptions in order to make a start on the development of any feature then we identify these and publish them to the product management team along with the criteria. The tester's early work on the feature should involve the confirmation, modification or negation of these with the appropriate stakeholder representatives. If we reach the point at which development work is progressing and affected criteria are being tested based on an unconfirmed assumption, then the confirmation of that assumption becomes a top priority.
  • Risks
  • One of the key elements that we identify, both during elaboration and throughout the development of a story, is the set of risks involved in the implementation of that story. The risks are identified through whole-team discussions and we work to mitigate them through the course of development. This may be through focussed testing, additional development work or simply highlighting further research/testing activities which need to be prioritised to gain further knowledge on the likelihood of that risk manifesting itself as a problem for the customer. Identified risks often centre on the complexity of development, the likelihood of collateral issues, the time required for thorough testing or any other factor that we feel imposes a risk on the successful delivery or successful operation of that feature.

Done is a sliding scale of Confidence


Instead of a binary done/not done measure of completion status, we've opted to report a measure of confidence against every one of the above items. Initially we've opted for a simple high/medium/low/none scale. For each type of item the confidence measure reflects something different in the context of the item in question:-

  • Assumptions - Confidence that the assumption has been confirmed. In essence, have we removed this as an assumption and confirmed it as a constraint on the scope of the work being delivered.
  • Risks - Confidence in our steps taken to mitigate the risk. If an identified risk is not going to be mitigated to a level that we are happy with through the sprint then, as early as possible, we report low confidence in the risk and discuss potential actions that may be taken to address this in future iterations.
  • Criteria - Confidence that the criterion has been met. This can be affected by the level of testing that we have been able to achieve against that story, the complexity of the item and the number of problems that we have encountered through the testing of the feature.
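
To make the idea concrete, here is a minimal sketch of how a story report along these lines might be captured. We don't use code for this ourselves; the structure, names and example data below are purely illustrative, showing criteria, assumptions and risks each carrying one of the four confidence levels.

# Illustrative only: a simple model of a story report where each criterion,
# assumption and risk carries a high/medium/low/none confidence rating.
from dataclasses import dataclass, field
from enum import Enum


class Confidence(Enum):
    NONE = 0
    LOW = 1
    MEDIUM = 2
    HIGH = 3


@dataclass
class ReportItem:
    kind: str            # "criterion", "assumption" or "risk"
    description: str
    confidence: Confidence
    notes: str = ""      # limitations, mitigation steps, open questions


@dataclass
class StoryReport:
    story: str
    items: list[ReportItem] = field(default_factory=list)

    def summary(self) -> str:
        lines = [f"Story: {self.story}"]
        for item in self.items:
            note = f" ({item.notes})" if item.notes else ""
            lines.append(f"  [{item.kind}] {item.description}: "
                         f"{item.confidence.name.lower()} confidence{note}")
        return "\n".join(lines)


# Hypothetical example data for a single story.
report = StoryReport("Import supplier CSV feeds", [
    ReportItem("criterion", "Rejected rows are logged with a reason",
               Confidence.HIGH),
    ReportItem("assumption", "Feeds will not exceed 10 million rows",
               Confidence.MEDIUM, "awaiting confirmation from product management"),
    ReportItem("risk", "Collateral impact on existing import paths",
               Confidence.LOW, "regression run not yet complete"),
])
print(report.summary())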

As with our original criteria, the elements that we identify here are intended as tools to aid our discussion of the development stories, rather than the sole means of communication. Having a slightly richer, confidence-based approach, however, does provide us with more flexibility in reporting the true status of the items that we are testing. It also gives the Product Management team a clearer high-level indication of the status of the items being delivered, and of the levels of confidence that we have in them, than discussions focussed around criteria alone.

In addition to the benefits as the sprint progresses, this approach has also yielded hidden benefits in that our elaboration discussions expose far more relevant information from the entire team, helping the tester to identify areas to focus on, which in turn improves the testing being performed.

The thing I like most about this approach is that the confidence measure is a subjective one that is under the control of the tester. We are not counting test passes/fails or counting bugs or relying on any other arbitrary measure of success, with the potential for gaming and false targets that is inherent in such practices. Instead we are utilising the judgement and expertise of the tester to summarise the confidence that we have in each item delivered, which I believe yields far more relevant information and is something that I place significantly more value on.

Copyright (c) Adam Knight 2009-2010

Monday, 14 February 2011

Testing Positive

I can remember when I took my first permanent Testing job telling my Uncle what I was going to be doing.

"Testing" he said, "Ah, that can be pretty soul destroying, it is a thankless task."

Now this, coming from a man who was a teacher for years and complained bitterly about the hardships of teaching for most of them, worried me. I was concerned that working as a tester would cause me to degenerate into a bitter, frustrated individual who criticised the work of others for a living, creating nothing of my own.

I have encountered organisations and individuals for whom this is not so far from the truth, but it certainly does not need to be the case. My personal feeling is that the Testing role is a very positive and creative one. Here I discuss some of the key elements that I think can help testers to flourish as constructive influences on the creative process that is software development.

Involvement from the start


A lot has been written about the benefits of having testers involved in the development process right from the elaboration of requirements. In addition to the tangible benefits of identifying potential issues early in the feature lifecycle, I believe that there are also significant psychological benefits for the tester. Working on a project from inception creates a sense of pride and ownership of the deliverables of that project. Delaying the involvement of the tester risks alienating them from the development and can push them into the more adversarial role of external quality gatekeeper.

Involvement during development


By being involved throughout the development of new functionality, testers feel more integral to the creative process of software development. They can use their critical analysis skills to help drive the project forward, identifying potential problem areas and allowing resolutions to be implemented as an integral part of the development team. I feel that this level of involvement leaves testers less inclined to take the stance of detached criticism that can occur when testing is a more isolated discipline.

Significant negative feelings can be generated when the testing activity is concentrated after development is perceived to have finished, and the development team is trying to move on to other items. I think the two most significant causes of negative feeling in this situation are:-

  • The feeling of being left behind
  • Development is moving on with new things and the tester is left working on the 'old' project. A lot of resentment can result from this situation, particularly if the tester perceives the quality to be poor.
  • A reluctance to disturb others
  • If the developers who worked on an item have moved on by the time the testing is being performed, then the tester can develop feelings of guilt about disturbing the progress of others to look at issues in work that the developer hoped to have completed.

Ironically the worst case scenario for these last issues may not be Waterfall, where at least we have planned testing phases with expected rework, but rather iterative development processes where the testing is staggered from development, e.g. in the following iteration. This gives us the worst of both worlds, as we have neither the immediate input into the development process of truly collaborative agile, nor the contingency planning and development resource devoted to a traditional waterfall testing phase.

Specification by example


The process of automating examples of customer use of the system can help in more ways than the obvious demonstration and regression testing benefits. Again there are intangible psychological advantages to having the tester help to drive the development through the implementation of key examples. I personally gain a huge sense of satisfaction from seeing realistic automated examples working on new functionality that I have helped to create. Testers with a project deliverable of demonstrable working examples of the functionality have more of a personal investment in, and therefore sense of responsibility towards, the success of the development as a whole.
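
As a flavour of what this can look like, here is a minimal, self-contained sketch of a customer scenario written as an executable check. The feature, the load_feed function and the example data are all hypothetical stand-ins rather than anything from our product; the point is simply that the customer's example itself becomes the test.

# Hypothetical "specification by example": a customer scenario expressed as a
# runnable test. load_feed is a stand-in for the real system under test.

def load_feed(rows):
    """Stand-in implementation: load rows, reporting duplicate SKUs rather
    than silently dropping them."""
    seen, loaded, duplicates = set(), [], []
    for row in rows:
        if row["sku"] in seen:
            duplicates.append(row["sku"])
        else:
            seen.add(row["sku"])
            loaded.append(row)
    return {"loaded": loaded, "duplicates": duplicates}


def test_duplicate_rows_are_reported_not_silently_dropped():
    """Customer example: a feed containing a duplicate product row should load
    the first occurrence and report the duplicate back to the user."""
    feed = [
        {"sku": "A100", "price": "9.99"},
        {"sku": "A100", "price": "9.99"},  # duplicate row
        {"sku": "B200", "price": "4.50"},
    ]
    result = load_feed(feed)
    assert [row["sku"] for row in result["loaded"]] == ["A100", "B200"]
    assert result["duplicates"] == ["A100"]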

Exploratory testing

Much has been written about the effectiveness of exploratory testing as compared to manual scripted testing. Due consideration should also be given to the benefits for job satisfaction for the testers involved. Executing manual scripted tests is dull, unchallenging and, if we are honest, unlikely to be performed with a high level of diligence. The benefits to the tester of exploratory testing in terms of their personal sense of creativity and the sense of developing their craft should not be underestimated.


Here I've highlighted just a few techniques and principles that help towards a healthy testing culture and hopefully a more positive outlook for the testers involved. Rather than adopting a stance of criticising developments and raising as many issues as possible, the tester can work to improve the product as it is developed. In this way we become agents for positive change in the organisation, and improve our own job satisfaction in the process.

Copyright (c) Adam Knight 2011

Wednesday, 2 February 2011

Letting yourself go

This week I dealt with a support issue from a customer running a live implementation. The problem was that they were seeing a sporadic change in the behaviour of the system when querying certain international characters. On examination of the problem I realised that it was an occurrence of a bug that I had identified previously, during the release of a custom option we had created for the customer to support their data validation utility.

My notes on the bug detailed my findings and an assessment of the scope of the problem. Based on the testing that I had performed, the problem appeared to be limited to a very specific scenario relating to the use of the new option. At the time I had added this limitation to the release note and made the customer aware. They accepted this as a limitation and I felt that this was sufficient to allow us to release with the issue in place.

On discussing the recent occurrence of the bug with the customer it became apparent that they had not used the custom option in the implementation in question and were still hitting the problem. I carried out some more extensive tests around the area and recreated the problem in a scenario where the option had not been used. It occurred less frequently than if the option had been applied, but was definitely present.

Looking back at the time when we released the option, and the work climate at the time, I can understand why this situation happened, and I think that it provides some valuable lessons:-

Testing doesn't give absolute information on scope of issues


Just because a problem can be recreated consistently under certain circumstances and not under others does not mean that the issue is limited to those circumstances. In this case the issue was recreatable quickly once the custom option had been used, but it still existed, albeit less frequently, on sessions where this option had not been applied. It was the same issue. Granted this is an unusual situation but I should still have assessed the risk of the problem on the basis that it could occur anywhere, not just in the scenario in which I had managed to recreate it.

Don't submit to confirmation bias because you are under pressure


In this case the customer was putting pressure on us to deliver a solution. I had identified the problem in question; however, I too quickly submitted to the idea that the use of the option caused the problem, and my subsequent tests were biased towards confirming this rather than disproving it. I felt that documenting the limitations on using the option was sufficient. In hindsight I doubt that I would have come to this conclusion had the bug been found testing a major release rather than a custom patch which we were under pressure to deliver.

It is never possible to identify all of the bugs present in a software release. It is rarely possible to fix all of the issues that we do identify. Our ability to assess the risks posed by known issues in the system and prioritise fixes is limited by the information that we have on the nature and scope of those issues. Whatever the circumstances of releasing a piece of software, the standards of information we gather and assessments we perform should be consistently high. No matter how much pressure is on, the customer will be thankful for it in the long run.

Copyright (c) Adam Knight 2011
