Tuesday 16 October 2012

Moving Backwards by Standing Still - How Inactivity Causes Regressions



In the last couple of weeks I've encountered an interesting situation with one of our customers. They'd raised a couple of issues through the support desk with specific areas of our system, yet the behaviour in question had been consistent for some time. We were well aware of the behaviour and had not previously considered that it needed addressing; in fact, it had not even been considered a bug. So what had changed?

The customer, through their ongoing use of the system, had grown more confident operating at their existing scale and had started to tackle larger and larger installations. Working on a Big Data system we are naturally well accustomed to massive scale, however in this case the size was in a different direction to our more common implementations. What the customer viewed as 'bugs' had not arisen through programming error, and had not been 'missed' in the original testing of the areas in question. They had arisen through changes over time in the customer's perception of acceptable behaviour as their business needs evolved. The customer had moved in a new direction, and behaviour that had previously been acceptable to them was now considered an issue.

A Static Measure of Quality


The common approach to regression testing is to identify a set of tests whose output can be checked to provide confidence that the behaviour has not changed since the initial testing of that area was performed. In my post Automated Tests - you on a lot of good days I identified the benefits of such an approach: testers have a greater awareness of the area under test at the point of designing these checks than they would have when performing manual regression testing at a later date. On the flip side, however, automated tests represent a static 'snapshot' in time of acceptable behaviour, whereas customers' expectations will inevitably change over time. Automated regression tests will not, in themselves, evolve in response to changes in the customers' demands of the product. The result is that regressions can occur without any change in the functionality in question, and with all of the existing regression tests passing, through ongoing changes elsewhere which have a consequential negative impact on the perception of that functionality.
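To make that 'snapshot' concrete, here is a minimal sketch in Python of the kind of check I mean. The bulk_import function is a hypothetical stand-in for the system under test, and the one-million-row target and sixty-second threshold are invented for illustration; the point is that the definition of 'acceptable' is frozen at the moment the check is written.

    import time
    import unittest

    # Hypothetical stand-in for the product's bulk import routine; in a
    # real suite this would exercise the system under test.
    def bulk_import(rows):
        return sum(1 for _ in rows)

    class TestImportAtOriginalScale(unittest.TestCase):
        def test_one_million_rows_within_a_minute(self):
            # One million rows was the customer's scale when this check
            # was designed. The check is a static snapshot: it will keep
            # passing happily even after the customer quietly moves on to
            # a hundred million rows and a much tighter time budget.
            rows = (("row", i) for i in range(1_000_000))
            start = time.monotonic()
            self.assertEqual(bulk_import(rows), 1_000_000)
            self.assertLess(time.monotonic() - start, 60.0)

    if __name__ == "__main__":
        unittest.main()

Nothing in this check can ever fail because the customer's expectations have moved on; it can only fail if the software itself changes.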

Changing Expectations


I've personally encountered a number of possible drivers for changing customer expectations; I'm sure there are many others:

  • The raving fan
  • This is the scenario that we encountered in my example above. The product had delivered on and exceeded the customer's expectations in terms of the scale of implementations that they were tackling. This gave them the confidence to tackle larger and larger installations without feeling the need to raise change requests with our product management team to test or advance the functionality to meet their new needs; they just expected the product to scale. In some ways we were the architects of our own downfall in this regard, both positively by performing so well relative to the previous targets, but also by not putting an upper limit on the parameters in question. Putting limits in place can tactically be a great idea, even if the software potentially scales beyond them. It will provide confidence in what has been tested and at least provide some valuable information on the scale of use when customers request that the limit be extended (see the sketch after this list).

  • The new shiny feature
  • This one is again a problem that a company can bring upon itself. One of our implementation team recently raised an issue with the supported character set for delimiting export data on the client side; he thought that there had been a regression. In fact the client side export had not changed. We had more recently extended both the import and a parallel server side export feature to support an extended delimiter set, thereby changing the expectations of the client side feature, which up until that point had been perfectly acceptable. Testers need to be on top of these types of inconsistency creeping in. If new features advance behaviour to the extent that other areas of the system are inconsistent, or simply look a little tired in comparison, then these issues need raising.

  • The moving market
  • Markets don't stay still, and what was market leading behaviour can soon be overtaken by competitors. In the wake of the Olympics some excellent infographics appeared comparing race winning times, such as the graph here. Notably, in the 2012 Olympics Justin Gatlin posted a time of 9.79 seconds - faster than the 9.85 that won him the title in 2004, but only enough for bronze this time. The competition had moved on. In the most obvious software cases advances can come in the form of improved performance and features, however factors such as platform support, integration with other tools and technologies, and price can also be important. The failure of Friends Reunited stands out as an extreme example. Why would users pay for your service if a competitor is offering a more advanced feature set for free?
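Returning to the raving fan scenario, the sketch below shows the kind of explicit limit I mean. The names and the ten-million-row figure are hypothetical; the idea is that exceeding the tested envelope produces an informative error rather than silently entering untested territory, and every request to raise the limit tells you how your customers' scale is changing.

    # Hypothetical guard on import size; the limit mirrors the largest
    # installation that has actually been tested.
    TESTED_MAX_ROWS = 10_000_000

    class UntestedScaleError(Exception):
        pass

    def validate_import_size(row_count):
        # Reject requests beyond the tested envelope with a clear message,
        # so that customers approaching the limit surface through support
        # rather than through subtle misbehaviour at untested scale.
        if row_count > TESTED_MAX_ROWS:
            raise UntestedScaleError(
                "Import of %d rows exceeds the tested limit of %d; "
                "contact support to have the limit reviewed."
                % (row_count, TESTED_MAX_ROWS)
            )

    validate_import_size(5_000_000)    # within the tested envelope: fine
    # validate_import_size(50_000_000) # raises UntestedScaleError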


Keeping Up


Now, this is a testing blog, and I'm not saying that it is necessarily the tester's sole responsibility to keep up with changing markets and drive the direction of the product. One would hope that the Product Owners have an overall strategy which takes market changes into consideration and drives new targets and features in response; certainly responsibility for the decline of Friends Reunited cannot be laid at the feet of the testers. What I am saying is that we need to maintain awareness of the phenomenon and try to adapt our testing approach accordingly. Some such regressions may be subtle enough to slip through the net of requirements gathering. It may be, as in our case, that the customer up until that point has not felt the need to highlight their changing use cases. It could be the incremental introduction of inconsistencies across the product. It is our responsibility to check for bugs and regressions in our software and understand that these can arise through changes that are undetectable by automated or scripted regression tests, because they occur outside the feature under test. Along with a solid set of automated regression tests there needs to exist an expert human understanding of the software behaviour and a critical eye on developments, both internal and external, that might affect the perception of it.

  • As new features are implemented, as well as questioning those features themselves, question whether their implementation could have a negative relative impact elsewhere in the system (the sketch after this list shows one way to check for this)
  • Clarify the limits of what you are testing so that you can raise awareness when customers are approaching these limits
  • Talk to your support desk and outbound teams to see if customers' demands are changing over time and whether your tests or product need to change to reflect this
  • Monitor blogs and news feeds in your market to see what new products and features are coming up and how this could reflect on your product
  • Try to use relevant and current oracles in your testing. This is not always easy, and I'm as guilty as anyone of using old versions of software rather than going through the pain of upgrading; however your market is constantly changing, and what constitutes a relevant oracle will also need to change over time as new products and versions appear.
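As an illustration of the first point in the list, here is a minimal sketch of a cross-feature consistency check, modelled loosely on the delimiter example earlier. The delimiter sets here are invented, and in a real suite they would be queried from the product rather than hard-coded. As written, the check fails: the client side export lags behind the import and server side export, which is exactly the relative regression described above.

    import unittest

    # Hypothetical delimiter sets advertised by each feature.
    CLIENT_EXPORT_DELIMITERS = {",", "\t", "|"}
    SERVER_EXPORT_DELIMITERS = {",", "\t", "|", ";", "#"}
    IMPORT_DELIMITERS = {",", "\t", "|", ";", "#"}

    class TestDelimiterConsistency(unittest.TestCase):
        def test_features_support_matching_delimiters(self):
            # A delimiter added to one path but not the others is not a
            # code regression, but it is a relative regression in the
            # eyes of anyone relying on the lagging feature.
            self.assertEqual(CLIENT_EXPORT_DELIMITERS, SERVER_EXPORT_DELIMITERS)
            self.assertEqual(SERVER_EXPORT_DELIMITERS, IMPORT_DELIMITERS)

    if __name__ == "__main__":
        unittest.main()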

As I discussed in this post, bugs are a personal thing. It stands to reason that, when quality is subjective, regressions are relative, and require no change in the behaviour of the software functionality in question in order to occur. When interviewing candidates for testing roles I often ask the question "What differentiates good employees from great employees?". When it comes to testing, maintaining an understanding of this phenomenon, and having the awareness to look outside of a fixed set of pre-defined tests to ensure that the product is not suffering regressions, would for me be one thing that marks out the latter from the former.

Image: ray wewerka http://www.flickr.com/photos/picfix/4409257668

Phil said...

Really good post illustrated with great examples, nice work

Adam Knight said...

Thanks Phil - knowing what a great critical eye you have, any feedback from you is much appreciated.

Adam.
