Tuesday, 25 October 2011

A Problem Halved - using a context specific cheat sheet to share product test heuristics

A term that often comes up when highlighting the skilled nature of software testing, particularly exploratory testing, is testing heuristics. Heuristics are experience-based rules of thumb that we apply selectively to guide our approach to a problem, such as testing software features. Over the course of a tester's career (which, as I discuss here, can be viewed as its own minor evolution) they will evolve a set of personal heuristics. We can apply these selectively in the appropriate contexts to increase our chances of quickly exposing potential or actual problems with a software solution. The possession of an excellent set of heuristics can make a huge difference to the effectiveness of testing performed within a given timeframe.

Every solution is different

Over time spent testing a product, a tester will develop not only their general testing knowledge but also an excellent set of testing heuristics that relate solely to the context of that product. This knowledge is non-portable and valuable only within its present testing context; within that context, however, these heuristics can be the most valuable resources for targeted testing of the product in question. Some examples from my current context:
  • There is an internal boundary within our NUMERIC libraries between NUMERICs of precision 18 and 19 which adds extra boundary cases to any NUMERIC type testing
  • Some validation on data imports is client side and some is server side; testing of error handling and server logging needs to consider both types
  • The query system follows a different execution path when operating on few (<10) data partitions so testing needs to consider both paths
  • For our record level expiry, different paths are followed depending on whether some records in a partition, all records in a partition or all records in an import are expired
The knowledge of these facts adds a richness to any testing beyond that which might be obvious from the external, documented behaviour of the product.
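To illustrate, an internal-boundary heuristic like the NUMERIC precision 18/19 split above translates directly into test data. This is a hypothetical sketch, not our product's actual API; the function name and value selection are mine:

```python
def numeric_boundary_values(precision_boundary=18):
    """Generate test values straddling an internal precision boundary.

    For a library that switches representation between precision 18
    and 19, the interesting cases sit on either side of the largest
    18-digit value.
    """
    edge = 10 ** precision_boundary  # smallest 19-digit number
    return [
        edge - 1,     # largest value at the lower precision
        edge,         # first value at the higher precision
        -(edge - 1),  # negative counterparts of each
        -edge,
    ]

values = numeric_boundary_values()
```

The point is that an externally invisible boundary still yields cheap, targeted test cases once the heuristic is shared.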

(Shared) knowledge is power

For some, their heuristics may be closely guarded secrets, the knowledge that elevates them above the competition or renders them invaluable within the organisation. My opinion, however, is that our goal is to produce the best software that we can as a team, and the sharing of knowledge that has arisen through our combined experience is essential.

Having read Elisabeth Hendrickson, James Lyndsay and Dale Emery's excellent Test Heuristics Cheat Sheet, I found it a great method of communicating simple high-level heuristics. I felt that a context-specific heuristics cheat sheet would be an excellent tool for sharing our internal testing knowledge, and for use as a guide/checklist in exploratory testing of our product features. Using a similar layout to the testobsessed cheat sheet, I created a simple framework on our Wiki for our own context-specific version, for my team to use to document our internal heuristics.

We break down by product area and then individual functions or operations. These are then annotated with keyword points that might need to be considered when testing that aspect of the product. If a concept merits further explanation then we link to a separate "glossary" where terms can be expanded upon, to avoid cluttering up the sheet. This helps to keep the sheet a lightweight reference tool rather than anything more involved. The page is maintained by the team, with each member updating the list if they encounter any behaviour that might be useful to others testing that feature in the future. For example, an entry might look like:

Record Level Operations
  • Record Expiry - Expire Subset of Records from Tree ; Expire All in Tree ; Expire All in Import ; Record Level Delete ; Records On Legal Hold ; Purged Records
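An entry like the one above is really just a small hierarchy: product area, then operation, then keyword prompts. A hypothetical sketch of the same data in code form (the lookup helper is mine, not part of our Wiki):

```python
# The cheat sheet as structured data: area -> operation -> keyword prompts.
# Names are taken from the example entry above.
cheat_sheet = {
    "Record Level Operations": {
        "Record Expiry": [
            "Expire Subset of Records from Tree",
            "Expire All in Tree",
            "Expire All in Import",
            "Record Level Delete",
            "Records On Legal Hold",
            "Purged Records",
        ],
    },
}

def heuristics_for(area, operation):
    """Return the keyword prompts recorded for one product operation,
    or an empty list if nothing has been captured yet."""
    return cheat_sheet.get(area, {}).get(operation, [])
```

Keeping the structure this shallow mirrors the goal stated above: a lightweight reference, not a heavyweight document.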
We've been working with this tool for a few months now and have found it to be very useful. We reference the heuristics in exploratory testing, in elaboration meetings and in charter reviews of story coverage, both as a mental checklist and to prompt new ideas. If new problems are discussed in reviews then we suggest adding them to the sheet, so that it is maintained by the team and for the team. (We have not maintained the sheet for long enough yet to have to drop features or consider versioning, which may cause minor headaches.) Of course no checklist or cheat sheet should be considered exhaustive, and we have to apply our own intelligence to every testing challenge; however, as a simple tool for sharing testing experiences and driving new ones, I can recommend it.
Image: http://de.wikipedia.org/wiki/Benutzer:KMJ

Monday, 3 October 2011

"You were supposed to draw him standing up" - testing preconceptions in design

Last weekend, whilst I was supposedly looking after my two eldest children and actually sneakily checking my email, my attention was drawn by a tweet linking to a testing challenge on the Testing Planet website.

The focus of the challenge was a website called draw a stickman: a fun, interactive site that invites the visitor to draw a stickman and then proceeds to engage this man in a series of activities requiring further graphical contributions from the visitor to complete the story.

The title of the challenge, "Play with this, I mean test it casually", indicated to me that the focus should be on user interactions with the application rather than any deep exploration of the HTML content or structure. Some comments had already been posted around what happens if you draw just a line or a blob. While these are all valid issues, most of them were based on interacting outside the instructions of the application. It is true that many bugs arise when users operate in contravention of the documentation and instructions, but in my experience these can often be resolved by referring the user to the instruction that has been missed, and possibly amending the details to make them clearer.

Scope for interpretation

Instead I tried to focus on working within the instructions presented, while looking for scope to interpret them in a way that the designer never intended. It was pretty easy to find some excellent ambiguity just by focussing on the first instruction (and the name of the site): "Draw a Stick Man". For most people, including me, the first idea on being asked to draw a stick man is something like this:
However, there is a huge amount of scope in that instruction. My first attempt at pushing the scope was pure deviousness: I drew a stick man similar to the one above, but upside-down. He duly progressed to live out the remainder of the adventure, moving around the screen by means of his head pulsating and sliding him along like a snail's foot. Great fun, but probably not what we could call a bug.

Next I progressed from pure deviousness to a more serious deviation from the expected norm. I imagined a wheelchair user visiting the site and duly drew my stick man sitting in a wheelchair. Again the result was quite fun, the wheel of the wheelchair operating much as the head did in the upside-down experiment. The behaviour of the wheel, whilst understandable, might upset a more sensitive wheelchair user, so this might constitute a bug in that the software had not been designed with such users in mind.

My next attempt took the lack of specific detail around the position of the stick man a little further: I decided to draw him in a reclined position with his hands behind his head, a "lazy stick man".

The results of this were fantastic. Rather than moving down the screen to progress with the adventure, lazy stick man decided that opening boxes and fighting dragons wasn't for him and flolloped off the screen to the right, leaving a completely blank screen with no further instructions. It turns out that this was a lucky shot, as in my next ten attempts I only managed to get one man to take the same lazy path off down the pub.

Whilst hilarious fun, this also constituted a definite bug in the system: someone who came to the site and followed the instructions in good faith, but deviated slightly from the expected inputs, could be left stranded with no idea what was supposed to happen next.

I find that issues where the instructions are open to interpretation, but the application is limited to a single case based on the designer's preconceptions, can be hugely problematic. In such cases it is not possible to refer the user to any documentation or instructions highlighting their mistake, as in their mind they were operating within the instructions and have made no mistake. If not addressed carefully, there is even scope for insulting the users of the system if you mistakenly take the stance of suggesting that their ideas and notions are incorrect (had the wheelchair image decided to exit stage right, this could have been a more serious issue, risking insulting a potential user).

Question Yourself

Next time you are testing inputs into a system, question yourself and your own preconceptions. Are your default test usernames all based on names of western length and structure? Are you assuming that your users all have full ability to use keyboard, screen and mouse? I once worked on a system where a key user had a muscular disorder and could not use a mouse - it certainly opened my eyes to what keyboard accessible really means. (Darren McMillan recently wrote an excellent post on accessibility here.) As well as cultural assumptions, do your technological or organisational experiences lead you to interpret instructions in a way that may be hidden from, or contorted by, users with less experience of your context?
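The username question above can be made concrete with a deliberately assumption-breaking test pool. This is an illustrative sketch only; the names, thresholds and helper are mine, chosen to exercise common western-name assumptions:

```python
# A test-name pool that challenges "western length/structure" assumptions.
TEST_NAMES = [
    "Mary Smith",              # the typical default
    "李小龙",                   # CJK characters, no space-separated surname
    "José María Aznar López",  # accented characters, multiple surnames
    "Björk",                   # mononym with a non-ASCII character
    "A",                       # very short legal name
    "Wolfeschlegelsteinhausenbergerdorff",  # very long surname
]

def exercises_assumptions(name):
    """Flag names that break common assumptions: non-ASCII characters,
    no space-separated surname, or unusual length (thresholds here
    are arbitrary examples)."""
    return (not name.isascii()
            or " " not in name
            or len(name) < 2
            or len(name) > 30)

flagged = [n for n in TEST_NAMES if exercises_assumptions(n)]
```

If your flagged list is where the bugs hide, a pool made entirely of "Mary Smith" variants will never find them.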

Are you testing the actual operating instructions or following your own notions on how these instructions should be implemented? If it is the latter then having your stick man run away could be the least of your worries.