Thursday 24 November 2011

The Thread of an Idea - Adopting a Thread-Based Approach to Exploratory Testing

In all of our testing activities my approach is very much to treat our current practices as a point in a process of evolution. Here I write about an excellent example of how we evolved our testing approach over time, moving away from the idea of test cases to a more flexible thread-based strategy that better suited the nature of our testing and our need to parallelise our testing activities.

Not the tool for the job

When I started in my current role the team was attempting to manage their testing in an open source test case management tool. The process around tool use was poorly defined and rooted very much in the principles of a staged test approach: planning test cases up front and then executing them repeatedly across subsequent builds of the software. This was a hugely unwieldy approach given that we were attempting to work with continuous integration builds. The tool was clearly not suitable for the way that we were operating, with some of the team managing their work outside the tool and bulk-updating it at a later point.

Needless to say as soon as I was able I moved to replace this system.

Deferring Commitment

I had enjoyed some success in previous teams with a lightweight spreadsheet solution backed by a database. I thought that this would be an excellent replacement, so I worked to introduce a variation on it to the team. In an attempt to address the issue of not having a fixed set of test cases to execute, David Evans made the good suggestion that using an estimate of test cases might alleviate the immediate problem by allowing us to track against estimated work. This allowed us to defer commitment on the actual tests performed until we were actually working on the feature, with the estimates naturally converging with the actual tests executed as the feature progressed. This did provide a significant improvement; however, I still found that tracking these estimates and counts was not giving me any useful information about the status and progress of testing during the sprint.
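
As a rough sketch of that deferred-commitment tracking - the function and numbers here are my own illustration rather than our actual tracking sheet - the planned total for a feature starts out as a pure estimate and converges on the count of tests actually defined and executed:

    # Hypothetical sketch of estimate-based tracking: the planned total for a
    # feature starts as an estimate and converges on the actual test counts.

    def remaining_tests(estimated_total: int, defined: int, executed: int) -> int:
        """Remaining work for tracking purposes: burn down against the
        estimate until the tests actually defined overtake it."""
        planned = max(estimated_total, defined)
        return planned - executed

    # Early in the feature: nothing defined yet, track against the estimate.
    assert remaining_tests(estimated_total=20, defined=0, executed=0) == 20
    # Later: 25 tests defined, 18 executed - actuals have replaced the estimate.
    assert remaining_tests(estimated_total=20, defined=25, executed=18) == 7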

You're off the Case

The conclusion that we were rapidly arriving at was that counting test cases provided us with little or no valuable information. Many of our tests resulted in the generation of graphs and performance charts for review. The focus of our testing revolved around providing sufficient information on the system to the product owners to allow them to make decisions on priorities. Often these decisions involved some careful trade-off between performance on small data and scalability over larger data; there was no concept of a pass or fail with these activities. Counts of test cases held little value in this regard, as test cases as artifacts convey no useful information to help understand the behaviour of the application under test or to visualise relative performance differences between alternative solutions. It was more important to us to convey information on the impact of the changes introduced than to support meaningless measurements of progress through the aggregation of abstract entities.

In an attempt to find a more appropriate method of managing our testing activities, we trialled the use of Session Based Exploratory Testing, an approach arising from the Context Driven School of Software Testing and championed in particular by James Bach and Michael Bolton. What we found was that, for our context, this approach also posed some challenges. Many of our testing investigations involved setting up and executing long-running batch import or query processes and then gathering and analysing the resulting data at the end. This meant that each testing activity could have long periods of 'down time' from the tester's perspective, which did not fit well with the idea of intensive time-boxed testing sessions. Our testers naturally parallelised their testing activity around these down times to retest bugs and work on tasks such as automation maintenance.

The thread of an idea

Whilst not wanting to give up on the idea of the Exploratory Testing charter-driven approach, it was clear that we needed to tailor it somewhat to suit our operating context and the software under test. My thinking was that the approach could work for us, but we needed to support the testers working on parallel investigation streams at the same time. Wondering if anyone else had hit upon similar ideas of a "Thread-Based" approach, I did some research and found this page on James Bach's blog.

I am a great believer in tailoring your management to suit the way that your team want to work rather than the other way around, and this approach seemed to fit perfectly. Over subsequent weeks we altered our test documentation to work not in terms of cases but in terms of testing threads. As discussed here, we aim to identify Acceptance Criteria, Assumptions and Risks during story elaboration, which help to determine the scope of the testing activity. In a workbook for each user story we document these and then publish and agree them with the product management team. (BTW - the Apache add-in for svn running alongside a wiki is an excellent way of sharing documents stored in Subversion.)

The initial thread charters are then drawn up with the aim of providing understanding of, and confidence in, the acceptance criteria and risks identified.

Initially each testing thread has:

  • A charter describing scope - we use Elisabeth Hendrickson's excellent guide on writing charters to help us
  • An estimate of time required to complete the charter to the level of confidence agreed for acceptance of the story. This estimate covers any data generation and setup required to prepare, and also the generation of automated regression tests to cover that charter on an ongoing basis, if this is appropriate.

We still use Excel as the primary store for the testing charters. This allows me to back it onto a MySQL database for storing charters and creating dashboards which can be used to review work in progress.
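
As an illustration of the kind of store and rollup involved - the table, column names and figures below are invented for the example, and SQLite stands in for MySQL so that the sketch is self-contained:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("""
        CREATE TABLE thread_charters (
            story          TEXT,  -- user story the thread belongs to
            charter        TEXT,  -- scope of the exploratory thread
            estimate_hours REAL,  -- effort to reach agreed confidence
            spent_hours    REAL,  -- effort logged so far
            status         TEXT   -- e.g. 'open', 'in progress', 'done'
        )""")
    conn.executemany(
        "INSERT INTO thread_charters VALUES (?, ?, ?, ?, ?)",
        [("STORY-42", "Explore import throughput on large data", 8, 5, "in progress"),
         ("STORY-42", "Probe query planner with skewed data", 6, 6, "done"),
         ("STORY-43", "Verify bulk load recovery after failure", 4, 0, "open")])

    # Dashboard-style rollup: remaining chartered effort per story.
    for story, remaining in conn.execute("""
            SELECT story, SUM(MAX(estimate_hours - spent_hours, 0))
            FROM thread_charters
            GROUP BY story"""):
        print(f"{story}: {remaining} hours of chartered testing remaining")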



As the investigation on the threads progresses, information is generated to fill out the testing story:-

Given that we were now tracking threads rather than cases, we had greater flexibility in the format of our testing documentation - details of the testing activities performed are tracked in Mind Maps or Excel sheets.
  • Mind Maps are generated using XMind and hyperlinked from the Thread entry in the main sheet. For details on using Mind Maps to track testing activities I highly recommend Darren McMillan's blog. There is little more that I can add to his excellent work.
  • In terms of the Excel based exploration, testers document their decisions and activities in additional entries under the charter in the main testing sheet, using a simple macro input dialog for fast input of new testing activity (a stand-in sketch of the idea follows this list).
  • More detailed results, data tables and graphs can then be included in additional worksheets.
  • Details of any automated test deliverables generated as a result of that charter are also reported at the thread level.
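
Our actual fast-input mechanism is an Excel macro dialog; as a hypothetical stand-in (the function and names are invented), the same idea of low-friction capture against a charter might look something like this:

    import datetime

    def log_activity(logfile: str, charter_id: str, note: str) -> None:
        """Append one timestamped line of testing activity under a charter."""
        stamp = datetime.datetime.now().isoformat(timespec="minutes")
        with open(logfile, "a", encoding="utf-8") as f:
            f.write(f"{stamp}\t{charter_id}\t{note}\n")

    # e.g. noted while a long import run ticks over in the background:
    log_activity("threads.log", "STORY-42/import-throughput",
                 "Import of 500M rows started; CPU flat, suspect IO bound")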

Reeling in the Benefits

From a Test Management perspective I have found significant benefits in working this way:
  • Variety of testing output is supported and encouraged - rather than gathering useful information and then stripping it down to standardise the results into a test case management structure, we can provide the outputs from the testing in the most appropriate form for understanding the behaviour and feeding the relevant decisions
  • Managing at charter level allows me to understand what the testers are working on without micro-managing their every activity. We have regular charter reviews where we discuss as a team the approach to each feature and pool ideas on other areas that need considering
  • An estimate of remaining effort is maintained - this can be fed directly into a burndown chart (a small sketch of this rollup follows the list). As an additional advantage, I've found that our estimates of outstanding work are much more realistic now that they correlate with specific testing charters than they were under my previous method of estimating based on the work remaining on the story as a whole
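
As a small sketch of that burndown rollup - days, charter names and hours are all invented for illustration:

    # Hypothetical daily snapshots: charter -> remaining estimated hours.
    daily_snapshots = {
        "Mon": {"import-throughput": 8, "query-skew": 6, "bulk-recovery": 4},
        "Tue": {"import-throughput": 5, "query-skew": 6, "bulk-recovery": 4},
        "Wed": {"import-throughput": 3, "query-skew": 2, "bulk-recovery": 4},
        "Thu": {"import-throughput": 1, "query-skew": 0, "bulk-recovery": 2},
    }

    # Roll per-charter estimates up into a simple text burndown.
    for day, charters in daily_snapshots.items():
        total = sum(charters.values())
        print(f"{day}: {total:2d}h remaining  {'#' * total}")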

In the few months that we've been working with this approach it has slotted in extremely well with our testing activities. As well as making it easier for me to visualise progress on the testing activities, it has freed up the team to record their investigations in the form that they feel is most appropriate for the task at hand. Reviewing charters as a team, driven by our shared test heuristics, has improved team communication and really helped us to share our testing knowledge.

As with all things, the most appropriate approach to any problem is subject to context. If your context involves similarly long setup and test run times, or sufficient distractions to prevent the use of a session-based approach, then this may be a viable alternative that allows you to move away from the restrictions of a test case based strategy.

Image: http://en.wikipedia.org/wiki/File:Spool_of_white_thread.jpg
halperinko said...

Using any free ALM tool, and documenting just to the level you prefer (no one forces you to write detailed steps, etc.), you could spend less effort on building your own MySQL server, queries and interconnections.
You gain full visibility as everything is on the net, open to all (easier to share than Excel, and trees are very similar to mind map abilities).
You can extract information in almost any format you want, and you ensure easier migration into future tools.
What do you gain by designing things from scratch?

Kobi (@halperinko)

Adam Knight said...

I would answer your question with a question - what would we gain from using a tool?

- being restricted to a view of the world dictated by a tool vendor
- Not being flexible in our reporting mechanisms e.g. mind maps, charts, tables, graphs, statistical analysis
- Having to work through a limited GUI interface

It may be that there are tools out there that would be more appropriate than where we started out; however, I've reviewed a number of tools in the interim and not seen anything that would provide the flexibility that we have in using a file based approach. The macros to interface with the database are very simple and allow control over the backend database in terms of what we store and report on and, as we control the database, we can report using any tool or interface that suits us. I'm not aware of any ALM tools that provide that level of flexibility.

Unknown said...

Hello Adam, is there any possibility I could adapt your Excel sheet system as a template?
Thanks,
Maryam
