Tuesday 12 April 2011

A template for success - harnessing the power of patterns to document internal test structures

During a test retrospective last year, discussion among the team turned to the documentation of our automated tests and the visibility of the test pack structure. For a while we had been using a metadata structure to document the purpose of each test, along with each step in the test packs. This was proving very effective in documenting the functional test case or acceptance scenario being covered, helping us to understand why the tests existed. The specific problem under discussion was a lack of visibility into the structure of the tests themselves and how they ran, particularly tests created by other members of the team.
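To make the idea concrete, here is a minimal sketch of what per-pack metadata along these lines might look like. The field names, values and helper function are all invented for illustration; the post does not specify the actual format the team used.

```python
# Hypothetical per-pack metadata: purpose, scenario covered, and steps.
# All names and values here are illustrative, not the team's real schema.
test_pack_metadata = {
    "purpose": "Verify date-range queries against an imported customer archive",
    "covers": "Acceptance scenario: query results filtered by ingest date",
    "steps": [
        {"step": 1, "description": "Import source data into the archive"},
        {"step": 2, "description": "Run the date-range query"},
        {"step": 3, "description": "Compare results against expected output"},
    ],
}


def summarise(metadata):
    """Return a one-line summary of a pack's purpose and step count."""
    return f"{metadata['purpose']} ({len(metadata['steps'])} steps)"
```

Metadata like this answers "why does this test exist?" at a glance; the gap it leaves, as described above, is how the test actually runs.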

More than one way to...

All of our tests are grouped under the top-level element of an archive, which usually relates either to a set of customer example test data or to a data set developed specifically to exercise a particular feature set. Under the archive are test packs, which generally relate to specific scenarios (the equivalent level to a FIT page or a "Given, When, Then" scenario). Although the test packs were structured in a well understood way, differing requirements resulted in significant differences in how individual test archives, and the packs within them, were structured.
  • Some archives are built from source data up front and then queried; others change through the course of test execution
  • Some test packs depend on the execution of earlier packs; others can be run in isolation
  • Some test packs build their data from scratch, some restore from a backup, and some have no archive data at all
  • Some tests run each pack once; others iterate through the same packs many times, e.g. to model behaviour as archives scale in size
We'd made attempts to document each archive in a free-text readme file, but I felt this was cumbersome and wasn't delivering the information that testers needed in a consistent, concise form.


Around the same time I read this post by Markus Gärtner on test patterns. I hadn't previously considered using patterns to document test cases but, on reflection, it made real sense to harness them to document our test pack design.

Taking advantage of a train journey to London, I examined our existing tests, identifying common structures that could be documented as distinct patterns of test execution. I drew up graphical representations of the initial patterns I identified, then examined the test archives in more detail to identify variations on these patterns around data setup methods and test initialisation. I came up with about 5 core patterns and around 12 variations of these, which I felt was enough to provide value without excessive differentiation: if each archive had its own pattern there would be little point in the exercise. To test the patterns I tagged the test archives with them to see how well they fit. I added the graphical representations of the patterns to the Wiki before presenting the approach to the rest of the team for feedback.
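A tagging exercise of this kind can be sketched in a few lines. The pattern names, archive names and the ambiguity check below are all hypothetical; the post does not name the actual patterns identified.

```python
# Hypothetical sketch of tagging test archives with execution patterns.
# Pattern and archive names are invented for illustration only.
CORE_PATTERNS = {
    "static-query",       # data built up front, then queried
    "evolving-data",      # data changes during test execution
    "restore-and-run",    # archive restored from a backup before running
    "build-from-scratch", # packs build the data themselves
    "iterated-scaling",   # same packs run repeatedly as the archive grows
}

# Each archive is tagged with the pattern(s) that describe how it runs.
archive_tags = {
    "customer_a_import": {"static-query"},
    "scaling_benchmark": {"iterated-scaling"},
    "mixed_usage": {"static-query", "evolving-data"},  # multiple tags
}


def ambiguous_archives(tags):
    """Archives tagged with more than one pattern: one archive serving
    multiple purposes, and hence a candidate for refactoring."""
    return sorted(name for name, patterns in tags.items() if len(patterns) > 1)
```

The check for multiply-tagged archives mirrors one of the benefits described below: an archive that fits several patterns at once is usually doing more than one job.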

Although this process is in its early stages, the ideas have been well received by the team and we have already seen some significant benefits:
  • A pattern can speak a thousand words: tagging a test archive with a specific pattern yields immediate insight into the layout, setup and mode of execution of the tests, without the need to read extensive notes or examine the structure in detail.
  • Patterns drive good test design: although this was not the primary driver behind our use of patterns, they provide a template for implementation, so understanding effective patterns helps to drive good test design.
  • Patterns identify areas to refactor: while examining existing tests for design patterns I soon realised that some patterns resulted in far cleaner, more maintainable tests than others. Tagging existing test packs with patterns has helped us to identify tests that are not in a maintainable structure, and given us a set of target patterns to aim for when refactoring.
  • Patterns identify ambiguous test usage: with some test archives we found that multiple patterns were applicable. This indicated an ambiguous test design, where one test archive was being used for multiple purposes, and again highlighted a need for refactoring to separate the distinct uses into their own test archives.
  • Patterns become more effective as test functionality becomes more complex: in the past we had suffered from testers using the flexible capabilities of the test harness in inconsistent ways, which made maintenance more difficult; as test structures became more complex this problem was magnified. As we have recently extended our test harnesses to cover more parallel and multi-server processing, patterns have provided a means of introducing the new functionality with recommended implementation structures. It was far easier to communicate the new capabilities of the harnesses in the context of the new patterns supported than it would have been to explain the features in isolation.

As I mentioned, we are very much in the early stages of adopting this approach, and I'm sure there are more benefits to uncover and more effective ways to use the power of patterns. Even so, the benefits of creating even a very simple set of bespoke patterns relevant to our automation context were immediately apparent. If you have a rich automated test infrastructure involving a variety of test structures with different modes of implementation, it is a technique that I would certainly recommend.

Copyright (c) Adam Knight 2011

