Monday 21 January 2013

Fractal Exploratory Testing

Fractal Circles

 

Last summer I was lucky enough to have an undergraduate, Tom, on an internship placement working on a testing project for my team; I wrote up a summary of the experience here. Tom contacted me last week, sending me an article on pair testing that he'd encountered through his studies and thought I might find interesting. The contact jogged my memory on something that came up during Tom's time with us which I'd meant to write about at the time and forgotten, so here it is, hopefully better late than never.

A simple graphic

During Tom's internship I spent some time introducing him to the approaches to testing that I have adopted in the company and the schools of thought behind them. In one session I found myself trying to describe the process of exploratory testing (ET), and specifically to explain the difference between ET and the more traditional scripted approach that Tom was familiar with.

I drew a horizontal line across a whiteboard to represent a product's feature set. I explained that, with a given set of functionality, a scripted approach would traditionally be driven by a traceability matrix that ensured test cases covered each of the functional requirements. In the simplest representation of the approach the result would be a uniform distribution of test cases across the feature set, which I represented as another horizontal line above the first. We then discussed the fact that specific areas of the system may have greater complexity in the underlying modules, a weaker design or simply poorer code, resulting in a higher risk of problems in those areas. I shaded these on the first line and explained how these details might only become apparent at the point of testing, so test cases designed in advance may not provide any greater focus on these areas than on any others.

I went on to explain the concept of exploratory charters and the idea of executing tests, learning from the results and applying further tests based on the information obtained. As I talked I drew a second line which undulated across the board, with sharp peaks of activity as we encountered the shaded areas of higher risk. Tom quickly appreciated from this simple description how the approach allowed the tester to focus their attention around problem areas as they became apparent.
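As a minimal sketch of that whiteboard picture (my own toy model, with invented feature names and risk numbers rather than anything from a real project), the loop below spends a fixed budget of tests evenly at first, but queues extra follow-up tests around any feature that exposes a problem, so the effort ends up peaking around the riskier areas:

import random

# Toy model: a fixed test budget, with follow-up tests queued wherever a test
# exposes a problem, so effort 'peaks' around the shaded high-risk regions.
features = ["f1", "f2", "f3", "f4", "f5"]
hidden_risk = {"f2": 0.6, "f4": 0.4}        # unknown up front; only revealed by testing
effort = {f: 0 for f in features}

budget = 50
queue = list(features) * 2                  # a roughly even first pass
random.seed(1)

while budget > 0 and queue:
    feature = queue.pop(0)
    effort[feature] += 1
    budget -= 1
    if random.random() < hidden_risk.get(feature, 0.05):
        queue = [feature, feature] + queue  # learn, then focus further tests here

print(effort)                               # effort concentrates around the risky features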

The beauty of recursion

Having used this simple graphic explanation of some of the benefits of ET, I then went on to discuss testing charters. We discussed how each charter encapsulated an exploration of a functional area or failure type in itself, and how each could recursively generate further, more focussed testing activity. As we talked I was reminded of a great talk by Elizabeth Keogh at an Agile Testing Exchange that I'd attended. Liz presented the idea of 'fractal BDD', in which she described each story and epic as a smaller copy of a larger development activity, with BDD essentially comprising the same shaped process applied at many hierarchical levels. I felt that ET could be described in a similar way, and as Tom and I discussed the idea we elaborated our simple model.

Instead of a line, now consider a circle of functionality. Much as with the line, a scripted approach describes uniform coverage of the circle, with each test case a point on the circumference. With an ET approach, however, discovering a flaw in the circle prompts us to expand a new set of tests around it. This mini exploration results in more targeted testing around that feature area, which we can represent as a smaller circle branching off the original. Now imagine finding a more subtle flaw in that feature area; again you may expand your testing around it to assess the problem and the associated risks. In this way our uniform circle evolves into a fractal pattern, with branches of testing activity blossoming out from the original level as issues are identified and inconsistencies investigated. The recursive explorations may be smaller or larger than the first, just as a discovery in a simple retest can expose the need for a much larger investigation.
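As a minimal sketch of that recursion (my own illustration with invented charter names and findings, not a tool or process from the post), each charter below explores an area, and anything surprising it turns up spawns a smaller, more focused child charter, giving a branching tree of explorations rather than a single circle:

from dataclasses import dataclass, field

@dataclass
class Charter:
    mission: str
    children: list = field(default_factory=list)

    def explore(self, findings):
        # 'findings' stands in for whatever the exploration actually turns up.
        for issue in findings.get(self.mission, []):
            child = Charter(mission=f"{self.mission} / {issue}")
            self.children.append(child)
            child.explore(findings)         # each discovery can recurse further

findings = {
    "import feature": ["date parsing flaw"],
    "import feature / date parsing flaw": ["timezone handling", "leap years"],
}

root = Charter("import feature")
root.explore(findings)

def show(charter, depth=0):
    print("  " * depth + charter.mission)
    for child in charter.children:
        show(child, depth + 1)

show(root)                                  # prints the fractal-like tree of charters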

What I particularly like about this idea is that, in addition to providing an elegant way to contrast the natures of exploratory and scripted testing, it helps to explain some of the concerns people have with an exploratory testing approach. A uniform circle of test case 'points' is predictable. It is measurable in terms of circumference. As we complete the circle we can easily describe our progress and predict when we might finish. A fractal is more complex. We cannot measure the circumference, and it is harder to predict the remaining effort because we cannot know the scale of recursive activity that could be created at any point. All we can do is describe what we have done so far and some starting points for future activity in our remaining charters. This is understandably less appealing from a planning perspective. What we do provide, however, is a much richer and more elaborate level of detail through the process.
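To put a rough number on that contrast (a back-of-envelope illustration of my own, not a measurement of anything real): a circle's circumference is fixed and known up front, whereas a Koch-style fractal boundary grows by a factor of 4/3 with every level of recursion, so its length cannot be bounded without knowing how deep the recursion will go.

import math

radius = 1.0
print("circle circumference:", 2 * math.pi * radius)    # fixed, known in advance

perimeter = 3.0                    # Koch snowflake: start from a triangle with unit sides
for level in range(1, 6):
    perimeter *= 4 / 3             # each recursion level multiplies the boundary by 4/3
    print(f"fractal boundary after {level} level(s): {perimeter:.3f}")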

A usable model

I really liked this idea of fractal exploratory testing as compared to the uniform circle of pre-scripted testing; however, I wasn't sure how well my enthusiastic ramblings on the subject had translated to Tom. I was, therefore, pleasantly surprised when Tom used this exact model to describe exploratory testing during his 'Agile Testing' presentation to my team. He presented the idea in his own words in a manner that both made sense and reassured me that he had gained a good understanding of the key benefits. To that end, the concept has served its purpose for me (although I shall certainly use the idea again), so here it is for general consumption.

References

Main Image: http://www.goodfon.com/wallpaper/312410.html
Other images are my own - can you tell?
Phil Kirkham said...

Interesting idea - and nice graphics
One comment though - you present scripted testing as a line with a uniform distribution. Not really true; if you're being a good tester and following a risk-based approach then the distribution of test cases should be weighted towards the areas of high risk?

Adam Knight said...

Phil,

Thanks for taking the time to comment. You make a very valid point, and one danger with simple models is that they fail to represent the nuances of different approaches. While it is possible to adopt a risk-based approach to scripted test design, I would suggest that this is not implicit in a scripted approach. A scripted approach that is driven primarily by a requirements traceability matrix, as I've suggested, has no implicit recognition of risk. As you suggest, it is the responsibility of the good tester to adopt a risk-based approach and bring risk into consideration in this model. Whilst identification of business risk can be included in the specification process, design risks often do not become apparent until much later, as the software is developed. For me the primary sources of information on risk are:
- Feedback from customers based on their business needs
- Feedback from developers based on the specification
- Feedback from developers based on writing the code
- Feedback from the tester from executing tests
The first should be a primary concern for any testing approach. The second is not implicit in either approach; however, a good tester will obtain and use these sources of information. The third is not implicit in either approach; however, this information is available at test design time in an exploratory approach, where it may not be in a scripted model in which test design and software coding occur in parallel. The last, however, is implicit in an exploratory model and is specifically unavailable in a scripted approach. It is this feedback loop of designing further tests based on the information obtained from previous tests that implicitly makes an exploratory approach more risk-focussed than a scripted one.

I hope that makes sense.
Cheers

Adam

Phil Kirkham said...

Excellent - and very full - answer, thanks for the reply

Anonymous said...

That's an awesome model... I think.
I agree that the linear representation of scripted testing might be a somewhat extreme case, but as long as one keeps that in mind it helps to use simple models when explaining things.
I really like how the additional circles show how identifying various risks/issues leads to a myriad of new tests/risks/issues.
As you mention, this also reflects the problem with test estimation. I am still looking for a good way to explain this when asked for the obligatory exact estimate at the beginning of a project; this model might be quite helpful. I hope to try it out on some people over the next months.

Thanks :-)
Geir

Adam Knight said...

Thanks for commenting Geir,

As with Phil, you make a good point about the extreme case of a uniform distribution of scripted tests; I hope my answer to him helps. I may tweak the wording of the post to reflect the fact that this is a very simple model for the purposes of demonstration only.

Adam

Esko Arajärvi said...

I like the idea of visualizing the focusing of testing with a fractal shape. The Mandelbrot set lines are nice for visualizing how new test cases spawn from problems.

I was also thinking of visualizing test coverage in the same way. The shape should then be something that is contained in itself, so that the whole could represent the application. The best that I have found so far is the Sierpiński tetrahedron:

http://www.flickr.com/photos/davidpaulrosser/8101974127/in/photostream/

which also nicely visualizes all the parts of the software that were not tested. :-)

Anonymous said...

Hi Adam. While Phil and Geir are correct regarding the script/risk correlation, and we testers are pedants, it's fairly obvious your 'graphs' are low fidelity and intended to be consumed as such.

For what it's worth - those who've been testing for a while (10+ years) know the value of ET in Agile. Representing it as a fractal is the most accurate and easily absorbed visualisation I've seen to date. Kudos.

- Shabu

Adam Knight said...

Shabu,

Thanks for the kind words.

One of the most useful things about writing a blog is the ability to get feedback on ideas from the testing community, either in simple terms of the amount of traffic/sharing on a post or through more direct comments. Any constructive criticism, such as Phil and Geir's comments, is very welcome, as are comments such as yours from experienced testers who appreciate an idea I've presented.

Thanks again

Adam

Adam Knight said...

Esko,

I originally had the Mandelbrot set visualisation in mind for the main imagery on this post; however, I felt that the symmetry inherent in mathematically generated fractals such as this undermined the unpredictable, chaotic nature of exploratory testing and test coverage. As Phil and Geir have both rightly suggested, we should be careful to ensure that models that present testing in a uniform or predictable way are understood as being simplifications. This is particularly relevant if using a symmetrical pattern like the Sierpiński tetrahedron to represent test coverage.

Thanks for commenting,

Adam.

James Thomas said...

Lovely, Adam, especially the comments. I just arrived in a place nearby via a different route.

Ashwin Maru said...

Beautiful & quite representative model.

Anonymous said...

1. Not all projects could be represented as a circle; where would the circle start?
2. With this approach you can't draw up estimates upfront, which are required for budgeting.
3. It may suit product development, but may not be economical for service companies.
