Monday, 21 January 2013

Fractal Exploratory Testing

[Image: Fractal Circles]

Last summer I was lucky enough to have an undergraduate, Tom, on an internship placement working on a testing project for my team; I wrote up a summary of the experience here. Tom contacted me last week with an interesting article on pair testing that he'd encountered through his studies and thought I might enjoy. The contact jogged my memory on something that came up during Tom's time with us that I'd meant to write about at the time and forgotten, so here it is, hopefully better late than never.

A simple graphic

During Tom's internship I spent some time introducing him to the approaches to testing that I have adopted in the company and the schools of thought behind them. In one session I found myself trying to describe the process of exploratory testing (ET), and specifically to explain the difference between ET and the more traditional scripted approach that Tom was familiar with.

I drew a horizontal line across a whiteboard to represent a product's feature set. I explained that, for a given set of functionality, a scripted approach would traditionally be driven by a traceability matrix that ensured test cases covered each of the functional requirements. In the simplest representation of the approach the result would be a uniform distribution of test cases across the feature set, which I represented as another horizontal line above the first. We then discussed the fact that specific areas of the system may have greater complexity in the underlying modules, a weaker design or simply poorer code, any of which would result in a higher risk of problems in that area. I shaded these on the first line and explained how these details might only become apparent at the point of testing, so test cases designed in advance may not provide any greater focus on these areas than on any others.

I went on to explain the concept of exploratory charters and the idea of executing tests, learning from the results and applying further tests based on the information obtained. As I talked I drew a second line which undulated across the board, with sharp peaks of activity as we encountered the shaded areas of higher risk. Tom quickly appreciated from this simple description how this approach allowed the tester to focus their attention around problem areas as they became apparent.

The beauty of recursion

Having used this simple graphic to explain some of the benefits of ET, I went on to discuss testing charters. We discussed how each charter encapsulated an exploration of a functional area or failure type in itself, and how each could recursively generate further, more focussed testing activity. As we talked I was reminded of a great talk by Elizabeth Keogh at an Agile Testing Exchange that I'd attended. Liz presented the idea of 'fractal BDD', in which she described each story and epic as a smaller copy of a larger development activity, with BDD essentially comprising the same shaped process applied at many hierarchical levels. I felt that ET could be described in a similar way, and as Tom and I discussed the idea we elaborated our simple model.

Instead of a line, now consider a circle of functionality. Much as with the line, a scripted approach describes a uniform coverage of the circle, with each test case a point on the circumference. With an ET approach, however, as each flaw in the circle is discovered we expand a new set of tests around it. This mini exploration results in more targeted testing around that feature area, which we can represent as a smaller circle branching off the original. Now imagine finding a more subtle flaw in that feature area; again you may expand your testing around it to assess the problem and the associated risks. In this way our uniform circle evolves into a fractal pattern, with branches of testing activity blossoming out from the original level as issues are identified and inconsistencies investigated. The recursive explorations may be smaller or larger than the first, just as a discovery in a simple retest can expose the need for a much larger investigation.
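To make the recursion concrete, here's a rough sketch in Python of how one might model charters spawning further charters; the names and structure are purely illustrative, not a real tool:

```python
# A minimal sketch of fractal exploratory testing: each charter explores an
# area of functionality, and every issue found seeds a deeper, more focussed
# charter. All names here are hypothetical illustrations.

class Charter:
    def __init__(self, name, follow_ups=None):
        self.name = name
        # follow-up charters this exploration would surface
        self.follow_ups = follow_ups or []

    def run(self):
        """Execute the charter's tests and yield any follow-up charters."""
        return self.follow_ups


def explore(charter, depth=0):
    print("  " * depth + "charter: " + charter.name)
    for follow_up in charter.run():
        explore(follow_up, depth + 1)  # recurse into the flaw just found


# The top-level circle of functionality, with one flaw that itself
# conceals a more subtle flaw - the branching of the fractal.
root = Charter("import feature", [
    Charter("import of malformed files", [
        Charter("encoding handling in the file parser"),
    ]),
    Charter("import performance under load"),
])
explore(root)
```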

What I particularly like about this idea is that, as well as providing an elegant way to contrast the natures of exploratory and scripted testing, it helps to explain some of the difficulties people have with an exploratory testing approach. A uniform circle of test case 'points' is predictable. It is measurable in terms of circumference. As we complete the circle we can easily describe our progress and predict when we might finish. A fractal is more complex. We cannot measure the circumference, and it is harder to predict the remaining effort as we cannot know the scale of recursive activity that could be spawned at any point. All we can do is describe what we have done so far and some starting points for future activity in our remaining charters. This is understandably less appealing from a planning perspective. What we do provide, however, is a much richer and more elaborate level of detail through the process.
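As an aside, the classic Koch snowflake gives a feel for why the circumference becomes unmeasurable: each level of recursion replaces every edge with four edges a third of the length, multiplying the perimeter by 4/3, so it grows without bound. A quick sketch, purely by way of analogy:

```python
# The Koch snowflake as an analogy for the unmeasurable fractal perimeter:
# each recursion replaces every edge with 4 edges of 1/3 the length,
# multiplying the total perimeter by 4/3. It never converges.
perimeter = 3.0  # start with a triangle of unit sides
for level in range(10):
    perimeter *= 4 / 3
    print(f"level {level + 1}: perimeter = {perimeter:.2f}")
```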

A usable model

I really liked this idea of fractal exploratory testing as compared to the uniform circle of pre-scripted testing; however, I wasn't sure how well my enthusiastic ramblings on the subject had translated to Tom. I was therefore pleasantly surprised when Tom used this exact model to describe exploratory testing during his 'Agile Testing' presentation to my team. He presented the idea in his own words in a manner that both made sense and reassured me that he had gained a good understanding of the key benefits. To that extent the concept has served its purpose for me (although I shall certainly use the idea again), so here it is for general consumption.

References

Main Image: http://www.goodfon.com/wallpaper/312410.html
Other images are my own - can you tell?

Wednesday, 16 January 2013

Automated Testing Fax Machine



A couple of recent events have demonstrated to me that different organisations approach the subject of automated testing** with very different mindsets from my own.

Firstly, I interviewed a test candidate a couple of weeks ago who explained to me the automation approach at his employer. They were using a third-party tool which drove the GUI by recording click locations at specific points on the screen, and validated results by comparing bitmaps of the resulting screens.

I also recently visited a test automation team in another company to share testing ideas. While I was there we compared the test automation approaches in our organisations. Being a large organisation, they had standardised their approach around the former Mercury tools, now part of the HP testing suite. The approach was split along the traditional lines of performance testing and functional automation. As we discussed the latter, the team described a clearly defined process around the development of their automated tests. Testers working on scripted test cases in other teams would, on completion of the manual testing, submit specific test cases to the automation team to be converted into automated tests. These tests would involve driving the system through the GUI following the same steps as the 'manual' test case.
Whilst both of these processes were well defined and obviously achieving some success in the eyes of their adopters, in both cases the development of tests would start with the recording of a human interaction with the system under test, which would then be modified. It was clear that both operations were constrained by the idea that the goal of functional automation was to automatically execute scripted, formerly manual, test cases.

More than a fax machine

Being a sucker for metaphors (much to the annoyance of Phil Kirkham, I'm sure), the resounding image this idea conjures for me is that of a fax machine. This is a shining example of an application of technology constrained to the recreation of a pre-existing human activity. If we look, conversely, at the great strides made in electronic communication over the years, each has arisen through taking a step further away from the physical activity from which it originated. The speed of Twitter, the richness of Facebook and the scale of email are a far cry from what could be achieved if we limited our imagination to sending physical paper documents electronically. We now take all of these communication tools for granted, yet for many organisations the approach to functional automation is still rooted in replicating manual activities.
I'm sure that there are many explanations as to why automated test approaches are rooted in this mindset:
  • It looks easier. In the sales pitch and on the conference stand, GUI automation is an easy sell as you can apply automation directly against the existing product. This is not the case. With the bitmap-based tool described by my interview candidate, any changes to the GUI, even changes to button colour, would result in the need to re-record the test case and re-save a new set of expected bitmaps. As the tests accumulate, so does the level of rework required when changes are implemented that affect the test results. With a GUI-based tool the first test may be quicker to set up, but the overhead of adding further tests will not reduce significantly afterwards, as each new test must be recorded in the same way and can take days to create. A test harness that drives business logic through an API is likely to cost more for the initial set of tests, but sensible design should allow very easy development of further scenarios through variation and manipulation of the controlling data, which can be far easier than having to record and edit each new test (I sketch an example of this later in the post).
  • It looks cheaper. The big sell here is that you don't need programming knowledge, so you can maintain a separation of testing from 'expensive' programming disciplines. Again this is a false economy. Any saving from not needing programming input in testing operations is, I think, more than offset by the knowledge required in the specific tool itself; the level of programming input could well be lower in the long term if the test code is sufficiently isolated from the test data that drives it. Additionally you have the cost of reworking the tests more regularly as the application changes. A more involved programmatic approach may require a heavier starting investment but, designed well, the long-term maintenance costs of both test application and test data should be lower.
  • It is easier to budget. In the team that I visited, the automation engineer would complete an ROI calculation based on the time taken to develop a test against how many times it would be executed, and accept or reject the development of the case accordingly (a sketch of this style of calculation follows below). If manual test cases are perceived as having the same value as the automated execution of the same cases, it is easy to budget and calculate the ROI of automation. This is, however, a flawed idea. When a person executes a test they are observing a much wider range of criteria than an automated check, whether this is part of the explicit expected result or not. Attributing the same value to a human-executed test as to an automated comparison against a restricted set of check criteria massively undersells the value of human observation. Nevertheless it is done, as it affords an easy calculation of the return on investment in test automation. Calculating the value of a fundamentally different approach, one which acknowledges the relative strengths of automated checks and human-driven interactive testing, is much harder, and therefore that approach is harder to sell.
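For illustration, here's the rough shape of that break-even calculation as a minimal sketch, with entirely hypothetical figures; note that it bakes in exactly the flawed assumption described above, that an automated check is worth the same as a human execution:

```python
# A sketch of the simple break-even ROI calculation described above, using
# entirely hypothetical figures. It deliberately treats an automated run as
# equal in value to a manual run - the flawed assumption in question.

def break_even_runs(automation_hours, maintenance_hours_per_run,
                    manual_hours_per_run):
    """Number of executions before the automation 'pays for itself'."""
    saving_per_run = manual_hours_per_run - maintenance_hours_per_run
    return automation_hours / saving_per_run

# e.g. 24 hours to automate, 0.5 hours upkeep per run, 2 hours to run manually
print(break_even_runs(24, 0.5, 2))  # -> 16.0 runs to break even
```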

Flexibility of purpose

When implementing automation in a testing operation it pays to remember that you have flexibility of purpose in the approaches that you choose. Replicating a human's interactive execution of scripted tests through the GUI is just one of many available strategies. I'd suggest that there are usually other, more economical and robust, approaches by which the same information, and a great deal more, can be obtained. Therein lies the key point for me. We have the power and flexibility to design test automation around the information that we wish to obtain, irrespective of what can be achieved and observed by a human interacting with a user interface. If that information relates to the behaviour of the GUI when presented with exact inputs, then this approach may well be appropriate. If the information being targeted is the behaviour of the business logic when presented with specific inputs, then driving the GUI is unlikely to be the best way to obtain it.

In the case of my visit, I observed that the GUI application was one of three applications interacting with the business logic in question, which indicated to me the presence of a business API against which they could probably execute a lot of their testing. This would provide a much more stable and controllable interface than a GUI, and the creation and maintenance of tests would almost certainly be a lot easier given an appropriate approach, such as a keyword- or script-driven structure. As it was, the creation of a new test was a multi-day operation of recording and modification, with the result that a very small percentage of tests passed the ROI review. I provided some feedback to the team manager on what I had observed and the possibilities of extending their automation away from the GUI. He was very receptive and the team were enthusiastic to look at different ideas, and I'm looking forward to discussing this with them in the future as they consider their options.
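By way of illustration, here's a minimal sketch of what a data-driven test against a business API might look like; the API, names and figures are my own hypothetical stand-ins, not the team's actual system:

```python
# A minimal sketch of a data-driven harness exercising a hypothetical
# business API directly, bypassing the GUI. New scenarios are added by
# extending the data table rather than recording a new GUI session.

import unittest

# stand-in for the real business API - purely hypothetical
def calculate_discount(order_total, customer_tier):
    rates = {"standard": 0.0, "silver": 0.05, "gold": 0.10}
    return round(order_total * rates[customer_tier], 2)

# each row is a new test: inputs and the expected result
SCENARIOS = [
    (100.00, "standard", 0.00),
    (100.00, "silver", 5.00),
    (250.00, "gold", 25.00),
]

class DiscountTests(unittest.TestCase):
    def test_scenarios(self):
        for total, tier, expected in SCENARIOS:
            with self.subTest(total=total, tier=tier):
                self.assertEqual(calculate_discount(total, tier), expected)

if __name__ == "__main__":
    unittest.main()
```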

Questioning the practice of automating through the GUI is nothing new, and I know there are many testers out there promoting the idea of automating below the GUI against the business logic. I'm not advocating that approach in particular, as the investment may be equally inappropriate depending on the context. What I am saying is that, when looking at test automation, the options available, both in the tools chosen and in the testability of the application itself, extend well beyond what is possible for a human through the existing user interfaces. What I would advocate is using your imagination as to what is possible, and driving testing activities, and the testability of the product, towards the information that you want to obtain.
 
** Whilst I know that some folks regard the terms 'automation' and 'automated testing' as ambiguous or misleading, here I am using these terms based on a generally accepted understanding of their meaning. At some point I may try to tackle the ambiguity of these terms and the grey area of manual vs automated, but not in this post.
