Monday 14 July 2014

A Map for Testability

Here Be Dragons

Chris Simms (@kinofrost) asked me on Twitter last week whether I'd ever written anything about raising awareness of testability using a mind map. Apparently Chris had a vague recollection of me mentioning this. It is certainly something that I did; however, I couldn't remember where I had discussed it. It turns out I've not posted about it, which is surprising as it is a good example of using a technique in context to address a testing challenge. As I mentioned in my Conference List, I have a testability target for this year, so it feels like an opportune moment to write about raising awareness of testability and an approach to doing so that I found effective.

Promoting Testability

As I wrote in my post Putting your Testability Socks On, there is a wealth of benefits to building testability into your software. Given this, it is somewhat surprising that many folks working in software don't consider the idea of testability. In environments where this is the case, getting testability changes incorporated into the product is a frustrating task, as these changes are inevitably perceived as lower priority than more marketable features. As Michael Bolton stated in his recent post, testers should be able to ask for testability in the products they are testing. The challenge comes in promoting the need for testability, particularly in products where it was not considered during early development. This is a responsibility which will, in all likelihood, fall on the tester.

A great way that I found, almost by accident, to introduce the idea of testability in my company was to run a group session for the whole department on the subject. I say by accident because I'd initially prepared the talk for a UKTMF quarterly meeting, and took the opportunity to run a session on the subject internally at a company off-site meeting by way of a rehearsal for that talk. The internal presentation was well received. It prompted some excellent discussions and really helped to raise awareness of the concept of software testability across the development team.

The Way Through the Woods

Even with a good understanding of testability in the organisation it is not always plain sailing. As I mentioned in my previous post, developments that proceed without the involvement of testers are most at risk of lacking the core qualities of testability. It is hard to know how to tackle situations, such as the one I was facing, where a lack of these qualities is actually presenting risks to the software. The job title says 'software tester', so as long as we have software we can test, right?

On that occasion I took a somewhat unconventional approach to raising my concerns with the management team and presenting the problems faced in attempting to test the software: I created a mind map. Anyone who has read To Mind Map or not to Mind Map will know that I don't tend to use mind maps to present information to others. In this case I generated the map for personal use, to break down a complex problem, and the result turned out to be an appropriate format for demonstrating to others the areas of the system that were at risk due to testability problems.

The top-level structure of the map was oriented around the various interfaces or modes of operation of the software features. This orientation was a critical element in the map's effectiveness, as it naturally focussed the map around the different types of testability problem that we were experiencing. The top-level groupings included command line tools, background service processes, installation/static configuration, and dynamic management operations such as adding or removing servers:

  • The installation/static configuration areas suffered from controllability problems due to the difficulty of automating and harnessing them.
  • The asynchronous background processes suffered from a lack of controllability and visibility around which operations were running at any given time.
  • The dynamic management operations lacked simplicity and stability due to workflows that varied inconsistently depending on the configuration.
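
For illustration only, the shape of that breakdown can be captured in a few lines of code. The sketch below is my reconstruction rather than the actual map: the group names and concerns come from the list above, but the representation and field names are assumptions made for this example.

    # A minimal sketch of the map's top-level structure: each feature group
    # is paired with the testability qualities it appeared to be lacking.
    # Group names come from the post; the exact representation is illustrative.
    testability_map = {
        "command line tools": [],
        "background service processes": ["controllability", "visibility"],
        "installation/static configuration": ["controllability"],
        "dynamic management operations": ["simplicity", "stability"],
    }

    # A quick summary of where testing effort is most constrained.
    for group, concerns in testability_map.items():
        if concerns:
            print(f"{group}: lacking {', '.join(concerns)}")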

One of the key benefits of mind maps, as I presented in my previous post on the subject, is that they allow you to break down complexity. After creating the map I personally had a much clearer understanding of the specific issues that affected our ability to test. Armed with this knowledge I was in a much better position to explain my concerns to the product owners, so the original purpose of the map had been served.

Presenting the Right Image

As I said in my previous post on mind maps, I don't tend to use them to present information to others, but if they are to be used for this purpose then they need to be developed with that in mind. In this case I felt that the map provided a useful means of developing a common understanding between the interested parties, so I tailored my personal map into a format suitable for sharing. I used two distinct sets of the standard Xmind icons: one representing the current state of each feature group in terms of existing test knowledge and harnessing, and the second representing the testability status of that area.
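
To make that concrete, a legend for such a scheme might look something like the sketch below. The icon names and the meanings attached to them are invented examples of my own, not the markers or key from the original map; they simply show the two independent dimensions being tracked against each feature group.

    # Hypothetical legend for the two icon sets. The real map used standard
    # Xmind markers; these names and meanings are invented for illustration.
    coverage_icons = {  # existing test knowledge and harnessing
        "tick": "tests exist and are harnessed",
        "half-tick": "partial, largely manual coverage",
        "question mark": "little or no test knowledge",
    }
    testability_icons = {  # testability status of the area
        "green flag": "testable as things stand",
        "amber flag": "testable only with significant effort",
        "red flag": "blocked by controllability or visibility problems",
    }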

Mind Map Key

The iconography in the map provided a really clear representation of the problem areas.

Mind Map

Driving the conversation around the map helped to prompt some difficult decisions about where to prioritise both testing and coding efforts. I won't claim that all of the testability problems were resolved as a result. What I did achieve was to provide clear information on the status of the product and the limitations that were imposed, as a consequence, on the information we could obtain from our testing.

Highlighting the testability limitations of a system in this way opens up the possibility of getting work scheduled to address those shortfalls. It is difficult to prioritise testability work without an understanding amongst the decision makers of the impact of these limitations on testing, and on development in general.

In an agile context such as mine, legacy testability issues can be added to the backlog as user stories. These may not get to the top of the priority list, but until they do there will at least be an appreciation that the testing of that product or feature will be limited in comparison to other areas. What's more, it is far more effective to reference explicit backlog items, rather than looser desirable characteristics, when trying to get testability work prioritised.
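
To give an invented example, based on the visibility gap in the asynchronous processing described earlier, such a story might read: "As a tester, I want the background service to report which asynchronous operations are currently running, so that I can observe the state of the system while testing." A story in this form can sit in the same prioritisation conversations as any other backlog item.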

Flexibility

Hopefully this post has prompted some ideas on how to raise awareness of testability, both proactively and in light of problems that inhibit your testing. As well as this, I think the key lesson here is about coming up with the most appropriate way to present information to the business. In this case, for me, a mind map worked well; in all likelihood a system diagram would have been just as effective. Some of the customers that I work with in Japan use infographic-style diagrams to great effect to represent the location of problems within a system in a format that works across language boundaries - something similar could also have been very effective here.

Testing is all about presenting information and raising awareness. The scenarios that we face and the nature of the information that we need to convey will change, and it pays to have a range of options at your disposal to present information in a manner that you feel will best get it across. There's absolutely no reason why we should restrict these skills to representing issues that affect the business or end user. We should equally be using our techniques to represent issues that affect us as testers, and testability is one area where there should be no need to suffer in silence.

References

Both my previous post Putting your Testability Socks On and Michael Bolton's recent post on asking for testability contain good starting references for further research on testability.

Jeff Lucas said...

Adam - Thanks for this post! On some projects, I was forced to implement GUI test automation (on Java enterprise apps). In each case, I successfully lobbied to have accessibility standards made a priority which made the object identification and usage much easier with tools (and with users). That made developing resilient automation practical, which had major testability impact for releases. Strategic testability improvements can have major knock-on effects for years after implementation.
