Tuesday, 21 October 2014

The Workaround Denier

Your software is a problem, to someone. It may be uncomfortable to accept, but for somebody who uses your software there will be behaviours in it that inhibit their ability to complete the task they are trying to achieve. Most of the time people don't notice these things, as the shortcomings fall within the accepted limitations of computer technology (the word processor doesn't type the words as I think of them; the online shop doesn't let me try the goods before ordering). In some cases the limitation falls outside of accepted technological inadequacy, with the most likely result being a mildly disgruntled user reluctantly changing their behaviour or expectations. In cases where the difference is more profound, but not sufficient for them to move to another product, the person involved may attempt to work around the limitation. The result in such situations is that the software's features are used in a manner for which they have not been designed or tested.

As a tester I find myself slipping towards hypocrisy on the subject of workarounds. Whilst I am happy to consider any (legal) workarounds at my disposal to achieve my goals with the software of others, when testing my own software my inclination is to reject any use of the system outside the scope of operation for which it has been designed and documented. I think this position of 'workaround denial' is something that will be familiar to many testers. Any use of the product that sits outside our understanding of how it will be used is tantamount to cheating on the part of the user. How are we expected to test for situations that we didn't even know were desirable or practicable uses of the product?

An example last week served as a powerful counterargument to such a position, demonstrating how important some workarounds are to our customers. It also reminded me of some of the interesting and sometimes amusing workarounds that I have encountered, both in software that I use and in software that I help to produce.

The software I use

I am constantly having to find ways around problems when one of the many software programs that I use on a daily basis doesn't quite do what I want. In some cases the results of trying to work around my problems are a revelation. As I wrote about in this post on great flexible free tools, the features that you want might be right under your nose. In other cases the actions taken to achieve my goals are somewhat more convoluted and probably sit well outside the original intentions of the development team when designing the software.

  • The support tracking system that I implemented for our support teams does not support the ability for our own implementation consultants to raise or track tickets on behalf of their clients. It will assume that any response from our company domain is a support agent response and forward it to the address of the owner of the ticket. To avoid leakage we restrict the incoming mails to the support mailbox on the Exchange account, but this does limit the possibility of including our own implementation consultants as 'proxy customers' to help progress investigations into customer problems.
  • The bug tracking system that I currently use has poor support for branching. Given the nature of our products and implementations we are maintaining a number of release branches of the software at any one time and sometimes need to apply a fix back to a previous version on which the customer is experiencing a problem. With no inherent support for branching in our tracker I've tried a number of workarounds, including at times directly updating the backend database, all with limited success.
  • As I wrote about in this post, I was intending to replace the above tracking system with an alternative this year. Interestingly, progress on this project has stalled on exactly the same limitation: poor branching support in the tool we had initially targeted to move to, Jira. The advice within the Jira community suggested that people were resorting to some less than optimal workarounds to tackle this omission.
  • Outlook as an email client has serious workflow limitations for the way that I want to manage a large volume of email threads. I've found the need to write a series of custom macros to support the volume and nature of emails that I need to process on a daily basis. These include a popup for adding custom follow-up tags, so I can see not only that a follow-up is required but also a brief note of what I need to do; a macro to pull this flag onto more recent messages in the conversation so that it displays on the collapsed conversation; and the ability to move one or more mails from my inbox to the folder in which previous mails in that conversation are stored.
  • The new car stereo that I purchased has a USB interface to allow you to plug in a USB stick of mp3 files to play. On first attempting to use this I found that all of the tracks in each album were playing in alphabetical order rather than the order in which the songs appear on the album. A Google search revealed that the tracks were actually being played in the order that they were added to the FAT32 file system on the stick. Renaming the files using a neat piece of free software and re-copying them to the stick resolved the issue (the sketch after this list illustrates the idea). On reading various forums it appears that this was a common problem, but the existence of workarounds using other tools to sort the files was apparently sufficient for the manufacturer not to feel the need to change the behaviour.
  • The continuous integration tool Jenkins has a behaviour whereby it will 'helpfully' combine identical queued jobs for you. This has proved enough of a problem for enough people that it prompted the creation of a plug-in to add a random parameter to Jenkins jobs to prevent this from happening, which we have installed.
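
The car stereo workaround above is simple enough to sketch in code. The following is only an illustration of the idea, not the free tool I actually used: the paths, and the assumption that each filename starts with its track number, are hypothetical. The point is that copying the files one at a time in track order means the FAT32 directory entries, and therefore the stereo's play order, follow the album order.

    import re
    import shutil
    from pathlib import Path

    SOURCE = Path("~/Music/SomeAlbum").expanduser()  # hypothetical source folder
    TARGET = Path("/media/usb/SomeAlbum")            # hypothetical USB mount point

    def track_number(path: Path) -> int:
        """Pull a leading track number out of the filename, defaulting to 0."""
        match = re.match(r"(\d+)", path.stem)
        return int(match.group(1)) if match else 0

    TARGET.mkdir(parents=True, exist_ok=True)

    # Copy in track order with a zero-padded prefix; writing the files in
    # sequence creates the FAT32 directory entries in play order.
    for track in sorted(SOURCE.glob("*.mp3"), key=track_number):
        shutil.copy2(track, TARGET / f"{track_number(track):02d} - {track.name}")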

The software I test

Given that my current product is a very generic data storage system with API and command line interfaces, it is natural that folks working on implementations will look to navigate around any problems using the tools at their disposal. Even on previous systems I've encountered some interesting attempts to negotiate the things that impede them in their work.

  • On a financial point-of-sale system that I used to work on it was a requirement that the salesperson completed a fresh questionnaire with the customer each time a product was proposed. Results of previous questionnaires were inaccessible. Much of the information would not have changed, and many agents tried to circumvent the disabling of previous questionnaires to save time on filling in new ones. The result was an arms race between the salespeople trying to find ways to reuse old proposal information, and the programmers trying to lock them out.
  • On a marketing system I worked on we supported the ability to create and store marketing campaigns. One customer used this same feature to allow all of their users to create and store customer lists. This unique usage resulted in a much higher level of concurrent use than the tool was designed for.
  • The compression algorithms and query optimisations of my current system require that data be imported in batches, ideally a million records or more, and be sorted so that irrelevant data can easily be eliminated when querying (see the sketch after this list). We have had some early implementations where the end customer or partner has put in place an infrastructure to achieve this, only for field implementation teams to try to reduce latency by changing the default settings on their systems to import data in much smaller batches of just a few records.
  • One of our customers had an issue with a script of ours that added some environment variables for our software session. It was conflicting with one of their own variables for their software, so they edited our script.
  • One of our customers uses a post installation script to dynamically alter the configuration of query nodes within a cluster from the defaults.
  • In the recent example that prompted this post, a customer using our standalone server edition across multiple servers did not have a concept of a machine cluster in their implementation. Instead they performed all administration operations on all servers, relying on one succeeding and the others failing fast. In a recent version we made changes with the aim of improving this area such that each server would wait until it could perform the operation successfully rather than failing. Unfortunately the customer was relying on failing fast rather than waiting and succeeding, so this had a big impact on them and the workaround they had implemented.
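
To make the batching expectation mentioned above concrete, here is a rough sketch of the shape of import the system is designed for. The names are invented for illustration and this is not our actual API; the point is simply that records are accumulated into large batches and sorted on the main query key before each import, rather than being trickled in a few rows at a time.

    from itertools import islice
    from operator import itemgetter

    BATCH_SIZE = 1_000_000  # the "million records or more" guideline described above

    def import_batch(records):
        """Stand-in for the real bulk-import call; name and signature are invented."""
        print(f"importing {len(records)} sorted records")

    def batched_import(records, sort_key="event_time"):
        """Accumulate records into large batches, sorting each before import."""
        records = iter(records)
        while True:
            batch = list(islice(records, BATCH_SIZE))
            if not batch:
                break
            # Sorting on the field most queries filter on lets the engine
            # eliminate irrelevant blocks of data at query time.
            batch.sort(key=itemgetter(sort_key))
            import_batch(batch)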

My former inclination as a tester encountering such workarounds was to adopt the stance of 'workaround denier', being dismissive of any non-standard uses of my software. More recently, thanks partly to circumstance and partly to the very positive attitude of my colleagues, I've grown to appreciate that being aware of and considering known workarounds is actually beneficial to the team. It is far easier to cater for the existence of these edge uses during development than to add support later. In my experience this is one area where the flawed concept of the increasing cost of fixing may actually hold true, given that we are discussing having to support customer workflows that were not considered in the original design, rather than problems with an existing design that might be cheaply rectified late in development.

What are we breaking?

Accepting the presence of workarounds and slightly 'off the map' uses of software raises an interesting question of software philosophy. If we change the software such that it breaks a customer workaround, is this a problem? I don't believe that there is a simple answer. On one hand our only formal commitment is to the software as delivered and documented. From the customer's perspective, however, their successful use of the software includes the workaround, and therefore their expectation is to be able to maintain that status. They have had to implement the workaround to overcome limitations in the existing feature set and could be justifiably annoyed if the product changes to prevent it. Moving away from the position of workaround denial allows us to predict problems that, whilst possibly justifiable, still have the potential to damage relationships once the software reaches the customer.

Some tool vendors, particularly in the open source world, have actively embraced the concept of the user workaround, to the extent of encouraging communities of plug-in developers who can extend the base feature set to meet their own unique demands and those of subsets of users. In some ways this simplifies the problem in that the tested interface is the API that is exposed to develop against.

(While this does help to mitigate the problem of the presence of unknown workarounds, it does result in a commitment to the plug-in API that will cause a much greater level of consternation in the community should it change in the future. An example that impacted me personally was when Microsoft withdrew support for much of the Skype Desktop API. At the time I was using a tool (the fantastic Growl for Windows) to manage my alerts more flexibly than was possible with the native functionality. The tool relied upon the API and therefore no longer works. Skype haven't added the corresponding behaviour into their own product, my user experience has suffered, and I tend to make less use of Skype as a result.)

Discovering the workarounds

The main problem for the software tester when it comes to customer workarounds is knowing that they exist. It is sometimes very surprising what users will put up with in software without saying anything, as long as somehow they can get to where they want to be. The existence of a workaround is the result of someone tackling their own, or somebody else's, problem, and it may be that they don't inform the software company that they are even doing this. Often it takes a problem occurring for the presence of the workaround to be discovered.

  • For the sales agents on the point-of-sale system trying to unlock old proposals, we used to get support calls when they got stuck, having exposed their old proposal data but being unable to edit it or save any changes.
  • The customer of the marketing campaign system reported issues relating to concurrency problems in saving their campaign lists.
  • For the team that edited the environment script, we only discovered the change when they upgraded to a later version and the upgrade clashed with the edits they'd made and threw errors. Again this came in via the support desk.
  • For the team who reduced the size and latency of their imports, we only discovered the problem when they reported that the query performance was getting steadily worse.
  • For the recent customer who was taking a 'fail fast' approach to their multi-server operations, again the problem came in via the support desk, manifesting as a performance issue with their nightly expiry processes.

So the existence of a workaround is often only discovered when things go wrong. In my organisation the primary channel is the technical support team, and in running that team I get excellent visibility of the issues that are being raised by customers and of any workarounds that they have put in place.

For other organisations there may be additional channels through which information on customer workarounds can be gleaned. As well as being a useful general source of tester information on how your software is perceived, public and private forums are also the places where people will share their frustrations and workarounds. I've already mentioned that I used a public forum to discover a solution to my problem with my car stereo. My colleague also discovered the 'random parameter' plug-in for Jenkins on a public forum, and it was public threads that we looked to in order to identify workarounds to the lack of branch support in Jira.

Prevention is Better than Cure

Responding to problems is one thing; anticipating them is better. If we work to gain an understanding of where customers are getting frustrated, then we can anticipate the places where they might try to work around limitations in the software, test those scenarios, and establish how they might get themselves into trouble if they do. Doing this, however, requires an understanding of customer goals and frustrations that sit outside of the defined software behaviour. I believe that the signs are usually there in advance if you look for them in the right places: perhaps a question from an implementation consultant working on a customer site about why a feature was designed in a certain way, an unsatisfied change request on the product backlog, or a post on a user forum. There are some testers in my team who actively help out with support and so gain good visibility of problems. In order to encourage a wider knowledge of customer headaches throughout the test team we have started to run regular feedback sessions where a support agent discusses recent issues and we discuss any underlying limitations that could have led to these.

Of course, ideally we would have no need for workarounds at all; the software should simply work in the way that the users want. Sadly it is rarely possible to satisfy everyone's expectations. The responsibility to prioritise changes that remove the need for specific workarounds is probably not one that falls directly on testers. In light of this, is it something that they need to maintain awareness of? As I've stated, my inclination was to dismiss workarounds as something that should not concern testing; after all, we have enough challenges testing the core functionality. It is tempting to adopt the stance that the testing focus should be solely on the specified features, that we need to limit the scope of what is tested, and that the documented feature set used 'as designed' should be our sole concern. This is a strong argument, but I think it belies the true nature of what testing is there to achieve. Our role is to identify things that could impact quality, or, to use Weinberg's definition, the value the product provides to some stakeholder. From this perspective the customer value in such scenarios is obtained from having the workarounds in place that allow them to achieve their goals.

Rather than dismissing workarounds, I'm coming around to the idea that software testers would better serve the business if we maintained an understanding of those that are currently in place, and raised awareness of any changes that may impact on them. In our latest sprint, for example, we introduced some integrity checking that we knew would conflict with the post-installation configuration script mentioned above, so as part of elaborating that work the team identified this as a risk and put in place an option to disable the check. This is exactly the kind of pragmatism that I think we need to embrace. If we are concerning ourselves with the quality of our product, rather than adherence to documented requirements, it appears to me to be the right thing to do.

Image: https://www.flickr.com/photos/jenlen/14263834826

Monday, 8 September 2014

The FaceBook Effect

I recently celebrated the birth of my 4th child. Whilst my wife was recovering from the birth I enjoyed the opportunity to take my older children to school and to speak to friends and other parents wanting to pass on their congratulations and wishes. One such day I was chatting with a friend of my wife's and the conversation strayed into an area she felt particularly passionate about, and one which struck a chord with me in both a personal and a professional capacity. The friend was telling me that she was so excited to hear news of the birth that she had logged onto Facebook for the first time in months to check my wife's status. She explained that she had previously stopped using Facebook as she felt that it compelled her to present a false image of her life. Whilst I occasionally use Facebook I was inclined to agree with her that there is a lot of pressure on social media to present a very positive image of yourself and your life. Indeed this pressure is such that many people seem more focused on staging personal occasions to take photographs and post details to social media than on actually enjoying the occasion themselves.

But What's this got to do with Testing?

Whatever your position on social media, you're probably wondering why I'm recounting this conversation in a testing post. The reason the conversation struck a chord with me on a professional level is that I think there is a similar pressure in professional communities and social media groups, with those associated with software testing being particularly prone to it for reasons I'll go into.

With the advent of professional social media in the last decade, professionals now enjoy far greater interaction with others in their field than was ever possible before. In general I think that this is a hugely positive development. It allows us to share opinions and discuss ideas and accelerates the distribution of new methods and techniques through the industry. Social media channels such as Twitter, LinkedIn and discussion forums also provide less experienced members of the community with far greater access to experienced individuals with a wealth of knowledge and expertise than was possible when I started testing. More importantly, social media allows us to be vocal, criticise activities which could damage our profession, and find other individuals who share the same concerns. The recent rallying behind James Christie's anti-ISO 29119 talk would simply not have been possible without the social media channels that allowed like-minded individuals to find a collective voice in the resulting online petition. (I don't suggest that you go immediately and sign the petition; I suggest that you read all of the information that you can and make up your own mind. I'd be surprised if you decided not to sign the petition.) Social media has the power to give a collective voice where many individual voices in isolation would not be heard.

On the flipside of these positive aspects, social media carries an associated pressure - let's call it the 'Facebook effect' - where those contributing within a professional community feel the need to present a very positive image of what they are doing. It is easy to view one's own work in a negative light by comparison and engender feelings of professional paranoia as a consequence. This is not something that is specific to testing, and the phenomenon has been highlighted by those writing on other industries, such as this post highlighting problems in the marketing community.

For the many who make their living in or around the social media industry, the pressure to be or at least appear to be an expert, the best, or just a player is reaching a boiling point.

The message is clear. Advancing in the modern professional world is as much about climbing the social media ladder as the corporate one, and in order to do that we need to present the right image.

Living up to the image

Based on the article above, and the references at the end of this post, it is clear that the negative side of social media affects other industries too, so what is it about testing that I think makes us especially conscious of how we present ourselves?

Before putting this post together I approached a number of testers to ask whether they had ever experienced or observed the symptoms of the Facebook effect in their testing careers. I received some very interesting and heartfelt responses.

Most accepted the need to present a sanitised, positive image on social media

You don't want to wash your dirty linen in public

And the need for constant awareness that everything posted publicly was open to scrutiny

I know that anything I say is open to public 'ridicule' or open for challenge

Some went further and admitted that there had been occasions where they felt paranoid or intimidated as a result of interactions with testing-based social media. One tester I spoke to highlighted the problem of opening dialogues requesting help or input from others and being made to feel inferior

when you make an opening to someone in our industry asking their thoughts or opinions and they seem to automatically assume that this means you are a lesser being who has not figured it all out already

I don't think either I or the person who wrote that believes such assumptions are always made, but I've certainly experienced the same thing. A few years ago, as an experienced tester finding my feet in the agile world, I found myself frustrated by the 'you are doing it wrong' responses to questions I posted in agile testing lists. I don't want a sanctimonious lecture when asking questions about my problems; I want some open help that acknowledges that if I'm asking for assistance in one area it doesn't mean that I don't know what I am doing in others.

Why is testing so affected?

I think there are a number of factors that contribute to testing being particularly prone to the 'Facebook effect'.

  • What are we admitting?

    I obviously can't comment on how it is for other professions, but I think for testing the pressure of presenting a positive image is particularly prevalent due to the implications of any negative statements for the perception of our work or organisations. Any admission of fault in our testing is implicitly an admission of the risk of faults in our products, or worse, of risks to our customers' data. Whilst we may be prepared to 'blame the tester' in the light of problems encountered, that doesn't mean we want to do so proactively by openly admitting mistakes. Some testers also have the additional pressure of competitor companies who can pick up on revelations of mistakes to their advantage. As a tester working for a product company with a close competitor told me:

We are watching them on social media as I am assuming they are watching us. So I do need to be guarded to protect the company (in which) I am employed.
  • What are we selling?

    Some of the most active participants in any professional community will be consultancy companies or individuals, and testing is no different. These folks have both a professional obligation not to criticise their clients and a marketing compulsion to present the work that they were involved in as successful, so as to demonstrate value for money to other prospective customers. The result is a tendency towards very positive case studies from our most vocal community members on any engagement, and an avoidance of presenting the more negative elements to protect business interests.

  • Where are we from?

    Testing is a new profession. I know that some folks have been doing it for a long time, but it just doesn't have the heritage of law, medicine or accountancy that provides stability and a structure of consistency across the industry. Whereas this does result in a dynamic and exciting industry in which to work, it also means that IT workers operate in a volatile environment where new methodologies compete for supremacy. Attempts to standardise the industry may appear an attractive response to this, offering a safety net of conformity in a turbulent sea of innovation; however, the so-far flawed attempts to do so are rightly some of the greatest points of contention and result in the most heated debate in the world of testing today. The result is an industry where new ideas are frequent and it can be hard to tell the game-changing innovations from the snake oil. Is it really possible to exhaustively test a system based on a model using MBT? Are ATDD tools a revolutionary link between testing and the business or really clumsy pseudocode resulting in inflexible automation? In such an industry it is naturally hard to know whether you have taken the right approaches, and easy to feel intimidated by others' proclamations of success.

  • Where are we going?

    For some, an online presence is part and parcel of looking for advancement opportunities. LinkedIn is particularly geared towards this end. Therefore presenting only the most successful elements of your work is a prudent approach if you want to land the next big job. Similarly for companies who are recruiting, if you want to attract talented individuals then presenting the image of a successful and competent testing operation is important.

Facing Your Fears

One of the problems that we face individually when interacting with professional social media is that the same 'rose-tinted' filtering that is applied to the information we read about other testers and their organisations is not applied to our own working lives. We see our own jobs 'warts and all' which, during the leaner times, can lead to professional paranoia. This is certainly something that I have experienced in the past when things have not been going as well as I would like in my own work. I found that this was more of a problem earlier in my career and has lessened as I have gained experience that provides perspective on my work in relation to others. The Facebook effect does still rear its head during periods of sustained pressure when I have little chance to work on testing process improvements, as I have experienced this year.

The manner in which we deal with these emotions will inevitably depend on the individual. I think that, whilst it is easy to fall into a pattern of negativity, there are responses that show a positive attitude and that can help to avoid the negative feelings that can otherwise haunt us.

  • look to the experienced

    Ironically it seems to be the most experienced members of a profession that are most willing to admit mistakes. This could be because many of those mistakes were made earlier in their careers and can be freely discussed now. It could be that having a catalogue of successful projects under your belt furnishes folks with the confidence to be more open about their less successful ones. It could also be that the more experienced folks appreciate the value of discussing mistakes to help a profession to grow and build confidence in its younger members. These are all things that I can relate to and as my experience grows I find an increasing number of previously held opinions and former decisions that I can now refer to personally, and share with others, as examples of my mistakes.

  • get face to face

    I wrote a while ago about an exchange that I did with another company to discuss our relative testing approaches. I have since repeated this exercise with other companies and have another two exchange visits planned for later this year. The exchanges are done on the basis of mutual respect and confidentiality, and therefore provide an excellent opportunity to be open about the issues that we face. There is an element of security about being face to face, particularly within the safe environment of the workplace, which allows for open conversations even with visitors that you have known for only a short time.

  • consultancy

    I don't rely extensively on external consultancy; however, I have found it useful to engage the services of some excellent individuals to help with particular elements of my testing and training. In addition to the very useful 'scheduled' elements of the engagement, almost as useful is having an expert with a range of experiences available to talk to in a private environment. As I mention above, consultants should maintain appropriate confidentiality, and they will also have a wealth of experience of different organisations to call on when discussing your own situation. Having had the benefit of a 'behind closed doors' perspective on other companies, they can provide a far more balanced view of relative strengths and can put your own problems into a more realistic context as a result. There are few more encouraging occasions for a test leader in an organisation than being told that your work stands up well against other organisations (even if they aren't at liberty to tell you who these are).

  • closed groups

    I was fortunate to be involved in a project recently that included membership of a closed email list. I found this to be a liberating experience. The group discussed many issues that affect testers and test managers openly and without fear of our words being misinterpreted by our organisations or others in the community. There were disagreements on a number of the subjects, and I personally found it much easier to discuss contentious issues with reference to my own position in the closed group environment. The problem with discussing internal issues in an open forum is obviously the risk that your candid talk is seen by the wrong eyes; a closed group avoids this problem and allows for open and candid discussion with sympathetic peers. In fact I obtained some really interesting input from exactly that group prior to writing this post.

  • trusted connections

    I am lucky to have some fantastic testers on my private email address list who I can turn to in times of uncertainty or simply to bounce ideas off before putting them into the public domain. For example I recently had some questions around Web testing. This is not something that I've looked at for some time, having been focussed on big data systems. I received some invaluable guidance from a couple of people in my contacts list without any stigma around my 'novice' questions, as the individuals I spoke to know me and respect the testing knowledge that I have in other areas. Their advice allowed me to provide an educated view back to my business and make better decisions on our approach as a result. As with the closed group, I approached a number of personal contacts for their experiences and opinions to contribute to writing this post.

Don't worry, be happy

When my brother left his last job in water treatment engineering his colleagues gave him one piece of parting advice.

Lose the nagging self doubt - you are great at your job

So it could well be that I suffer from some familial predilection for self-criticism. You may not suffer from the same. Whether you do or not, I think that when using social media as a community we should maintain awareness of how others will feel when reading our input. We should try to remember that just because others ask for help doesn't mean that they don't know what they are doing. We should consider admitting failures as well as successes, so others can learn from our mistakes and take confidence and solace in making their own.

If interacting with social media personally leaves you with a taste of professional paranoia, I recommend reading this excellent short post from Seth Godin, and reminding yourself that the simple fact that you are looking outside your work to the wider professional community to improve your testing probably means you're doing a fine job.

Other links

Image: Sourced from twitter @michaelmurphy https://twitter.com/michaelmurphy/status/492648065619492864

Monday, 14 July 2014

A Map for Testability

Here Be Dragons

Chris Simms (@kinofrost) asked me on Twitter last week whether I'd ever written anything about raising awareness of testability using a mind map. Apparently Chris had a vague recollection of me mentioning this. It is certainly something that I have done; however, I couldn't remember where I had discussed it. I've not posted about it before, which is surprising as it is a good example of using a technique in context to address a testing challenge. As I mentioned in my Conference List, I have a testability target for this year. It therefore feels like an opportune moment to write about the idea of raising awareness of testability and an approach to this that I found effective.

Promoting Testability

As I wrote in my post Putting your Testability Socks On, there is a wealth of benefits to building testability into your software. Given this, it is somewhat surprising that many folks working in software don't consider the idea of testability. In environments where this is the case it is a frustrating task getting testability changes incorporated into the product, as these are inevitably perceived as lower priority than more marketable features. As Michael Bolton stated in his recent post, testers should be able to ask for testability in the products they are testing. The challenge comes in promoting the need for testability, particularly in products where it has not been considered during early development. This is a responsibility which will, in all likelihood, fall on the tester.

A great way that I found, almost by accident, to introduce the idea of testability in my company was to run a group session for the whole department on the subject. I say by accident as I'd initially prepared the talk for a UKTMF quarterly meeting, and took the opportunity to run a session on the subject internally at a company off-site meeting by way of a rehearsal for that talk. The internal presentation was well received. It prompted some excellent discussions and really helped to introduce awareness of the concept of software testability across the development team.

The Way Through the Woods

Even with a good understanding of testability in the organisation it is not always plain sailing. As I mentioned in my previous post, developments that proceed without the involvement of testers are most at risk of lacking the core qualities of testability. It is hard to know how to tackle situations, such as the one I was facing, where a lack of testability qualities is actually presenting risks to the software. The job title says 'software tester', so as long as we have software we can test, right?

On that occasion I took a somewhat unconventional approach to raise my concerns with the management team and present the problems faced in attempting to test the software. I created a mind map. Anyone who has read To Mind Map or not to Mind Map will know that I don't tend to use mind maps to present information to others. In this case I generated the map for personal use to break down a complex problem, and the result turned out to be an appropriate format for demonstrating to others the areas of the system that were at risk due to testability problems.

The top-level structure of the map was oriented around the various interfaces or modes of operation of the software features. This orientation was a critical element in the map's effectiveness as it naturally focussed the map around the different types of testability problem that we were experiencing. The top-level groupings included command line tools, background service processes, installation/static configuration, and dynamic management operations such as adding or removing servers.

  • The installation/static configuration areas suffered from controllability problems, as they were difficult to automate and harness
  • The asynchronous processes suffered from a lack of controllability and of visibility of which operations were running at any given time
  • The dynamic management operations lacked simplicity and stability due to inconsistent workflows depending on the configuration.

One of the key benefits of mind maps, as I presented in my previous post on the subject, is to allow you to break down complexity. After creating the map I personally had a much clearer understanding of the specific issues that affected our ability to test. Armed with this knowledge I was in a much better position to explain my concerns to the product owners, so the original purpose of the map had been served.

Presenting the Right Image

As I said in my previous post on mind maps, I don't tend to use them to present information to others, but if they are to be used for this purpose then they need to be developed with that in mind. In this case I felt that the map provided a useful means of developing a common understanding between the interested parties, and so I tailored my personal map into a format suitable for sharing. I used two distinct sets of the standard Xmind icons: one to represent the current state of each feature group in terms of existing test knowledge and harnessing, and the second to represent the testability status of that area.
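
As a rough illustration of the shape the shared map took, the structure below uses the feature groups described above, tagged along the two dimensions the icon sets represented. The particular status values are invented stand-ins for the Xmind icons, not a record of the actual map.

    # Illustrative only: feature groups from the map above, each tagged with the
    # two dimensions the icon sets represented. The status values are invented.
    testability_map = {
        "command line tools": {
            "test_coverage": "harnessed",
            "testability": "ok",
        },
        "background service processes": {
            "test_coverage": "manual only",
            "testability": "poor controllability and visibility",  # which operations run when?
        },
        "installation / static configuration": {
            "test_coverage": "not automated",
            "testability": "poor controllability",  # difficult to automate and harness
        },
        "dynamic management operations": {
            "test_coverage": "manual only",
            "testability": "unstable workflows",  # inconsistent across configurations
        },
    }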

[Image: Mind map key]

The iconography in the map provided a really clear representation of the problem areas.

[Image: Mind map]

Driving the conversation around the map helped to prompt some difficult decisions around where to prioritise both testing and coding efforts. I won't claim that all of the testability problems were resolved as a result. What I did achieve was to provide clear information as to the status of the product and the limitations that were imposed on the information we could obtain from our testing efforts as a result.

Highlighting the testability limitations of a system in such a way opens up the possibility of getting work scheduled to address these shortfalls. It is difficult to prioritise testability work without an understanding amongst the decision makers of the impact of these limitations on testing and development in general.

In an agile context such as mine, legacy testability issues can be added to the backlog as user stories. These may not get to the top of the priority list, but until they do there will at least be an appreciation that the testing of a product or feature will be limited in comparison to other areas. What's more, it is far more effective to reference explicit backlog items, rather than looser desirable characteristics, when trying to get testability work prioritised.

Flexibility

Hopefully this post has prompted some ideas on how to raise awareness of testability, both proactively and in light of problems that inhibit your testing. As well as this, I think that the key lesson here is about coming up with the most appropriate way to present information to the business. In this case, for me, a mind map worked well. In all likelihood a system diagram would have been just as effective. Some of the customers that I work with in Japan use infographic-style diagrams to great effect to represent the location of problems within a system in a format which works across language boundaries - something similar could also have been very effective here.

Testing is all about presenting information and raising awareness. The scenarios that we face and the nature of the information that we need to convey will change, and it pays to have a range of options at your disposal to present information in a manner that you feel will best get it across. There's absolutely no reason why we should restrict these skills to representing issues that affect the business or end user. We should equally be using our techniques to represent issues that affect us as testers, and testability is one area where there should be no need to suffer in silence.

References

Both my previous post Putting your Testability Socks On and Michael Bolton's recent post on asking for testability contain good starting references for further research on testability.
