Wednesday 4 November 2015

The Living Test Strategy

What is a test strategy? It’s not necessarily as easy a question to answer as it may have been a few years ago. In previous testing roles I used to be able to explain clearly what a test strategy was - it was a document. The test strategy was the document that I created at the outset of a long development project with the purpose of defining and agreeing the approach that the testers on the project would take in testing each requirement and area of the software under test. I knew exactly how the document was structured, as it looked almost exactly the same each time I produced it. Each time we would undergo the ritual dance of review, amendment, re-publication and eventual sign off with development, project and/or product managers, enough to convince all concerned that we were taking the testing seriously.

The value of the strategy from a testing perspective was limited. The majority of its content would be boilerplate material that was transferable, not only from one project to the next but possibly also from one product or company to the next, without significant modification. Without the strategy document the testing performed would have been much the same. Often the testing work was carried out not because the test strategy defined that we should do it, but because as testers we felt that it was a good idea.

I have many examples of where the approach I’ve taken deviated from the defined strategy; a couple of the better ones are:

  • On one data analytics engine product the developers created an in-house database comparison tool which I adopted and used to compare data sets through various load/append/update processes to validate the integrity of our indexed fields - this was never defined in the test strategy but proved invaluable in exposing issues in the data maintenance processes (a rough sketch of this kind of comparison follows the list below)
  • On a primarily test-script-driven implementation project we adopted a phase of initial exploration to assess the stability of each new release before progressing into the scripted test execution stages, in order to save time working through test scripts against sub-par releases from the supplier
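To give a flavour of the first example, here is a minimal sketch of the kind of before/after comparison such a tool can perform. It is not the in-house tool itself; the field names and data are hypothetical, and it simply diffs two snapshots of a data set keyed on an indexed field to flag rows that were lost, unexpectedly added, or changed by a load/append/update run:

    # Illustrative sketch only - not the original in-house tool. Field names
    # and data here are hypothetical. The idea: snapshot a data set keyed on
    # an indexed field before and after a load/append/update run, then diff
    # the snapshots to flag lost, unexpectedly added, or changed rows.

    def snapshot(rows, key_field):
        """Index a list of row dicts by the value of the indexed key field."""
        return {row[key_field]: row for row in rows}

    def compare_snapshots(before, after):
        """Return (missing, added, changed) key lists between two snapshots."""
        missing = sorted(set(before) - set(after))
        added = sorted(set(after) - set(before))
        changed = sorted(k for k in before.keys() & after.keys()
                         if before[k] != after[k])
        return missing, added, changed

    if __name__ == "__main__":
        pre = snapshot([
            {"id": 1, "name": "alpha", "value": 10},
            {"id": 2, "name": "beta", "value": 20},
        ], key_field="id")
        post = snapshot([
            {"id": 1, "name": "alpha", "value": 10},
            {"id": 2, "name": "beta", "value": 25},   # updated by the process
            {"id": 3, "name": "gamma", "value": 30},  # appended by the process
        ], key_field="id")
        missing, added, changed = compare_snapshots(pre, post)
        print("missing keys:", missing)   # -> []
        print("added keys:  ", added)     # -> [3]
        print("changed keys:", changed)   # -> [2]

The real tool compared database tables rather than in-memory rows, but the principle is the same: unexpected missing or changed keys after a data maintenance run point to an integrity problem worth investigating.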

The important point in these examples was that, whilst a strategy was defined and agreed at the start of the project, decisions on the approach were made later to increase the value of testing. These decisions were made as a result of the discoveries that emerged as the testing progressed. Whilst overall the approach taken would tie in with the approach defined in the strategy document, the actual approach had less to do with the strategy and more to do with the experiences of myself and my colleagues.

In my recent work I have moved away from producing a test strategy document. Aside from the limited value I found in those I’d created previously, as I’ve moved more explicitly to an approach based around exploration and establishing relevant tests from needs arising throughout agile sprints, the creation of a test strategy document in advance smacks of futility. This creates something of an information vacuum when it comes to discussing the testing approach, and one that some might feel uncomfortable with. If we dispense with a test strategy document, where does our strategy reside? How do we define what we do and how we test? In order to explore those questions further I’m going to look at what I see as the two different uses of the test strategy and how these might be addressed in an agile organisation built on foundations of exploration: defining the strategy for the team, and explaining the strategy to others.

Picking the Right Team

Alex Ferguson, Arsene Wenger, Brian Clough, Steve Hansen, John Madden, Vince Lombardi - these names might be more familiar to some than others, but they are all highly successful sports managers. In achieving the huge successes that each one has, I doubt very much that any of these, or any other top sports manager, has sat his team down with a 40-page Word document and gone through a multi-phase review process until the entire team, coaching staff and board of directors were happy for the game to start. Certainly a strategy is communicated and shared prior to each game through team training and team discussions. The strategy of the successful manager starts way before that though - the strategy is encapsulated in the team itself. The starting point of the strategy lies in the players that are in those training sessions and team talks in the first place. The top managers hire and construct their teams based on the skills of the individual players involved and the need to fill the squad with the breadth and depth of skills to be successful. A manager who wants to play fast, one-touch attacking football will hire differently to one who wants to stifle play and soak up pressure.

I see hiring in testing as similarly an element of the strategy (I refer here specifically to testing, however this is just as valid for development overall). The structure of the team plays a massive part in the strategy. There is no point, for example, in hiring testers who relish the predictable execution of scripted tests if you want to promote an exploratory culture. Similarly, if you want a high level of predictability and rigorous documentation, then filling your team with headstrong and expressive testers with a disdain for filling in documents is going to be counter-productive. I’ve been fortunate to avoid too many attempts to enforce ‘top down’ changes in approach onto teams that were hired into a very different culture; when I have seen it done I’ve seen high levels of friction and resistance - the team was simply not the right one for the new strategy.

In 2013 I guest wrote a piece for Rob Lambert’s blog on ‘T-shaped testers and square shaped teams’. One thing that was implied in that piece, but perhaps not made explicit, was that the creation of the ‘square shaped team’ is a test-strategy-defining activity. For me, testing strategy starts with the testing skills of the individuals that we hire into our teams. As a result, boilerplate, factory specifications have as little place in my hiring process as they do elsewhere in our development processes. Just as each hire into a sports team is made based on the abilities of the existing players and the areas where skills shortages exist, so each hire into my testing team is based on complementing and reinforcing the skills already present, to create a team capable of delivering the approach that our strategy targets.

Every Player has a Role

Getting the right skills and attitudes in the team is a critical starting point in delivering strategy. Nevertheless, as many star-filled sports teams have demonstrated, having great players does not necessarily guarantee success. Just as important an element in a successful strategy is that the roles each member of the team plays are appropriate for the team to cover all of its responsibilities. If you put your star attacking players in defensive roles, or fail to ensure that each player knows their responsibilities when defending, then the result will be a lot of confusion.

A second critical element of a team strategy is therefore ensuring that the responsibilities of the individuals and teams are understood. In my last post I looked at the nature of exploration and the multiple levels at which the approach that a tester or team is taking can be defined using charter-based language. As an experiment in a group session I recently asked my team to define what they saw as their charters at a general level within the organisation and the teams they worked in. What was clear from the exercise was that, whilst we’ve not historically defined individual responsibilities explicitly, every member of the team had a strong understanding of the expectations of their role and the part they played in the overall testing activity. Individuals also naturally tended to define their charters at different levels depending on their own level of experience and responsibility, with the less experienced members of the team encapsulating their work at a lower, more detailed level than those with more experience or responsibility, who required a higher level appreciation of the testing strategy.

One clear consensus coming from the team was that providing more explicit role definitions from management would be counter-productive, as new needs were constantly arising and the team’s approach was shifting to incorporate them. Individuals felt comfortable adjusting, sometimes through self-organising and sometimes a little more directed, but always able to shift their individual focus to incorporate new activities and responsibilities into the overall remit of the testing group. As I discussed in my last post, this ability to change and redefine approach at different levels is a characteristic of a successful exploratory approach and a key component of our testing strategy.

Explaining Strategy

So, from a team and test management viewpoint I believe that a testing strategy is encapsulated in the individuals that we have in the team and the responsibilities and charters that they fulfil. Having a strategy that is only known to the team, however, is sometimes not sufficient. Sometimes it is necessary to define testing strategy to others outside the group, and one argument for a test strategy document is that it helps to get the testing approach agreed and ‘signed off’ by other interested parties. I’d argue here that there are other, more effective means by which this aspect of test strategy can be achieved. Yes, there is a need to communicate our testing strategy outside of the testing group, however do we really need to predefine all of the testing activities to a detailed level in order to achieve this? In my experience the individuals involved in the test strategy review process found it a tiresome one, as they did not necessarily have the knowledge of testing approaches, techniques or tools to assess whether the approach was the most appropriate, or even valid. The result was therefore an inclination to refer to industry best practices and stock approaches as a means to fill the void of understanding and reduce the risk of personal culpability. “Are we adopting industry standard best practices here?” is a question that anyone with little or no understanding of a subject can rely on to provide input into a strategy review, neatly placing the responsibility for the approach on the ‘industry standards’ and the onus onto the testing team to satisfy the implications.

I find personally that development managers and product owners would prefer not to have responsibility for understanding the finer details. What most would prefer is an overview of the testing approach at a more abstract level, leaving the details of execution to those whose job it is to understand them. To this end I’ve found that a well placed presentation summarising the testing approach for those outside the team achieves a quicker, clearer understanding of the testing strategy than reading through pages of fine detail on how each requirement is to be tested.

A shaky defense

A final reason for presenting the entire testing strategy to management in advance of testing is a more cynical one. Sometimes this is done as an attempt to protect against a ‘blaming the tester’ scenario. Some may labour under the mistaken belief that getting a test strategy signed off in advance affords some protection from subsequent blame if problems are discovered, on the basis that ‘management signed it off’. This is a false premise, though, for exactly the same reasons as above: we cannot expect other parties to have the same level of insight into the appropriate testing approach as the person creating the strategy, and therefore attempting to lay culpability at the feet of others, should that strategy prove to be flawed, will have limited success.

I’d personally rather take responsibility for the strategy details, through the structuring of a skilled team and maintaining flexibility of strategic choice throughout the process, than be restricted to a specific approach on the basis of diminishing the blame later.

Image: https://www.flickr.com/photos/laurencehorton/7289066310/

Ann MacKenzie said...

A very interesting and productive approach to test planning and execution - based on the experience of the testers that you have at hand, summarize their efforts to get a buy-in, rather than using an extended and detailed plan that probably won't get used and leave the execution details to those on the ground....

Anonymous said...

As someone who is currently writing a test strategy for automation, this is a very useful post for me.
I tend to agree with your post; however, unfortunately not every organisation is flexible enough to allow for the test strategy to be devised based on the skill set available in house (probably lack of understanding, staff, or just plain ignorance...), which will force the test strategy back to the shallow, superficial document we have seen before.
Tailoring a strategy to the players as you described, with already existing expectations from higher up in the food chain, is not a task for the faint hearted.

I am looking forward to your talk tomorrow at South West Test.

