Friday, 17 September 2010

A Set of Principles for Automated Testing

Introducing new members to the team can serve as a focus for helping the existing members clarify their approach to the job. One of the things I developed to work through with new members is a presentation on the automation process at RainStor, and the principles behind our approach. This post explores those principles in detail and the reasoning behind them. Although they have grown very specifically out of our context (we write our own test harnesses), I think there are generally applicable elements here that merit sharing.

Separate test harness from test data


The software that drives the tests, and the data and metadata that define the tests themselves, are separate entities and should be maintained separately. In this way the harness can be maintained centrally, updated to reflect changes in the system under test, and even rewritten, without having to sacrifice or risk the tests themselves.

Users should not need coding knowledge to add tests


Maintenance of test data/metadata should be achievable by testers with knowledge of the system under test, not necessarily knowledge of the harness technology.
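
As a minimal sketch of the idea, here is a hypothetical INI-style test definition together with the harness code that might load it. The format and field names (purpose, type, input, expected) are my illustration for this post, not our actual harness format:

```python
# A hypothetical declarative test definition; a tester can write one of these
# with knowledge of the product alone, no harness coding required.
from configparser import ConfigParser
from io import StringIO

EXAMPLE_TEST = """\
[test]
name     = import_rejects_bad_dates
purpose  = Verify that rows with malformed dates are rejected on import
type     = import
input    = data/bad_dates.csv
expected = results/bad_dates.expected
"""

def load_test(source):
    """Parse one declarative test definition into a plain dict for the harness."""
    parser = ConfigParser()
    parser.read_file(source)
    return dict(parser["test"])

test = load_test(StringIO(EXAMPLE_TEST))
print(test["name"], "-", test["purpose"])
```

Note the purpose field: carrying the intent of a test in its own metadata is also what makes the self-documentation principle below workable.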

Tests and harnesses should be portable across platforms


Being able to use the same test packs across all of our supported platforms gives us an instant automated acceptance suite to help drive platform ports, and then continues to provide an excellent confidence regression set for all supported platforms.
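
One small way a harness can support this, sketched here under the assumption of a per-platform directory of expected results with a common fallback (the layout is my invention for illustration):

```python
# A sketch of platform-portable result lookup: prefer an expected-results file
# specific to this platform, fall back to the common one.
import sys
from pathlib import Path

def expected_file(test_dir: Path, name: str) -> Path:
    """e.g. tests/t001/linux/out.expected if present, else tests/t001/out.expected"""
    platform_specific = test_dir / sys.platform / name
    return platform_specific if platform_specific.exists() else test_dir / name
```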

Tests are self documenting


Attempting to maintain two distinct data sources in conjunction with each other is inherently difficult. Automated tests should not need to be supported by any documentation other than the metadata for the tests themselves, and should act as executable specifications describing the behaviour of the system. Test metadata should be sufficient to explain the purpose and intention of the test, so that this purpose can be preserved if the test itself ever needs to be changed.

Test harnesses are developed as software


The test harnesses are themselves a software product, one that serves the team, and developments to them should be tested and implemented as such.

Tests should be maintainable


Test harnesses should be designed to be easily extensible and maintainable. At RainStor, harnesses consist of a few central driving script/code modules and then individual modules for specific test types. We can add new test types to the system by dropping script modules with simple, common inputs/outputs into the harness structure, as sketched below.
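
A sketch of what that drop-in structure might look like, assuming a common run(definition) contract for every test type (the names here are hypothetical, not the actual harness API):

```python
# Each test type is a module exposing the same contract:
# run(definition) -> (status, detail). The central driver only dispatches.
TEST_TYPES = {}

def test_type(name):
    """Register a test-type module under the name used in test definitions."""
    def register(func):
        TEST_TYPES[name] = func
        return func
    return register

@test_type("import")
def run_import(defn):
    # ...drive an import of defn["input"], compare output with defn["expected"]...
    return ("PASS", "")

def run_test(defn):
    """Central driver: adding a new test type needs no change here."""
    return TEST_TYPES[defn["type"]](defn)
```

Because the contract is common, the central driver never changes when a type is added, and changes to product interfaces are absorbed inside the type modules, which leads directly to the next principle.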

Tests should be resilient to changes in product functionality


We can update the central harness in response to changes in product interfaces without needing to amend the data content of thousands of individual tests.

Tests allow for expected failures with bug numbers


This can be seen as a slightly contentious approach, and it is not without risk; however, I believe the approach is sound. I view automated tests as indicators of change in the application. Their purpose is to indicate that a change has occurred in an area of functionality since the last time that function underwent rigorous assessment. Rather than having a binary PASS/FAIL status, we support the option of a result which may not be what we want but is what we expect, flagged with the related bug number. We can then still detect potentially more serious changes to that functionality. The test thus keeps its purpose as a change indicator, without our having to re-investigate every time it runs, or having to turn it off as a failing test.
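
In harness terms this amounts to a three-way outcome rather than a boolean, along these lines (the known-failures map and status names are my illustration):

```python
# A sketch of the three-way outcome. A known failure is flagged with its bug;
# an unexpected pass on a known-failing test is itself a change worth seeing.
KNOWN_FAILURES = {"import_rejects_bad_dates": "BUG-1234"}  # hypothetical map

def classify(name, passed):
    bug = KNOWN_FAILURES.get(name)
    if passed:
        return f"UNEXPECTED PASS ({bug})" if bug else "PASS"
    return f"EXPECTED FAIL ({bug})" if bug else "FAIL"
```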

Tests may be timed or have max memory limits applied


As well as data results, the harnesses support recording and testing against limits on the time and system memory used in running a test. This helps to drive performance requirements and to identify changes in memory usage over time.
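
A rough sketch of the mechanics, assuming a Unix-like system (Python's resource module is Unix-only, and a real harness would measure the product process itself rather than the harness's children):

```python
# Time a test and check elapsed time and peak child-process memory against
# limits recorded with the test. Note ru_maxrss is kilobytes on Linux but
# bytes on macOS, so a real harness would normalise the units.
import resource
import time

def run_with_limits(run, max_seconds, max_kb):
    start = time.monotonic()
    status, detail = run()
    elapsed = time.monotonic() - start
    peak_kb = resource.getrusage(resource.RUSAGE_CHILDREN).ru_maxrss
    if elapsed > max_seconds:
        status, detail = "FAIL", detail + f" time {elapsed:.1f}s over limit {max_seconds}s;"
    if peak_kb > max_kb:
        status, detail = "FAIL", detail + f" memory {peak_kb}KB over limit {max_kb}KB;"
    return status, detail, elapsed, peak_kb
```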

Tests and results stored in source control


The tests are an executable specification for the product. The specification changes with versions of the product, so the tests should be versioned and branched along with the code base. This allows tests to be designed for new functionality and performance expectations updated whilst maintaining branches of the tests relevant to existing release versions of the product.

Test results stored in RainStor


Storing results of automated test runs is a great idea. Automated tests can and should be used to gather far more information than simple pass/fail counts (see my further explanation on this here). Storing test results, timings and performance details in a database provides an excellent source of information for:
* Reporting performance improvements/degradations
* Identifying patterns/changes in behaviour
* Identifying volatile tests

As we create a data archiving product, storing the results in our own product and using it for this analysis provides the added benefit of "eating our own dog food". In my team we have the longest-running implementation of our software anywhere.
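
As an illustration of the kind of store and queries this enables, here is a sketch using sqlite3 purely as a stand-in for the archive itself; the schema is invented for this post:

```python
import sqlite3

conn = sqlite3.connect("test_results.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS results (
        run_id    TEXT,
        test_name TEXT,
        status    TEXT,     -- PASS / FAIL / EXPECTED FAIL (bug)
        elapsed_s REAL,
        peak_kb   INTEGER,
        run_date  TEXT
    )
""")
conn.execute(
    "INSERT INTO results VALUES (?, ?, ?, ?, ?, date('now'))",
    ("nightly-42", "import_rejects_bad_dates", "PASS", 3.2, 51200),
)
conn.commit()

# Volatile tests show up as names whose status flips between runs:
for name, statuses in conn.execute("""
        SELECT test_name, COUNT(DISTINCT status) AS statuses
        FROM results GROUP BY test_name HAVING statuses > 1
    """):
    print(name, statuses)
```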

These principles have evolved over time, and will continue to do so as we review and improve. In their current form they've been helping to drive our successful automation implementation for the last three years.

Copyright (c) Adam Knight 2009-2010

Monday, 13 September 2010

Professional Washer-Upper

A recent discussion on the Yahoo Agile Testing discussion group covered the subject of whether a separate testing role was still required in development teams practising TDD/BDD. Here I aim to use an example from a very different field to examine the benefits of both generalising specialists and having individuals devoted to even basic roles.

Professional Washer-upper



When I was at high school I had a weekend job washing dishes at a busy local restaurant. The job involved a number of responsibilities:

  • Operating the dishwashers for the crockery

  • Keeping the kitchen stocked with crockery to ensure orders could go out

  • Manually supporting the chefs in washing pans and cookware to meet demand

  • Operating the glass washer and keeping the bar stocked with glasses to meet demand


I could also, when required, step in for others to help with their tasks:
  • serve behind the bar (barman)

  • serve food (server)

  • clear tables (server)

  • make desserts (server)

  • cook starters (sous chef)

Similarly, other members could step in and cover each other's jobs when required, e.g. servers worked the bar early in the evening in the rush before most people were seated. On midweek nights the restaurant was quieter and my tasks were shared among the servers and chefs. On very busy nights (e.g. New Year) we drafted people in to help with my tasks so that I could take on some of the workloads of others. I had a number of relevant skills and could operate in a number of roles, yet if you had asked anyone in the restaurant (including me), my job was Washer-upper, as this was my primary role and provided sufficient work to merit a devoted individual.

The restaurant could have adopted the approach of not having a washer-upper. The work would still have needed doing, but could have been fulfilled by other members of the team (e.g. every server washing all trays he/she cleared, all chefs washing their own pans). I was, however, very good at washing up. I knew what needed to be done to meet the needs of the rest of the team, and how often. I knew the environment and had optimised my approach within it to the extent that it took three servers to cover for me when I was called off to other jobs. Given that someone was constantly required to be washing up, it made sense to have an individual devoted to that job who was better at it than the other team members.

The multi-skilled team



I think this example is a great case of a multi-skilled team of what Scott W Ambler calls Generalising Specialists, or what Jurgen Appelo calls T-shaped people. For low-workload situations the number of individuals is reduced and the coverage of roles is distributed across them. For more intensive workloads the benefits of having generalising specialists become apparent. Each individual has a key area of responsibility, yet has the knowledge to step in and cover other roles as the pressures and bottlenecks ebb and flow through the course of an iteration (or, in the restaurant, a sitting).

The benefits of devoted attention



Just as the washer-upper's position covered many distinct aspects, the banner of Software Tester, for the purposes of discussions such as the recent one on the Yahoo Agile Testing group, can be viewed as a matrix of roles and responsibilities (one which I feel is growing, not shrinking, but that's another topic). Some teams will operate by sharing these roles and responsibilities across the team without individuals assigned to the testing position, and will be successful. The testing roles, however, will still be present and will still need to be filled.

The question posted recently was whether TDD or ATDD/BDD will render the traditional testing role redundant. I don't think so. If a job as simple as the washer-upper's can demonstrate the benefits of having skilled individuals concentrating on maximising effectiveness in an area of responsibility, then this benefit is only going to be amplified as the difficulty and complexity of the role increases. Having individuals with specific testing expertise whose primary focus is this subject area has certainly paid dividends in my organisation, where the effectiveness and scale of the testing performed (and consequently the knowledge and confidence gained) are far greater now than when we relied far more heavily on developer-led testing.

As to whether it is sufficient for a tester to have only testing skills and responsibilities, that is another question for another post.

Copyright (c) Adam Knight 2009-2010
