Friday, 21 May 2010

The Kitchen Sink - why not all tests need automating

I've recently been working on testing a Windows platform version of our server system. A major part of that work was porting the test harnesses to run in a Windows environment. I'd completed the majority of this work, and with most of the tests running, checked and passing, I began to tackle the few remaining tests that were not simple to resolve. After the best part of a day struggling to get very few tests working, I decided to take a step back and review what the remaining tests were actually trying to do. I very quickly decided that the best approach was not to get the tests working after all, but rather to remove them from the test suite.

For example, the first one that I looked at tested an error scenario in which a system function was called after permission on its input file had been revoked. Although a valid test, the likelihood of this issue occurring in a live environment was slim and the potential risk to the system low. The test itself, on the other hand, relied on bespoke scripts with a high maintenance cost when porting and a high risk of failing for reasons of its own.

I contacted the tester who created the test and put it to him that this type of test was possibly more suited to initial exploratory assessment of the functionality involved rather than full automation and repeated execution. He accepted this and we agreed to remove the test.

I took this as an opportunity to review with the team which tests needed adding to the regression packs, and when. Some of the key points that should be considered:


  • Once a test has passed, what is the risk of a regression occurring in that area?

  • How much time/effort is involved in developing the test in the first place compared to the benefit of having it repeatable?

  • Is the test more likely to fail through errors of its own than it is to pick up a regression?

  • Will the test prove difficult to port/maintain across all of your test environments?



Just because we can automate a test doesn't mean that we always should. Aim to perform a cost/benefit analysis of having the test in your automation arsenal versus the cost of running and maintaining it. It may become apparent that the value of the test is less than the effort it takes to develop, execute and maintain. In that situation the best course of action may be to execute it manually as an exploratory test in the initial assessment phase, and focus our automation efforts on those tests that give us a bit more bang for our buck.

Copyright (c) Adam Knight 2009-2010

Thursday, 13 May 2010

Why automated testing is more than just checking

For my first post in a while (after a period of intensive DIY - house is looking great, blog is a bit dusty!) I thought I'd expand on a point I have made a few times recently in various forums. In discussion further to my post Testing the Patient I commented that I would not start referring to automated tests as checks, because my tests gather more information than is required to execute the checks. Here I expand on this to provide some examples of information that can be usefully gathered by automated tests over and above the information needed to feed checking mechanisms.

Test results


This sounds a bit obvious, but if a check is being performed against a file or set of information then it makes diagnosing issues much easier if the actual data being checked is available after the fact, and not just the result of the check. Knowing that the result of a test was 999 provides far more information than knowing it failed a check that the result should not exceed 64.

Also, for file/data driven tests such as many of those used at RainStor, we apply conversions to remove inconsistent or random information from test results prior to comparison against expected results. Storing the original result as well as the version modified for comparison allows for valuable de-risking investigation, ensuring that the process of modifying results is not masking issues.
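As a rough sketch of the idea in Python (the file names and normalisation rules here are illustrative assumptions, not our actual harness code), keeping the raw output alongside the normalised copy used for comparison might look like this:

```python
import re
import shutil
from pathlib import Path

def normalise(text: str) -> str:
    """Strip values that legitimately vary between runs (illustrative rules only)."""
    text = re.sub(r"\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}", "<TIMESTAMP>", text)
    text = re.sub(r"session id: \w+", "session id: <SESSION>", text)
    return text

def store_and_compare(raw_output: Path, expected: Path, results_dir: Path) -> bool:
    """Archive the unmodified output as well as the normalised copy we compare."""
    results_dir.mkdir(parents=True, exist_ok=True)
    shutil.copy(raw_output, results_dir / "actual_raw.txt")  # original, kept for later investigation
    normalised = normalise(raw_output.read_text())
    (results_dir / "actual_normalised.txt").write_text(normalised)
    return normalised == normalise(expected.read_text())
```

The point of the sketch is simply that both files end up in the stored results, so the conversion step itself can be reviewed if a failure looks suspicious.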


Application Logs


If the application has a logging mechanism, it is useful to copy the logs associated with a test run and store them with the results of the run. These can be an excellent diagnostic tool to assist the tester in investigating issues that may have arisen in the tests, even after the test environment has been torn down.
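As a minimal illustration (the directory layout is an assumption for the example), archiving the application's logs next to the test's results can be as simple as:

```python
import shutil
from pathlib import Path

def archive_app_logs(app_log_dir: Path, results_dir: Path, test_name: str) -> None:
    """Copy the application's logs into the stored results so they survive environment teardown."""
    target = results_dir / test_name / "app_logs"
    target.mkdir(parents=True, exist_ok=True)
    for log_file in app_log_dir.glob("*.log"):
        shutil.copy(log_file, target / log_file.name)
```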

System logs


Tracking and storing operating system logs and diagnostics, such as /var/log/messages or the Windows event log, is again a great aid to examining application behaviour and investigating potential issues after the tests have completed. This information is useful not only for debugging but also for checking for potential issues that may not have been picked up by the specific check criteria in the tests themselves.
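A hedged sketch of how a harness might snapshot the OS log at the end of a run, on either platform (the exact commands and permissions are assumptions; reading /var/log/messages typically needs elevated rights):

```python
import platform
import subprocess
from pathlib import Path

def capture_system_log(results_dir: Path) -> None:
    """Snapshot the operating system log alongside the test results."""
    results_dir.mkdir(parents=True, exist_ok=True)
    if platform.system() == "Windows":
        # Export the Application event log to a file stored with the results.
        subprocess.run(
            ["wevtutil", "epl", "Application", str(results_dir / "application.evtx")],
            check=False,
        )
    else:
        # Keep a copy of the syslog as it stood when the tests finished.
        syslog = Path("/var/log/messages")
        if syslog.exists():
            (results_dir / "messages.snapshot").write_bytes(syslog.read_bytes())
```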

Test timings


Many of my tests relate to specific performance targets and so have associated timings that form part of their success criteria. Even for those that don't, storing the time taken to run each test can provide some very useful information. Some of the automated tests that I run do not perform any checks at all, but simply gather timings which I can use to model application performance through the course of the test and identify unhealthy patterns or likely points of failure.

Also if, as I do, you store a history of test results in a database then this allows you to graph the changing behaviour of the test over time through multiple iterations of the software and identify patterns or key dates when behaviour changed.
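As an illustration of the idea (the SQLite schema and field names are assumptions, not a description of our actual results database), a small context manager can record the elapsed time of each test keyed by build, ready for graphing later:

```python
import sqlite3
import time
from contextlib import contextmanager

@contextmanager
def timed_test(db_path: str, test_name: str, build: str):
    """Record how long a test took, keyed by build, so trends can be graphed over time."""
    start = time.monotonic()
    try:
        yield
    finally:
        elapsed = time.monotonic() - start
        conn = sqlite3.connect(db_path)
        with conn:
            conn.execute(
                "CREATE TABLE IF NOT EXISTS test_timings "
                "(test_name TEXT, build TEXT, run_at TEXT DEFAULT CURRENT_TIMESTAMP, seconds REAL)"
            )
            conn.execute(
                "INSERT INTO test_timings (test_name, build, seconds) VALUES (?, ?, ?)",
                (test_name, build, elapsed),
            )
        conn.close()

# e.g. with timed_test("results.db", "bulk_load_10m_rows", build="3.5.1"): run_the_test()
```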

Application Memory Usage


As with test timings, some of my tests have a specific memory criterion which is checked; for the majority of tests, however, I simply log the system memory usage of the application through the test. Again, by storing this information in a database we can track the memory behaviour of the system over time and identify key dates when memory issues may have been introduced. Knowing the memory usage of the application can also be a valuable piece of information when debugging intermittent faults under high load.
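As a sketch of one way to gather this (assuming the third-party psutil package is available and the harness knows the application's process id), a sampler can poll resident memory at intervals for the duration of the test:

```python
import threading
import time

import psutil  # third-party library; assumed to be available in the harness environment

def sample_memory(pid: int, stop: threading.Event, interval: float = 5.0):
    """Poll the application's resident set size until the stop event is set."""
    proc = psutil.Process(pid)
    samples = []
    while not stop.is_set():
        samples.append((time.time(), proc.memory_info().rss))
        stop.wait(interval)
    return samples

# Typically run in a background thread for the length of the test, with the samples
# written to the results database alongside the timings once the test completes.
```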

Test Purpose


Not strictly something that is gathered by the test, but storing the test purpose in the test results makes it much easier for anyone investigating the cause of test failures, especially when they are not the person who wrote the tests.
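This needs nothing more elaborate than writing the purpose into the result record itself; a tiny illustration (the field names are assumptions):

```python
import json
from pathlib import Path

def write_result(results_dir: Path, test_name: str, purpose: str, passed: bool) -> None:
    """Store the test's stated purpose with its outcome so failures are easier to interpret."""
    record = {"test": test_name, "purpose": purpose, "passed": passed}
    (results_dir / f"{test_name}.json").write_text(json.dumps(record, indent=2))
```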

Test Application Logs


The most successful test automation implementations are the ones that view their test harnesses as products in their own right, and in that regard these should generate their own application logs, which can also be stored with the results of a test run. Should unexpected results arise from the tests, these logs provide a wealth of information on the activities carried out by the test application/harness and any issues encountered which may have caused the exception.
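For a Python-based harness, one simple approach (a sketch only, using the standard logging module; the file layout is an assumption) is to give the harness its own log file inside the results directory for the run:

```python
import logging
from pathlib import Path

def configure_harness_logging(results_dir: Path) -> logging.Logger:
    """Give the test harness its own log, stored with the results of the run."""
    results_dir.mkdir(parents=True, exist_ok=True)
    logger = logging.getLogger("test_harness")
    logger.setLevel(logging.DEBUG)
    handler = logging.FileHandler(results_dir / "harness.log")
    handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
    logger.addHandler(handler)
    return logger

# e.g. logger.info("Loading archive for test %s", test_name)
```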

If you have gone to the effort of implementing a suite of automated tests then a lot of the hard work is done. Take the time to consider how you can use the power of these tests to do more than just checking and instead reap a wealth of useful information that can be used to assist in and improve your testing process.

Copyright (c) Adam Knight 2009-2010
