This sounds a bit obvious, but if a check is being performed against a file or set of information then it makes diagnosing issues much easier if the actual data being checked is available after the fact, and not just the result of the check. Knowing that the result of a test was 999 provides far more information than simply knowing it failed a check criterion that the result should not exceed 64.
Also, for file/data-driven tests such as many of those used at RainStor, we apply conversions to remove inconsistent or random information from test results prior to comparison against expected results. Storing the original result as well as the version modified for comparison allows valuable de-risking investigation to be performed, to ensure that you are not masking genuine issues through the process of modifying results.
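As a minimal sketch of this idea (the volatile patterns, file names, and directory layout here are illustrative assumptions, not RainStor's actual tooling), the harness can normalise the output for comparison while keeping the raw output on disk so the masking itself can be audited later:

```python
import re
from pathlib import Path

# Hypothetical examples of volatile data that would otherwise make
# byte-for-byte comparison against expected results impossible.
VOLATILE_PATTERNS = [
    (re.compile(r"\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}"), "<TIMESTAMP>"),
    (re.compile(r"run_id=\w+"), "run_id=<ID>"),
]

def normalise(text: str) -> str:
    """Replace volatile fields with stable placeholders."""
    for pattern, placeholder in VOLATILE_PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

def check_result(actual: str, expected: str, results_dir: Path) -> bool:
    """Compare normalised output, but store both raw and normalised
    versions with the test results for later investigation."""
    results_dir.mkdir(parents=True, exist_ok=True)
    (results_dir / "actual.raw").write_text(actual)
    normalised = normalise(actual)
    (results_dir / "actual.normalised").write_text(normalised)
    return normalised == expected
```

With both files stored, a reviewer can diff `actual.raw` against `actual.normalised` to confirm that the conversion only removed genuinely volatile data.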
If the application has a logging mechanism it is useful to copy the logs associated with a test run and store them with the results of the run. These can be an excellent diagnostic tool to assist the tester in investigating issues that may have arisen in the tests, even after the test environment has been torn down.
Tracking and storing operating system logs and diagnostics, such as /var/log/messages or the Windows event log, is again a great aid in examining application behaviour and investigating potential issues after the tests have completed. This information is not only useful for debugging but also for checking for potential issues that may not have been picked up by the specific check criteria in the tests themselves.
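One way to sketch this log-archiving step in Python (the application log path is a made-up example, and /var/log/messages is only one of several locations a system may use):

```python
import shutil
from pathlib import Path

# Illustrative sources only; substitute the logs your application and
# platform actually produce.
LOG_SOURCES = [
    Path("/var/log/messages"),        # system log on many Linux distros
    Path("/opt/myapp/logs/app.log"),  # hypothetical application log
]

def archive_logs(run_results_dir: Path) -> list[Path]:
    """Copy any logs that exist into the stored results for this run,
    so they survive the tear-down of the test environment."""
    log_dir = run_results_dir / "logs"
    log_dir.mkdir(parents=True, exist_ok=True)
    copied = []
    for source in LOG_SOURCES:
        if source.exists():
            # copy2 preserves timestamps, which can matter when
            # correlating log entries with test events.
            copied.append(Path(shutil.copy2(source, log_dir / source.name)))
    return copied
```

Calling `archive_logs(results_dir)` at the end of each run (whether it passed or failed) keeps the diagnostics alongside the results they explain.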
Many of my tests relate to specific performance targets and so have associated timings that form part of their success criteria. Even for those that don't, storing the time taken to run each test can provide some very useful information. Some of the automated tests that I run do not perform any checks at all, but simply gather timings which I can use to model application performance through the course of the test and identify unhealthy patterns or likely points of failure.
Also, if, as I do, you store a history of test results in a database, then you can graph the changing behaviour of a test over time through multiple iterations of the software and identify patterns or key dates when behaviour changed.
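A minimal sketch of recording timings into a database for later graphing (the schema and names here are my own assumptions, using SQLite purely for illustration):

```python
import sqlite3
import time

def init_db(conn: sqlite3.Connection) -> None:
    """Create a simple history table of per-test durations."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS test_timings ("
        " test_name TEXT, run_date TEXT, duration_secs REAL)"
    )

def record_timing(conn, test_name, test_func):
    """Run a test, even one with no checks of its own, and store how
    long it took alongside the date of the run."""
    start = time.perf_counter()
    test_func()
    duration = time.perf_counter() - start
    conn.execute(
        "INSERT INTO test_timings VALUES (?, datetime('now'), ?)",
        (test_name, duration),
    )
    conn.commit()
    return duration

def timing_history(conn, test_name):
    """(run_date, duration) rows, ready to plot duration over time."""
    return conn.execute(
        "SELECT run_date, duration_secs FROM test_timings"
        " WHERE test_name = ? ORDER BY run_date",
        (test_name,),
    ).fetchall()
```

Plotting `timing_history` output for a test across software versions is what exposes the gradual slow-downs or sudden step changes described above.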
Application Memory Usage
As with test timings, some of my tests have specific memory criteria which are checked; however, for the majority of tests I simply log the system memory usage of the application through the test. Again, by storing this information in a database we can track the memory behaviour of the system over time and identify key dates when memory issues may have been introduced. Knowing the memory usage of the application can also be a valuable piece of information when debugging intermittent faults under high load.
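A rough sketch of how such sampling might look on Linux, reading the application's resident set size from /proc (this is an assumption about the platform; Windows or a cross-platform library would need a different mechanism):

```python
import time
from pathlib import Path

def rss_kb(pid: int):
    """Resident set size of a process in kB, read from /proc
    (Linux only). Returns None if the value cannot be read."""
    try:
        status = Path(f"/proc/{pid}/status").read_text()
    except OSError:
        return None
    for line in status.splitlines():
        if line.startswith("VmRSS:"):
            return int(line.split()[1])  # line looks like "VmRSS:  123456 kB"
    return None

def sample_memory(pid: int, interval_secs: float, samples: int):
    """Poll the application's memory periodically during a test run,
    returning (timestamp, rss_kb) pairs to store with the results."""
    readings = []
    for _ in range(samples):
        readings.append((time.time(), rss_kb(pid)))
        time.sleep(interval_secs)
    return readings
```

Writing the readings into the same results database as the timings gives memory its own history to graph alongside performance.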
Test Purpose
Not strictly something that is gathered by the test itself, but storing the test purpose in the test results makes it much easier for anyone investigating the cause of test failures to understand what was being checked, especially if this is not the person who wrote the tests.
Test Application Logs
The most successful test automation implementations are the ones that view their test harnesses as products in their own right, and in that regard these should generate their own application logs, which can also be stored with the results of a test run. Should unexpected results arise from the tests, these logs provide a wealth of information on the activities carried out by the test application/harness and any issues encountered which may have been the cause of the exception.
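Using Python's standard logging module as an example (the logger name, file name, and format are illustrative choices, not a prescribed layout), giving the harness its own log stored with the run's results might look like this:

```python
import logging
from pathlib import Path

def harness_logger(run_results_dir: Path) -> logging.Logger:
    """Give the test harness its own log file, stored alongside the
    results of the run it produced."""
    run_results_dir.mkdir(parents=True, exist_ok=True)
    logger = logging.getLogger("test_harness")
    logger.setLevel(logging.DEBUG)
    handler = logging.FileHandler(run_results_dir / "harness.log")
    handler.setFormatter(
        logging.Formatter("%(asctime)s %(levelname)s %(message)s")
    )
    logger.addHandler(handler)
    return logger
```

The harness then logs its own activity (environment set-up, files compared, tear-down) so that when a test result looks suspicious, the first question, "was it the application or the harness?", can be answered from the stored run.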
If you have gone to the effort of implementing a suite of automated tests then a lot of the hard work is done. Take the time to consider how you can use the power of these tests to do more than just checking and instead reap a wealth of useful information that can be used to assist in and improve your testing process.
Copyright (c) Adam Knight 2009-2010