Thursday 13 May 2010

Why automated testing is more than just checking

For my first post in a while (after a period of intensive DIY - house is looking great, blog is a bit dusty!) I thought I'd expand on a point I've made a few times recently in various forums. In the discussion following my post Testing the Patient I commented that I would not start referring to automated tests as checks, because my tests gather more information than is required to execute the checks. Here I expand on that point with some examples of information that can be usefully gathered by automated tests over and above what is needed to feed checking mechanisms.

Test Results


This sounds a bit obvious, but if a check is being performed against a file or set of information then diagnosing issues is much easier if the actual data being checked is available after the fact, and not just the result of the check. Knowing that the result of a test was 999 provides far more information than knowing only that it failed a check criterion that the result should not exceed 64.
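
As a minimal sketch of the idea (the function name, file name and threshold here are all hypothetical, not tied to any specific harness), a check might persist the observed value alongside the verdict rather than discarding it:

    import json

    def check_row_count(actual, limit=64, results_file="row_count_result.json"):
        # Persist the observed value as well as the pass/fail verdict.
        passed = actual <= limit
        with open(results_file, "w") as f:
            json.dump({"actual": actual, "limit": limit, "passed": passed}, f)
        return passed

    # A failure now records how far outside the limit we were
    # (e.g. actual=999 against limit=64), not just that the check failed.
    check_row_count(999)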

Also, for file/data-driven tests such as many of those used at RainStor, we apply conversions to remove inconsistent or random information from test results prior to comparison against expected results. Storing the original result as well as the version modified for comparison allows for valuable de-risking investigation, to ensure that the process of modifying results is not masking issues.
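
A sketch of this kind of normalisation, assuming the volatile data is a timestamp in a known format (the pattern, file handling and ".raw" naming are illustrative, not RainStor's actual process):

    import re
    import shutil

    TIMESTAMP = re.compile(r"\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}")

    def normalise(path):
        # Keep the unmodified result for later de-risking investigation...
        shutil.copy(path, path + ".raw")
        # ...then strip the volatile data before comparison.
        with open(path) as f:
            text = TIMESTAMP.sub("<TIMESTAMP>", f.read())
        with open(path, "w") as f:
            f.write(text)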


Application Logs


If the application has a logging mechanism it is useful to copy the logs associated with a test run and store them with the results of the run. These can be an excellent diagnostic tool to assist the tester in investigating issues that arise in the tests, even after the test environment has been torn down.
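
One possible shape for this, as a sketch (the directory layout and the *.log naming convention are assumptions):

    import glob
    import os
    import shutil

    def archive_app_logs(log_dir, results_dir):
        # Copy the application's logs into the stored results for this run,
        # so they survive the teardown of the test environment.
        os.makedirs(results_dir, exist_ok=True)
        for log in glob.glob(os.path.join(log_dir, "*.log")):
            shutil.copy(log, results_dir)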

System Logs


Tracking and storing operating system logs and diagnostics, such as /var/log/messages or the Windows event log, is again a great aid to examining application behaviour and investigating potential issues after the tests have completed. This information is useful not only for debugging but also for spotting potential issues that were not picked up by the specific check criteria in the tests themselves.
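
A minimal sketch of snapshotting OS logs on a Unix host (assuming /var/log/messages is readable by the test user; a Windows run might query the event log with wevtutil instead):

    import os
    import shutil

    def snapshot_system_logs(results_dir):
        # Keep a copy of the OS logs alongside the test results.
        # On Windows, "wevtutil qe System" could serve a similar role.
        os.makedirs(results_dir, exist_ok=True)
        try:
            shutil.copy("/var/log/messages", results_dir)
        except OSError:
            pass  # system logs may require elevated permissions to read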

Test Timings


Many of my tests relate to specific performance targets and so have associated timings that form part of their success criteria. Even for those that don't, storing the time taken to run each test can provide some very useful information. Some of the automated tests that I run perform no checks at all; they simply gather timings, which I can use to model application performance over the course of the test and identify unhealthy patterns or likely points of failure.
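
As an illustrative sketch (the wrapper name is my own, not part of any particular framework), a harness could wrap each test to capture elapsed time whether or not the test itself checks anything:

    import time

    def timed(test_func, *args):
        # Run a test and return its result together with the elapsed time,
        # regardless of whether the test performs any checks of its own.
        start = time.perf_counter()
        result = test_func(*args)
        return result, time.perf_counter() - start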

Also, if you store a history of test results in a database, as I do, you can graph the changing behaviour of a test over time through multiple iterations of the software and identify patterns, or the key dates when behaviour changed.
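
A small sketch of such a history store using SQLite (the table layout is illustrative, not a prescribed schema):

    import sqlite3

    def record_timing(db_path, test_name, build, seconds):
        # Append one timing to a history table; querying by test name
        # then gives the data to graph behaviour across builds and dates.
        conn = sqlite3.connect(db_path)
        conn.execute("""CREATE TABLE IF NOT EXISTS timings
                        (test TEXT, build TEXT, seconds REAL,
                         run_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP)""")
        conn.execute("INSERT INTO timings (test, build, seconds) VALUES (?, ?, ?)",
                     (test_name, build, seconds))
        conn.commit()
        conn.close()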

Application Memory Usage


As with test timings, some of my tests have a specific memory criterion that is checked; for the majority of tests, however, I log the application's system memory usage throughout the test. Again, by storing this information in a database we can track the memory behaviour of the system over time and identify the key dates when memory issues may have been introduced. Knowing the memory usage of the application can also be a valuable piece of information when debugging intermittent faults under high load.
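
As a sketch of one way to sample memory (reading VmRSS from /proc, so Linux only; the helper name is my own):

    def rss_kb(pid):
        # Read the resident set size of a process from /proc (Linux only).
        with open(f"/proc/{pid}/status") as f:
            for line in f:
                if line.startswith("VmRSS:"):
                    return int(line.split()[1])  # reported in kB

    # Sampled periodically during a test and stored with the results,
    # these readings can be graphed over time just like the timings above.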

Test Purpose


Not strictly something that is gathered by the test, but storing the test purpose alongside the test results makes it much easier for anyone investigating the cause of a failure, especially if they are not the person who wrote the tests.
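
This could be as simple as including a purpose field in each stored result record; the names and values below are purely illustrative:

    result_record = {
        "test": "bulk_import_limits",  # hypothetical test name
        "purpose": "Verify imports beyond the partition limit are rejected cleanly",
        "passed": False,
        "actual": 999,
    }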

Test Application Logs


The most successful test automation implementations are the ones that view their test harnesses as products in their own right, and in that regard the harness should generate its own application logs, which can also be stored with the results of a test run. Should unexpected results arise from the tests, these logs provide a wealth of information on the activities carried out by the test application/harness and on any issues encountered that may have been the cause of the exception.
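
A minimal sketch, using Python's standard logging module, of giving the harness its own persistent log (file name and format are illustrative):

    import logging

    # Treat the harness as a product in its own right: give it its own
    # log file, stored with the run results rather than discarded.
    logging.basicConfig(
        filename="harness.log",
        level=logging.DEBUG,
        format="%(asctime)s %(levelname)s %(message)s",
    )
    logging.info("Starting test run")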

If you have gone to the effort of implementing a suite of automated tests then a lot of the hard work is done. Take the time to consider how you can use the power of those tests to do more than just checking, and instead reap a wealth of useful information that can assist in and improve your testing process.

Copyright (c) Adam Knight 2009-2010
steveo1967 said...

I like your thinking on this, Adam.

Can you still call the automated part tests?

It appears you are using automation to aid your testing and to create data for any investigative work that needs to be done later. I think Michael Bolton has made a very good point about this type of automation.

Any automation that can aid your testing is good, and IMO the less we look at checks vs tests, and the more we actually use our thinking skills to test, the better.

regards

John

Adam Knight said...

John,

Thanks for the feedback.

>Can you still call the automated part tests?

Yes, certainly. In my opinion a test is simply something that is performed to obtain information about an entity such as a person, object or software system. There are many examples of tests that simply gather information rather than being associated with boolean check criteria, e.g. medical tests, fitness tests, IQ tests, litmus tests. The automation is a resource for executing tests, and limiting the information returned to a set of pass/fail results may mean missing out on the opportunity to gather far more detailed and useful information.
