Friday, 20 May 2011

Automated Tests - you on a lot of good days

Automated testing is a contentious subject which provokes a lot of passion in the testing community, and much has been written on the dangers and risks of employing automation in testing. I'm certainly of the opinion that automation is not something to be undertaken lightly. The costs of getting it wrong range from wasted time and effort up to an artificially high level of confidence in the system under test. While the potential pitfalls of automation receive a lot of attention, I think that on the flip side one of its key potential benefits can easily be overlooked.

A snapshot of knowledge



When I pick up testing on a feature, my understanding of that area and the context in which it operates increases dramatically as I immerse myself in the testing of it. For the period in which I am testing that feature, I am holding far more information on it in my head than if I were coming at it cold. As my understanding grows, my exploratory tests improve and so does my ability to identify the key tests that confirm the correct operation of that feature. If, at this point, I can encapsulate some of that knowledge into a set of automated tests, I am in essence capturing a snapshot of that elevated state of understanding. As I move on to other features my understanding of that feature will fade. The tests that I designed at the time will not. Well designed automation will repeatedly check the aspects of the product that I thought were important at the time when I best understood the area and the customer requirements that drove its development.

On many occasions I have reviewed test suites covering areas that I have not worked on in a while and found myself quietly impressed by the understanding of the area that I must have had when I created them.

There is an advert running in the UK at the moment for a vitamin supplement that claims to help you be you on a "really good day". I'm not suggesting that automation will ever replace the insights of a skilled tester, but a well designed test pack can capture small glimmers of that insight at its peak and use them to drive ongoing checks. You on lots of "really good days".

Copyright (c) Adam Knight 2011

Saturday, 14 May 2011

Under your nose - uncovering hidden treasure in the tools you already use

At the recent STC meetup in Oxford, Lisa Crispin gave a lightning talk on using an IDE to manage and edit your tests. Managing the files that constitute our automated test packs is not an easy process, particularly the maintenance of those files in SVN, and I liked the idea of using an IDE to help the team with this. On investigating the potential benefits of an IDE for our team, I was surprised to discover that similar productivity gains could actually be achieved by making fuller use of the tools we were already using.

False start


I already had IntelliJ and Eclipse installed on my laptop for developing Java test harnesses, so my first inclination was to follow Lisa's suggestion and use one of these IDEs to manage the test files.

After thinking things through and experimenting with some ideas, I realised that this was not really going to work in my specific context. The reason is that most of the time we develop our test packs on remote test servers, not local machines. Although there are facilities for remote editing in Eclipse, for example, these rely on services running on the target machines, which would have severely diminished the flexibility of the approach. Many of the benefits of such applications would also not apply to our context: our test file structure and format are specific to our harness, so facilities such as dependency checking, syntax checking and auto-completion would not be applicable.

A different perspective


Although the idea of using a programming IDE appeared to be a non-starter for us, I still felt strongly that some of the benefits of that approach could help us if we could achieve them through other tools. On researching the specific requirements that would directly improve our remote, interactive test development, I found that many of the features I was looking for were available through add-ins or configuration options for tools we were already using.

Custom file editing and svn management


On researching explorer tools with custom editor options, I discovered that Notepad++ actually supports a solid file browsing add-in, Explorer. As well as file browsing, it also provides excellent file search and replace features. Combining this with the icon overlay feature of TortoiseSVN and the support for the standard Explorer context menu gives Subversion integration directly from the file browser.


In addition to the Explorer searching, the "OpenFileInSolution" add-in indexes the project file system for fast searching, and the "User defined language" feature allows me to add syntax highlighting for our harness commands, picking up simple syntax errors in command input files.

Remote file editor


Another item on the hit list was the ability to edit remote files directly in an editor. I discovered another useful Notepad++ add-in, "NppFTP", that achieves exactly this. It presents a remote explorer window within the Notepad++ application, allowing me to quickly access a remote directory and edit test files in my text editor.

Remote file management


Finally my search moved on to the ability to remotely manage files in SVN. Some googling led me to a great post on using WinSCP to work with remote SVN repositories. Again, WinSCP was a tool that we already used extensively in the team. Based on the hints available there, I quickly set up a custom toolbar in WinSCP to add, check status, check out, check in and revert files and directories using custom commands.
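
The exact entries will depend on the repository layout, but a minimal sketch of the kind of remote custom commands that can be bound to such toolbar buttons might look like the following. I am assuming WinSCP's "!" token for the selected remote file or directory and its "!?prompt?default!" pattern for user input; the WinSCP custom command documentation has the full pattern syntax.

    # One command per toolbar button, executed on the remote test server over the session.
    # "!" is replaced with the selected file or directory; "!?Commit message:?!" prompts for text.
    svn status "!"
    svn add "!"
    svn commit -m "!?Commit message:?!" "!"
    svn update "!"
    svn revert -R "!"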

Custom file actions


Using the power of the custom command support in WinSCP has allowed me to go further than I expected in the level of interaction with our test file packs. Custom commands have allowed us to create operations based on the common actions and bespoke file relationships that are unique to our test development environment. These operations include:-
  • Updating result files from test run directories into the corresponding source pack through a single button operation
  • Adding custom metadata files for existing tests and test packs via a click-and-prompt operation
  • Copying existing tests together with all associated meta files in a single click operation


All of these activities were obviously achievable through shell scripting, but adding simple commands to our remote SCP client to perform these actions makes it far easier to interact remotely with the test servers and work with our test pack files.
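
As an illustration of the first of these operations, here is a minimal sketch of the sort of script that such a custom command could invoke. The directory layout, path mapping and file names are invented for the example rather than taken from our actual environment:

    #!/bin/sh
    # Illustrative only: copy a result file produced in a test run directory back
    # over the expected results in the corresponding source pack, ready for review
    # and check-in. The test_runs/test_packs layout is an assumption of this sketch.
    RUN_FILE="$1"                # e.g. /data/test_runs/2011-05-14/pack_a/query1.result
    SRC_FILE=$(echo "$RUN_FILE" | sed 's|test_runs/[^/]*/|test_packs/|')
    if [ -f "$SRC_FILE" ]; then
        cp "$RUN_FILE" "$SRC_FILE"
        svn status "$SRC_FILE"   # show the pending modification
    else
        echo "no matching source pack file for $RUN_FILE" >&2
        exit 1
    fi

Hooked up as a WinSCP custom command, with the selected remote result file substituted for the argument, this becomes the single button operation described above.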

I know this is not rocket science, and I suppose in essence that is my point. We were already using both WinSCP and Notepad++ in the team, yet had not really investigated the power of the tools under our noses to make our lives easier. It was only through the process of looking for the benefits offered by other applications that I discovered the features that were already at my disposal.

Next time you fire up the tools that you use every day without thinking, why not take the time to have a closer look at how you can make them work harder for you?

Copyright (c) Adam Knight 2011

Tuesday, 3 May 2011

Situation Green - The dangers of using automation to retest bugs

I am a great believer in using a structure of individual responsibility for items of work done, rather than monitoring individual activity with fine grained quantitative measurements and traceability matrices. I do think, however, that it is part of my responsibility to monitor the general quality of our bug documentation, both the initial issue documentation and the subsequent retesting. On occasion I have had bugs passed to me with retest notes along the lines of "regression test now passes". Whenever I see this my response is almost always to push the bug back to the tester with a recommendation to perform further testing and add the details to the bug report. While I understand the temptation to treat a passing automation check as a bug retest, it is not an activity that I encourage in any way in my organisation.

Giving in to temptation

I'm sure that we are not the only test team to feel pressure. When faced with issues under pressure, the temptation is to focus on whatever activity removes the issue and restores a state of "normality". The visible issue in a suite of automated tests (or manual checks) is the failing check, and resolving a bug just to the extent that the check returns the expected result can seem the appropriate action for a quick resolution. The danger with this approach, however, is that it results in "gaming" the automation: we ensure that the checks pass even though the underlying issue has not been fully resolved. We focus on resolving the visible problem without the activity required to give confidence in the underlying issue that caused the visible behaviour. Some simple examples:-

  • Fixing one checked example of a general case
    Sometimes a negative test case provides just one example of a general behaviour, e.g. of our error handling in a functional area. If that check later exposes unexpected behaviour, then a resolution targeted at that specific scenario can leave other similar failure modes untested. I've seen this situation where a check deleted some files to force a failure in a transactional copy. When our regression suite uncovered a change in the transactional copy behaviour, the initial fix was to check for the presence of all files prior to the copy, fixing the test case but leaving open other similar failures around file access and permissions (see the sketch after this list).
  • Updating a result without ensuring that the purpose of the test is maintained
    There is a danger, in focussing on getting a set of tests "green", that we actually lose the purpose of a test. I've seen this situation where a check shows up new behaviour, the tester verifies that it is the result of an intentional change and so updates the new result into the automation repository, but the original purpose of the check is lost in the transaction.
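
To make the first of these examples more concrete, here is a minimal sketch of a check that covers the wider failure case rather than only the scenario that originally failed. The "transactional_copy" command and the directories are invented stand-ins for the real tool and test data:

    #!/bin/sh
    # Illustrative only: drive the copy through several induced failure modes and
    # check that each fails cleanly, leaving no partial copy in the destination.
    GOLD=/data/testpack/gold     # pristine input file set
    SRC=/data/testpack/src
    DST=/data/testpack/dst

    run_case() {                 # $1 = description, $2 = command that breaks the setup
        rm -rf "$SRC" "$DST" && cp -r "$GOLD" "$SRC" && mkdir -p "$DST"
        sh -c "$2"
        if transactional_copy "$SRC" "$DST"; then
            echo "UNEXPECTED PASS: $1"
        elif [ -n "$(ls -A "$DST")" ]; then
            echo "FAIL - partial copy left behind: $1"
        else
            echo "ok - failed cleanly: $1"
        fi
    }

    run_case "source file missing"      "rm $SRC/file2.dat"
    run_case "source file unreadable"   "chmod 000 $SRC/file2.dat"
    run_case "destination not writable" "chmod 555 $DST"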

These are a couple of simple examples but I'm sure that there are many cases where we can lose focus on the importance of an issue through mistakenly concentrating on getting the automation result back to an expected state. No matter how well designed our checks and scenarios, this is an inherently risky activity. Michael Bolton refers to the false confidence of the "green bar".

Re-Testing

I always try to focus on the fact that retesting is still testing and, as with all testing, is a conscious and investigative process. Our checks exist to warn us of a change in behaviour which requires investigation. They are a tool to describe our desired behaviour, not a target to aim at. As well as getting the check to an expected state, our activity during retesting should, more importantly, be focussed on:-
  • Performing sufficient exploration to give us confidence that no adverse behaviour has been introduced across the feature area
  • Examining the purpose and behaviour of the check to ensure that the original intention is still covered
  • Adding any further checks that we think may be necessary given that we now know that there is a risk of regression in that area

If we fall into the trap of believing that automation equates to testing, even on the small scale of bug retests, we risk measuring, and therefore fixing, the wrong thing. I am a huge proponent of automation to aid the testing effort; however, we should remain aware that test automation can introduce false incentives into the team that are just as damaging as any misguided management targets.

Copyright (c) Adam Knight 2009-2011
