Tuesday, 17 April 2012


Not One Problem - Why We Shouldn't Treat Test Automation as a Solution Delivery Project


I recently posted a tweet saying
"You decide to include test automation on your software development project. Congratulations, you now have two software development projects."
Whilst I hold this to be a valid statement, on reading it back I think that it misses the point for a lot of test contexts, including mine. The software development that I work on is not a project; it is an ongoing development. Although there are goals and timescales, these are part of an ongoing iterative process to continuously develop and support a software product. Likewise, the test automation efforts that support this development are not a project, and this is an important distinction to make.

To be or not to be ... a Project?


A project, by definition, is a finite activity run over a fixed period to deliver a finite set of specified goals. The classic project model is one of gathering requirements, selecting or developing a solution that best meets those requirements, and implementing that solution. Once the solution has been implemented, the "users" can then achieve their business goal through the features of that product. The temptation when faced with a test automation challenge is to treat it like a traditional business problem which can be addressed by a project-based approach. There are many tool vendors out there who market to this model, selling test automation solutions with promises such as "Code Free" designs and "Point and Click" test development once the solution has been implemented. Unfortunately there is a fundamental flaw in the principle of a test automation solution: test automation does not represent a fixed problem to solve. There are plenty of well-defined business problems which can be addressed through a solution delivery project based on a static set of user requirements. The automated testing of an ongoing, dynamic software product development is not one of them.

A moving target


The very reason for introducing test automation is to allow us to obtain measurements and run checks to gain confidence in the behavior of a constantly changing system. The advent of agile software development has introduced a higher level of flexibility into software development than was previously seen in many organisations. This furnishes companies with the ability to change focus and adapt to emerging markets quickly with rapid changes to their software solutions. For many organisations, including my own, the target markets and therefore functionality of their products will end up changing significantly over time, with the result that the demands placed on the test automation will change as well.

If I were to write up a list of requirements for a software test automation solution for my product as of five years ago, it would read very differently from the same list drawn up today. At that time our software was primarily implemented as a single-server database archiving solution with a limited workflow. If at that time we had taken the approach of implementing a rigid test automation solution from the outset, then this would have severely limited our scope to adapt with the changes to the software over time to the multi-platform, multi-server data management system that we now test.

Anticipating change


When tackling test automation, the key first step is understanding that you are not trying to address one problem. You are addressing a series of problems, each one new and unique and unlikely to have been predictable at the inception of the testing activities. These problems are an inevitable result of working on a changing and evolving software product. Approaching our automation not as an up-front delivery project but as an ongoing and incremental development activity in its own right puts us in the driving seat to adapt to these problems as they arise. Here are a few of the changes that we've been able to implement in our test automation over the last few years:
  • Moving from a single operating platform to over 40 platform/storage combinations
  • Moving from a static import-then-query archiving structure to a more dynamic data management approach
  • Integrating parallel Java test harnesses for multi-threaded query testing
  • Allowing the iteration of test suites to scale import and query activity
  • Implementing random execution order in the test packs
  • Adding support for executing test packs across multiple servers
  • Enabling the execution of multiple test packs in parallel against a suite of test servers
  • Adding support for testing new API interfaces
  • Implementing more interactive HTML-based reporting, allowing quicker drill-down into failure reasons
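To illustrate one of the simpler items above, random execution order is straightforward to retrofit into a harness, provided the shuffle is reproducible. Here is a minimal sketch in Python; the function name and pack names are hypothetical, not taken from our actual harness. The key design choice is to log the seed, so that an ordering which exposes a failure can be replayed exactly.

```python
import random

def shuffled_run_order(test_packs, seed=None):
    """Return (seed, packs) with the packs in a randomized order.

    Passing the same seed reproduces the same order, so a failing
    run can be replayed exactly by re-using its logged seed.
    """
    if seed is None:
        # Pick and record a fresh seed so the run is still replayable.
        seed = random.randrange(2**32)
    rng = random.Random(seed)
    order = list(test_packs)
    rng.shuffle(order)
    return seed, order

# Hypothetical usage: log the seed alongside the chosen order.
seed, order = shuffled_run_order(["import", "query", "archive", "purge"])
print("seed=%d order=%s" % (seed, order))
```

Re-running with `shuffled_run_order(packs, seed=logged_seed)` yields the identical order, which is what makes randomization safe to use in a nightly harness.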

This week the team and I are working on some changes to isolate the test archives from each other more cleanly, to allow implementation-level options to be tested in isolation. Sometimes we do face challenges in the prioritization of automation development efforts alongside new feature testing, but as I wrote about in this post on writing your own test harness, what we do have is the flexibility to adapt as new problems arise. As Adam Goucher pointed out when he referenced that post, this is not limited to writing your own harness; the careful choice of customizable automation tools can also furnish this flexibility. In both cases what is required is an understanding that test automation is not something that can be defined up front and used for evermore. Instead it is a continuous development effort that sits in parallel with the development of the main product, every wave of change in the latter reflected in the former, and with the success or failure of each very much tied to the effective delivery of both.
