One of the great benefits of working in a process where testers are involved from the very start of each development, testing and automating concurrently with the programming activities, is that the testability of the software becomes a primary concern. In my organisation testability issues are raised and resolved early as an implicit part of the process of driving features through targeted tests. The testers and programmers have built a great relationship over the last few years, such that the testers are comfortable raising testability concerns and know that everyone will work together to address them.
As is natural when you have benefitted from something for a while, I confess that I'd started to take this great relationship for granted. A recent project has provided a timely reminder of just how important the issue of testability is...
Holes in our socks
We've recently introduced a new area of functionality into the system that has come via a slightly different route from most of our new features. The majority of our work is processed in the form of user stories which are then elaborated by the team, together with the product owner, in a process of collaborative specification. This process allows us to identify risks, including testability issues, and build the mitigating actions into our development of the corresponding features.
In this recent case the functionality came about from a bespoke engineering exercise undertaken by our implementation team for a customer, which was then identified as having generic value for the organisation and so brought in-house to integrate. The functionality itself will certainly prove very useful to our customers, but as the initial development was undertaken in the field in a staged project, testability issues were not identified or prioritised in the same way as they would have been in our internal process. We've already identified a number of additional work items needed to support the long-term testability of the feature through future enhancements, and overall the testing effort on the feature is likely to be higher than if the equivalent functionality had been started within the team with concurrent testing activities.
Given the nature of the originating development it is understandable why this happened, but the project has served as a reminder to me of the importance of testability in our work. It has also highlighted how much more effective iterative, test-driven approaches are at building testability into a product than approaches where testing is a post-development activity.
In response to a request for improving testability, a senior programmer in a previous employment once said to me "Are you suggesting that I change the software just to make it easier for you to test?". In a word, yes.
Improving the testability of the software provides such a significant benefit from a tester's perspective that it is surprising how many of the software projects I'm aware of gave it no consideration. In the simplest sense, improving testability reduces the time taken to achieve testing goals by making it quicker and easier to execute tests and obtain results. Testability also engenders increased confidence in our test results, through better visibility of the states and mechanisms on which those results are founded, and consequently in the decisions that they inform.
The benefits of improved testability are not limited to testing either. From working on supporting our system I know that improved testability can drive a consequential improvement in supportability. The two have many common characteristics such as relying on the ability to obtain information on the state of the system and the actions that have been performed on it.
Adding testability can even yield an improved feature set. I read somewhere that Excel's VBA scripting component was originally implemented to improve testability, and has gone on to become one of its key user-facing features (sadly I can't source a reliable reference for this - if anyone has one please let me know).
So what does this have to do with socks?
When researching testability for a UKTMF session a few years ago I came across this presentation by Dave Catlett of Microsoft, which included a great acronym for testability categories - SOCK (Simplicity, Observability, Control and Knowledge). I'm not normally a great fan of acronyms for categorisations, as they tend to imply exhaustiveness on a subject and fall down in the case of any extensions. As with most things in testing, James Bach also has a set of excellent testability heuristics which include similar categories, with the additional one of Stability. As luck would have it, in this case the additional category fits nicely onto the end of Catlett's acronym to give SOCKS (it would have been very different if the additional category was Quotability or Zestfulness). As it is, I think the result is a great mnemonic for testability qualities:-
Simplicity aids testability. Simplicity primarily involves striving to develop the simplest possible solutions that solve the problems at hand. Minimising the complexity of a feature to deliver only the required value helps testing by reducing the scope of functionality that needs to be covered. Feature creep or gold plating may look like over-delivering on the part of the programmer; however, the additional complexity can hinder attempts to test. Code re-use and coding consistency also fall into this category: re-using well tested code and well understood structures improves the simplicity of the system and reduces the need for re-testing.
I feel that simplicity is as much about limiting scope as it is about avoiding functional complexity. I've grown accustomed to delivering incrementally in small stories where scope is negotiated on a per story and per sprint basis. Working on a larger fixed scope delivery has certainly highlighted to me the value in restricting scope to target specific value within each story, and the ensuing testability benefits of this narrow focus.
Observability is the ability to monitor what the software is doing, what it has done, and the resulting states. Improving log files and tracing allows us to monitor system events and recreate problems. Being able to query component state allows us to understand the system when failures occur and prevent misdiagnosis of issues. When errors do occur, reporting a distinct message from which the point of failure can be easily identified dramatically speeds up bug investigations.
This is one area where we have identified a need to review and refactor recently in order to improve the visibility of state changing events across the multiple server nodes of our system. In addition to being a great help to testers, this work will also have the additional benefit of improving the ongoing supportability as well.
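To make this concrete, here is a minimal Python sketch of what observability can look like in practice: a component that exposes its state for querying and reports failures with a distinct, specific message. All of the names (FeedImporter, the IMPORT-102 code) are hypothetical illustrations, not part of our actual system.

```python
import logging

logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")
log = logging.getLogger("importer")

class FeedImporter:
    """Illustrative component that keeps its state queryable so tests
    and support staff can see what it is doing and what it has done."""

    def __init__(self):
        self.state = "idle"   # exposed state, not hidden in local variables

    def run(self, records):
        self.state = "running"
        log.info("import started: %d records", len(records))
        for i, record in enumerate(records):
            if "id" not in record:
                # A distinct message pinpoints exactly where and why we failed.
                self.state = "failed"
                log.error("IMPORT-102: record %d rejected, missing 'id'", i)
                return False
        self.state = "done"
        log.info("import finished")
        return True

importer = FeedImporter()
ok = importer.run([{"id": 1}, {"name": "no id"}])
print(ok, importer.state)   # → False failed
```

A test (or a support engineer) can now assert on `importer.state` and search the log for the failure code, rather than inferring what happened from downstream symptoms.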
Along with observability, control is probably the key area for testability, particularly so if wanting to implement any automation. Being able to cleanly control the functionality, to the extent of being able to manipulate the state changes that can occur within the system in a deterministic way, is hugely valuable to any testing efforts and a cornerstone of successful automation.
Control is probably the one area in my most recent example where we suffered most. Generally when implementing asynchronous processes we have become accustomed to asking for hooks to be integrated into the software that allow them to be executed in a synchronous way. The alternative is usually implementing sleeps in the tests to wait for processes to complete, which results in brittle, unreliable automation.
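The two alternatives above can be sketched in a few lines of Python. This is a hypothetical illustration, not our actual implementation: the Indexer class stands in for any asynchronous process, run_synchronously is the kind of hook we ask for, and wait_until is the common polling pattern that at least avoids fixed-length sleeps when no hook exists.

```python
import threading
import time

def wait_until(condition, timeout=5.0, interval=0.05):
    """Poll a condition rather than sleeping a fixed time: the test
    proceeds as soon as the state is reached, and fails loudly on timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    raise TimeoutError("condition not met within %.1fs" % timeout)

class Indexer:
    """Hypothetical asynchronous component with a synchronous test hook."""

    def __init__(self):
        self.done = threading.Event()

    def start(self):
        # Production path: work happens on a background thread.
        threading.Thread(target=self._work, daemon=True).start()

    def _work(self):
        time.sleep(0.1)   # stand-in for real background work
        self.done.set()

    def run_synchronously(self):
        # Test hook: same work, no thread, fully deterministic.
        self._work()

# Without a hook: poll for completion instead of a brittle fixed sleep.
idx = Indexer()
idx.start()
wait_until(idx.done.is_set)

# With a hook: the asynchronous process is driven synchronously.
idx2 = Indexer()
idx2.run_synchronously()
assert idx2.done.is_set()
```

The polling helper makes automation less brittle, but the synchronous hook is the better option where it can be designed in: it removes timing from the equation entirely.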
Exposing control in this way is achieved much more quickly and easily at the point of design rather than a retrofitting activity afterwards. I remember working on a client-server data analysis system some years ago which, as part of its feature set, also included a VBA macro capability. This was testability gold, as it allowed me to create a rich set of automated tests which directly manipulated the data objects in the client layer. The replacement application was in development for over a year before being exposed to my testing team, by which time it was too late to build in a scripting component. We were essentially limited to manual testing, which for a data analysis system was a severe restriction.
Knowledge, or Information, in the context of testability revolves around our understanding of the system and the behaviour that we are expecting to see. Do we have the requisite knowledge to critically assess the system under test? This can be in the form of understanding the system requirements, but can also include factors such as domain knowledge of the processes into which the system must integrate and an understanding of similar technologies to assess user expectations.
In the team in which I work, knowledge issues in the form of missing information or lack of familiarity with technologies are identified early in the elaboration stages. The approach to addressing these can vary from simply raising questions with the product owner or customer to clarify requirements, to a targeted research spike investigating specific technologies or domains. As we are seeing, with longer-term developments the learning curve for the tester coming into the process becomes much steeper, and testability from each tester's perspective is diminished. Additionally, with less immediate communication the testers have less visibility of the early development stages and consequently a weaker understanding of the design decisions taken and the rationale behind them. It has taken the testers some time to become as familiar with the decisions, designs, technologies and user expectations involved in our latest project as with those where they are actively involved in the requirement elaboration process.
Stability is the 'additional S' - I can see why it was not included in Catlett's acronym, as it is not an immediate testability characteristic, however as James suggests it is an important factor in testability. James defines stability specifically in terms of the frequency of, and control over, changes to the system. Working in an agile process where the testing occurs very much in parallel with active programming changes, functional changes are to be expected, so implementing these in a well managed and communicated way is critical. I find that the daily stand-ups are a great help in this regard. Having had experience in the past of a code base under active development by individuals not involved in the story process, I know how much the testing effort can be derailed by changes appearing in the system which are not expected by the testers and have not been managed in a controlled fashion.
I'd also be inclined to include stability of individual features under this category. It is very difficult to test a system in the presence of functional instability in the form of high levels of functional faults. The reasons for this are primarily that issues mask issues. The more faults that exist in the system, the greater the chance of other faults lying inaccessible and undetected. Additionally investigating and retesting bugs takes significantly longer than testing healthy functionality. Nothing hinders testing, and therefore diminishes testability, like an unstable system.
In hindsight I think that the big takeaway from this experience is that a lack of testability becomes more likely the later you leave the exposure of your software to testing. Following an agile development process has a natural side effect of building testability into the product as you go along. As George Dinwiddie points out in his post on the subject - if you drive your development through your tests then you will naturally build testability into each feature as you go. After enjoying this implicit benefit of our development approach for years, this value couldn't have been demonstrated to me more effectively than by working on a feature that had not been developed in this way.
I hope it is clear that I make no claim to have invented the categorisation of testability concepts; I just like the SOCKS acronym and find it a useful breakdown for discussing my own experiences on the subject. When preparing to present on this, as with most posts/presentations, my first step was to write down my own ideas on the topic before researching other references. In doing so I came up with a similar set of groupings, so was naturally pleased to find a good correlation with the references I've mentioned. For these, and other good links on the subject, please look below:-
Heuristics of Software Testability - James Bach
Improving Testability - Dave Catlett, Microsoft (presentation)
Design for Testability - Bret Pettichord
Design for Testability - George Dinwiddie