Friday, 19 November 2010

Don't call me Technical

I spent a very pleasant day today at the Agile Testing and BDD Exchange at SkillsMatter in London. In general the day was a good one, with slightly more focus on tools this year; I have to admit to preferring the more structure- and approach-based talks of last year.

One of the subjects that caused some Twitter discussion after the event arose through one of the presenters questioning whether testers were 'technical' enough to be comfortable working with the programming syntaxes being presented.

To highlight the issue I'll rewind to an earlier talk in the day, when Dave Evans of SQS and Erik Stenman presented an experience report on agile testing at Klarna. Erik discussed the fact that Klarna's online retail transaction processing software was written in Erlang, and asked the audience how many were programmers, and how many of those were familiar with Erlang. There was no sense of condescension; it was simply a show of hands of those familiar with that language.

Compare this to the later talk in which a similar question was asked of testers, yet it was framed not in the context of familiarity with the programming language in question, but in more general terms of how 'technical' the testing contingent were (I'm not sure if this was the exact term used, but it was the implication, and was the term carried into the subsequent Twitter discussions).

As Mike Scott of SQS put it:-

Why do people assume testers are not technical. Lets stop this now. Please don't patronise us.

Mike makes a valid point, but still (probably for brevity in a tweet) uses the 'technical' categorisation. Lanette Creamer provided an excellent response:-

I agree. Also, what is "technical"? It means different things to different people.

This couldn't be more true. The chap sat next to me was Jim, a tester from my team at RainStor. He did not put his hand up. Now, I've seen this guy read and understand SQL queries longer than some novels, and find faults with them through visual static analysis. Of course he is a technical tester. In fact the 'developers' in our team, all competent C/C++ programmers, treat Jim's SQL knowledge with something approaching reverence. He is an invaluable member of our team, as his "technical" database skills are fundamental to the database domain in which we operate. His lack of familiarity with object-oriented programming languages, however, was sufficient for him not to show his hand to be counted as one of the 'technical' testers in the room.

Given the accepted wisdom of having a multi-skilled team, isn't it about time we also accepted the value of multi-skilled testers, and that 'technical' is a categorisation that falls significantly short in that context? When discussing the skills of developers we do not try to impose such broad labels; we talk in a positive sense about the specific skills that individual developers possess. When discussing the various programming, scripting, analysis, database, operating system and other skills that testers may possess, it would be nice if the same courtesy were extended.

Copyright (c) Adam Knight 2009-2010

Tuesday, 9 November 2010

A confession - on assumptions

Hi everyone, my name is Adam, I am a software tester and I make assumptions.

Not much of a confession, I admit; however, assumptions are something of a dirty word in software testing. If not addressed head on they can become hidden problems, rocks just under the surface waiting to nobble your boat when the tide changes.

As a tester I am constantly making assumptions. This is an unfortunate but necessary part of my work. Where possible I try to avoid assumptions and drive to obtain specific parameters when testing. Sometimes, particularly early on in a piece of development, it is not possible to explicitly scope every aspect of the project. To avoid "scope paralysis" and put some boundaries in place so that testing can progress, it is sometimes necessary to make assumptions about the required functionality and the environment in which it will be implemented and used.

These assumptions could relate to the users, the implementation environment, application performance or the nature of the functionality. e.g.:-
  • It is assumed that all servers in a customer's cluster will be running the same operating system

  • It is assumed that the user will be familiar with database applications and related terminology

  • It is assumed that the customers will have sufficient knowledge to set up a clustered file system so our installation process can be documented from that point onward

  • Given a lack of explicit performance criteria it is assumed that performance equivalent to similar functionality will be acceptable

  • It is assumed that the function will behave consistently with other functions in this area in terms of validation and error reporting
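Some assumptions of this kind can be promoted from prose into an executable check, so that a violation surfaces as a failed check rather than a surprise in the field. A minimal sketch of the first assumption above, turned into an assertion (the function name and node data are hypothetical, not part of any process described here):

```python
# Hypothetical sketch: make the "all cluster servers run the same
# operating system" assumption explicit and checkable.

def check_uniform_os(node_os_reports):
    """Return the shared OS name if every node reports the same one.

    node_os_reports: dict mapping node name -> reported OS string.
    Raises AssertionError if the assumption does not hold.
    """
    os_names = set(node_os_reports.values())
    assert len(os_names) == 1, (
        "Assumption violated: cluster nodes run different "
        "operating systems: " + str(sorted(os_names))
    )
    return os_names.pop()

# Example usage with made-up node data:
nodes = {"node1": "CentOS 5.5", "node2": "CentOS 5.5", "node3": "CentOS 5.5"}
print(check_uniform_os(nodes))  # prints the shared OS name, here CentOS 5.5
```

A check like this, run as part of installation or test setup, documents the assumption in a form that cannot silently go stale in the way a sentence in a requirements document can.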

I don't see anything wrong in making assumptions, as long as we identify that this is what we are doing. As part of our testing process I encourage testers in my organisation to identify where they are making assumptions and to highlight these to the other stakeholders when publishing the agreed acceptance criteria for each story. In this way we identify where assumptions have had to be made, allowing them to be reviewed and the risks involved in making them to be assessed. We identify implicit assumptions and expose them as explicit constraints, gaining confirmation from the product owner and/or customer to give ourselves confidence that the assumptions are safe.

Despite this process of identification and review, I recently encountered an issue with a previously made assumption. This highlighted the fact that simply identifying and reviewing assumptions during the development of a piece of functionality is not sufficient. Once you have made an assumption during the development of a function, in essence you remake that assumption every time you release that same functionality in the future until such time as:-

  • You cease to support the functionality/product. No more function, no more assumption - job done.

  • You change the functionality and review the assumptions at that point. At this point I encourage my team to re-state any assumptions made about the existing functionality for re-examination. A recent example involved our import functionality. As part of an amendment to that functionality the tester stated the assumption that an existing constraint on the import data format would still apply when using the amended software. We questioned this and, after conferring with the customer, established that it was no longer a safe assumption given the way that they wanted to implement the new feature. In this way the explicit publishing and examination of a long-held constraint helped to avoid a potential issue that would have affected the end customer.

  • You get bitten because the assumption stops holding true. This last alternative happened to me recently. As part of a functional development a couple of years ago, some assumptions were explicitly stated in the requirement regarding the nature of the data used in that function. Over the course of the next two years the customer base was extended and the range of data sources for the functionality grew. As no extensions to the functionality appeared necessary to support the new use cases, no further development was done and the assumptions were not revisited. The environment in which the product was being used changed, rendering the assumption invalid and resulting in an issue with a specific set of source data. The problem that manifested itself was very minor, actually resulting from a problem in the application that the data was sourced from, but it did highlight the dangers involved in making assumptions and not reviewing them. I've since altered the way in which assumptions are documented during our development process to allow for easier identification and review in future.

Assumptions are easy to make. They are even easier to remake, every time the feature in question is re-released. Identifying and confirming assumptions at the point of making them is a good step, but it is still a risky approach. Assumptions are static in nature and easy to forget. Customer environments, implementation models and usage patterns change much more quickly, and forgotten assumptions can become dangerously redundant if not constantly reviewed. I'll be improving my process of assumption documentation, examination and re-examination in the coming weeks. Is this a good time to review what assumptions you've made in the past that are still being made? It may do you some good to stand up and confess.
