Thursday, 24 November 2011

Recruitment By Example

I find it hard to recruit great testers...

Really hard.


Finding candidates with an enthusiasm for the job, combined with the aptitude and skills to excel in my organisation, proves extremely difficult. One of the greatest challenges for me is working with recruitment consultants and getting them to understand what I need in testing candidates. Working in a data storage/database environment means that certain skills are particularly valuable to us, SQL/database knowledge and Unix operating system experience with shell scripting being the primary ones.

Historically the biggest drain on my time when actively recruiting has been reading CVs and attending telephone interviews to filter out the candidates who list interesting skills on their CVs yet whose experience proves to be very limited. This problem can even extend to the candidate having merely tested systems built on such technologies without ever having interfaced with them directly.

I recently met a very enthusiastic recruitment consultant who had taken the time to attend a Tester Gathering in Bristol. Being in the market for a new supplier, I decided to give them a try, and with a new relationship in place I tried a slightly unconventional method of getting them to understand our testing requirements.

Specification By Example Applied

I'm a great believer in the power of examples to help drive our software development. Using examples helps to provide a shared understanding between stakeholders with potentially different domain languages and perspectives. I wondered whether a similar approach might help in our recruitment process to bridge the gap between the recruiter's understanding of my candidate requirements and my own.

I selected 3 CVs that I had received in the last 2 years from potential candidates:-
  • The first listed Unix and SQL knowledge, but offered no evidence of direct experience to back up this claim other than a list of technologies used in the implementation of projects they had worked on
  • One was from a candidate who appeared to have the relevant skills and experience but who, in a short phone interview, had not been able to back up this experience with any demonstration of understanding
  • The third was a CV which showed clear examples of projects in which she had directly interfaced with the relevant technologies and the role that she had performed in each project. In addition to this she had provided further examples of improvements she had introduced into her working environment to improve the testing effort and steps she had personally taken to improve her testing knowledge
The third CV was from one of the testers already in our team.

I invited the new recruitment suppliers into my office and spent about an hour working through each CV in turn. I highlighted the points in CVs one and two that set off alarm bells for me. These included providing long lists of technologies that made up the solution tested rather than ones the candidate had actually worked with, a focus on certification and bug metrics, and no evidence of a drive to self-educate or improve their work. I then went through the things about the third CV that marked its author out as an exceptional candidate. I suspect this was something of a tiresome process for the consultants, but they admitted at the end of the session that they had found it very useful, as they now had a much clearer picture of the candidates I was looking for. I also made it clear that I would rather have one good tester put forward for the job than ten poor ones.

There is always a risk with trying something different, and I wondered whether taking this approach might not endear me to my new supplier. In fact it was remarkably effective. Instead of twenty inappropriate CVs to filter through in the first few weeks, I received one, and a good one at that. Since then I have not received what I would describe as a poor CV from that agency, and soon afterwards I recruited a fantastic candidate that they provided. Obviously I cannot prove that the agency would not have delivered such a good service without the unconventional start, but I've certainly not had such a great start with any other agency. If you are going through the pain of trying to recruit good testing candidates, I recommend getting your agents in and working through some examples of the CVs that you are, and are not, looking for, to drive your recruitment specification.

Thanks to Rob Lambert for prompting this post. Image: http://www.flickr.com/photos/desiitaly/2105224119

The Thread of an idea - Adopting a Thread Based approach to Exploratory Testing

In all of our testing activities my approach is very much to treat our current practices as a point in a process of evolution. Here I describe an example of how we evolved our approach over time, moving away from the idea of test cases to a more flexible thread-based strategy that better suited our way of testing and our need to parallelise our testing activities.

Not the tool for the job

When I started in my current role the team was attempting to manage their testing in an open source test case management tool. The process around tool use was poorly defined and rooted very much in the principles of a staged test approach: planning test cases up front and then executing them repeatedly across subsequent builds of the software. This was a hugely unwieldy approach given that we were attempting to work with continuous integration builds. The tool was clearly not suitable for the way we were operating, with some testers managing their work outside the tool and bulk-updating it at a later point.

Needless to say as soon as I was able I moved to replace this system.

Deferring Commitment

I had enjoyed some success in previous teams with a lightweight spreadsheet solution backed by a database, and I thought a variation on this would be an excellent replacement, so I worked to introduce it to the team. To address the issue of not having a fixed set of test cases to execute, David Evans made the good suggestion that using an estimate of test cases might alleviate the immediate problem by allowing us to track based on estimated work. This allowed us to defer commitments on the actual tests performed until we were actually working on the feature, with the estimates naturally converging with the actual tests executed as the feature progressed. This did provide a significant improvement; however, I still found that tracking these estimates and counts was not giving me any useful information about the status and progress of testing during the sprint.

You're off the Case

The conclusion that we were rapidly arriving at was that counting test cases provided us with little or no valuable information. Many of our tests resulted in the generation of graphs and performance charts for review. The focus of our testing revolved around providing sufficient information on the system to the product owners to allow them to make decisions on priorities. Often these decisions involved a careful trade-off between performance on small data and scalability over larger data; there was no concept of a pass or fail with these activities. Counts of test cases held little value in this regard, as test cases as artifacts convey no useful information to help understand the behaviour of the application under test or to visualise relative performance differences between alternative solutions. It was more important for us to convey information on the impact of the changes introduced than to support meaningless measurements of progress through the aggregation of abstract entities.

In an attempt to find a more appropriate method of managing our testing activities, we trialled the use of Session Based Exploratory Testing, an approach arising from the Context Driven School of Software Testing and championed in particular by James Bach and Michael Bolton. What we found was that, for our context, this approach also posed some challenges. Many of our testing investigations involved setting up and executing long-running batch import or query processes and then gathering and analysing the resulting data at the end. This meant that each testing activity could have long periods of 'down time' from the tester's perspective, which did not fit well with the idea of intensive time-boxed testing sessions. Our testers naturally parallelised their testing activity around these down times to retest bugs and work on tasks such as automation maintenance.

The thread of an idea

Whilst not wanting to give up on the idea of a charter-driven approach to exploratory testing, it was clear that we needed to tailor it somewhat to suit our operating context and the software under test. My thinking was that the approach could work for us, but we needed to support testers working on parallel investigation streams at the same time. Wondering if anyone else had hit upon similar ideas of a "Thread-Based" approach, I did some research and came across this page on James Bach's blog.

I am a great believer in tailoring your management to suit the way that your team want to work rather than the other way around, and this approach seemed to fit perfectly. Over subsequent weeks we altered our test documentation to work not in terms of cases but in terms of testing threads. As discussed here, we drive to identify Acceptance Criteria, Assumptions and Risks during story elaboration, and these help to determine the scope of the testing activity. In a workbook for each user story we document these and then publish and agree them with the product management team. (Incidentally, an svn Apache add-in running alongside a wiki is an excellent way of sharing documents stored in Subversion.)

The initial thread charters are then drawn up with the aim of providing understanding of, and confidence in, the acceptance criteria and risks identified.

Initially each testing thread has:

  • A charter describing scope - we use Elisabeth Hendrickson's excellent guide on writing charters to help us
  • An estimate of time required to complete the charter to the level of confidence agreed for acceptance of the story. This estimate covers any data generation and setup required to prepare, and also the generation of automated regression tests to cover that charter on an ongoing basis, if this is appropriate.

We still use Excel as the primary store for the testing charters. This allows me to back it onto a MySQL database for storing charters and creating dashboards which can be used to review work in progress.
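As a rough illustration of the kind of structure this implies, below is a minimal sketch of a MySQL table and dashboard query of the sort that could sit behind such a spreadsheet. The table name, columns and status values are purely illustrative assumptions, not the actual structure of our workbook.

    -- Hypothetical table for storing thread charters behind an Excel front end.
    -- All names and columns here are illustrative, not our actual schema.
    CREATE TABLE thread_charters (
        id              INT AUTO_INCREMENT PRIMARY KEY,
        story           VARCHAR(50)  NOT NULL,  -- user story the thread belongs to
        charter         TEXT         NOT NULL,  -- scope description of the thread
        estimate_hours  DECIMAL(5,1) NOT NULL,  -- effort to reach the agreed confidence level
        remaining_hours DECIMAL(5,1) NOT NULL,  -- updated as the investigation progresses
        status          ENUM('not started','in progress','complete')
                            NOT NULL DEFAULT 'not started'
    );

    -- A simple per-story dashboard: thread counts and outstanding effort,
    -- suitable for reviewing work in progress.
    SELECT story,
           COUNT(*)                 AS threads,
           SUM(status = 'complete') AS threads_complete,
           SUM(estimate_hours)      AS estimated_hours,
           SUM(remaining_hours)     AS remaining_hours
    FROM thread_charters
    GROUP BY story;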



As the investigation of each thread progresses, information is generated to fill out the testing story:-

Tracking threads rather than cases allowed for greater flexibility in the format of our testing documentation - details of the testing activities performed are tracked in Mind Maps or Excel sheets.
  • Mind Maps are generated using XMind and hyperlinked from the Thread entry in the main sheet. For details on using Mind Maps to track testing activities I highly recommend Darren McMillan's blog. There is little more that I can add to his excellent work.
  • In terms of the Excel based exploration, testers can document their decisions and activities in additional entries under the charter in the main testing sheet, using a simple macro input dialog for fast input of new testing activity
  • More detailed results, data tables and graphs can then be included in additional worksheets.
  • Details of any automated test deliverables generated as a result of that charter are also reported at the thread level.

Reeling in the Benefits

From a Test Management perspective I have found significant benefits in working this way:-
  • Variety of testing output is supported and encouraged - rather than gathering useful information and then stripping this down to standardise the results into a test case management structure, we can provide the outputs from the testing in the most appropriate form for understanding the behaviour and feeding the relevant decisions
  • Managing at charter level allows me to understand the activities that the testers are doing without micro-managing their every activity. We have regular charter reviews where we discuss as a team the approach to each feature and pool ideas on other areas that need considering
  • An estimate of remaining effort is maintained - this can be fed directly into a burndown chart (see the sketch below). As an additional advantage, I've found that our estimates of outstanding work are much more realistic now that they correlate with specific testing charters, compared with my previous method of estimating based on the work remaining on the story.

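To show how charter-level estimates might feed a burndown, here is a minimal sketch that reuses the illustrative thread_charters table from the earlier example; the snapshot table and scheduling are assumptions for illustration only, not a description of our actual setup.

    -- Hypothetical daily burndown snapshot, reusing the illustrative
    -- thread_charters table from the earlier sketch.
    CREATE TABLE IF NOT EXISTS burndown_snapshots (
        snapshot_date   DATE PRIMARY KEY,
        remaining_hours DECIMAL(7,1) NOT NULL
    );

    -- Run once a day (for example from a scheduled job) to record the
    -- outstanding testing effort across all open charters.
    INSERT INTO burndown_snapshots (snapshot_date, remaining_hours)
    SELECT CURRENT_DATE, COALESCE(SUM(remaining_hours), 0)
    FROM thread_charters
    WHERE status <> 'complete'
    ON DUPLICATE KEY UPDATE remaining_hours = VALUES(remaining_hours);

Plotting snapshot_date against remaining_hours then gives the burndown directly.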
In the few months that we've been working with this approach it has slotted in extremely well with our testing activities. As well as making it easier for me to visualise progress on testing, it has freed the team to record their investigations in the form they feel is most appropriate for the task at hand. The process of reviewing charters as a team, using our shared test heuristics, has improved communication and really helped us to share our testing knowledge.

As with all things, the most appropriate approach to any problem is subject to context. If your context involves similarly long setup and test run times, or sufficient distractions to prevent the use of a session-based approach, this may be a possible alternative for moving away from the restrictions of a test case based strategy.

Image: http://en.wikipedia.org/wiki/File:Spool_of_white_thread.jpg

Eurostar Talk - An Evolution Into Specification By Example

The slides for my talk at EuroSTAR on "An Evolution into Specification By Example" should be available on the EuroSTAR site.

Further Reading from Me

Regarding some of the points raised, there is some further reading that may be of interest:-

On Reporting Confidence and Collecting Criteria, Assumptions and Risks

On some of the benefits of Writing your own test harness using a set of principles for test automation

Other Relevant Reading

Some external relevant links:-

Gojko Adzic's Book on Specification By Example and the summary points here

James Bach on Thread based Test Management and the Case against Test Cases

Some posts from Michael Bolton on potential problems with reporting "Done": "The Undefinition of Done" and "The Relative Rule and the Unsettling Rule"


For anyone who has attended the talk and wants to comment or ask any questions, please comment on this post and I'll be happy to discuss further.

Friday, 18 November 2011

Birmingham STC Meetup

On Tuesday I attended the STC Meetup in Birmingham, the highlight of which was a talk by James Bach on transitioning to Context Driven Testing from other schools. Videos of all the evening's talks, including my own lightning talk on where testers can learn about monitoring software's interaction with its environment, are available on Simon Knight's website.

Friday, 11 November 2011

Sleepwalking into failure

"I feel happier when I have come to the same conclusion as experts in my field independently by making my own mistakes"
This was a statement that I posted on Twitter last week, which drew responses from a few people, including a very witty one from citizencrane
"Don't skydive"
and a response from Lisa Crispin
"I'd just as soon not make the mistakes, but I guess it is a better learning experience!"
Whilst I agree that it is instinctively better to avoid making mistakes, I think that allowing ourselves to try new things and make mistakes is essential to learning. In the week after my comment I saw some interesting remarks on the same subject:-

this one from testerswain
For me, making mistakes while #testing is a great learning opportunity and more benefitial than gathering knowledge from books
and this one from Morgsterious
"Learning new ways is not a matter of being told, but one of risking and discovering in a loving, trusting context." - Satir
So there are clearly other folks out there who feel the same way. Ironically though, in my post the mistakes that I was referring to were not those that arise from trying new things, but from specifically not doing this...

A questioning approach


Rather than simply accepting the contents of textbooks and certification programs on how testing should look, I strive to question the validity of everything that I do. I try new ideas in place of existing practices that I think are founded on invalid principles. Trying new methods invariably introduces the risk of failure. I've tried new things and failed, but in every case I have learned valuable lessons. The failures that teach me little, and that I therefore regret the most, are the ones that arise through specifically not questioning what I do. These were the failures I was referring to in my original quote: the failures of adopting an approach because of accepted wisdom rather than its validity and appropriateness for the context; the failures of sleepwalking through false rituals and meaningless metrics. Consider the following scenarios:-
  1. I read an article by a respected tester highlighting the invalidity of an approach that I am still using, with no thought on my part for how valid it is. In this case I would naturally question my own approach and consider whether I could make any changes to improve my own testing based on what I have learned. I would also feel slightly embarrassed and potentially lose confidence in my own abilities and worth.
  2. I read an article by a respected tester highlighting the invalidity of an approach that I have myself already questioned and changed. Here I'd feel great. My confidence in my own understanding of my profession would have received a huge boost thanks to my own critical thinking being backed up by someone I have a great deal of respect for.
Both of these have happened to me, and I know which makes me happier. For example, when I read Michael Bolton questioning the concept of "Done" in this post and here, I felt justified in my own approach and in my writing on my own issues with this concept.

Evolving the profession


I believe that testing as a profession is constantly evolving. The testers that I hold in high regard are not those who espouse "best practices" and rigid models for success. Instead, the individuals I most respect are those who question the status quo and constantly look to improve testing as a profession (see the blog list on the right for a starting point).

While I respect the thought leaders in my field, many of the most vocal are offering consultancy services and reference books (Lisa Crispin, mentioned above, is a notable exception: whilst being a successful author she has also worked as a tester in the same agile team for many years). I believe that the art of improving testing should not be the preserve of consultants and authors. Even with the most principled of individuals there will inevitably be a different bias between those trying to differentiate their own service in a market and a tester in permanent employment striving to ensure the medium to long term applicability of the methods they adopt. I believe that as a professional I have a responsibility to contribute to the same questioning process and to try to improve my craft through my learning on a continuously developed product, which is necessarily different from contract engagements. This recent rallying call from James Bach for all testers to spend time thinking, writing and coming up with new testing ideas applies equally to testers in all types of testing situation. By adopting a questioning approach and writing about my own experiences, both successes and failures, I feel that I am at least making the effort to contribute to the testing body of knowledge, and not just sitting back and relying on the excellent individuals that I refer to to question my profession for me.

The alternative, where I neither question myself, nor read any material from any other respected testers to improve my own testing, well that really isn't an option that I care to consider.
