Friday, 24 June 2011

Small mercies - why we shouldn't knock "it works on my machine"

I go to quite a few test events around the UK: conferences, user groups and meetups. A thread of conversation that is a perennial favourite at such events is the ridiculing of programmers who, when faced with a bug in their code, claim:
"It works on my machine"
Testers revel in the pointlessness of such a statement when faced with the incontrovertible evidence that we have amassed from our tests. We laugh at the futility of referencing behaviour on a development machine as compared to the evidence from our clean server and production-like test environments. Surely it is only right that we humiliate them for this? Well, I say one thing to this:

At least they've tried it.


At least the programmer in question has put the effort in to test their code before passing it to you. Of course this should be standard practice. More than once, however, I've encountered far worse when discussing bugs with the programmer responsible:-
"I haven't tried it out but let me know if you get any problems"
"According to the code it should work"
"I can't think of any reason why this should fail"
"You've deliberately targetted an area of code that we know has issues" (I'm not joking)
Programmers work in development environments. These are usually a far cry from the target implementation environment, hence the need for test systems (and possibly testers). The development environment is the earliest opportunity that the programmer has to run the functionality and get some feedback. If the alternative is to waste time pushing untested functionality into a build and a test harness and use up valuable testing resources on it then I'm sorry, but I will take "It works on my machine" over that any day.

Copyright (c) Adam Knight 2011 a-sisyphean-task.blogspot.com Twitter: adampknight

Tuesday, 21 June 2011

Be the We - on doing more for the team

A turn of phrase that I've heard many times working in software development teams goes like this:-
"We should really be doing ..."
followed by some practice or process that the speaker feels would improve either their job or the team's development activity as a whole. In this context the "We" enjoys a level of mysterious anonymity more commonly reserved for the "They" of "I'm surprised they allow that" and "I can't believe that they haven't done something about it" fame that are the cause of society's ills. The implication is that, by raising the issue as a "We" problem, the speaker has upheld their end of things and the failure to implement the behaviour in question is now a team responsibility.

Who is the We?


So who is the mysterious "We"? I'll tell you: it's you, or it should be. It's me, or I hope it is. It is one of the people I have worked with in the past who have stood out for taking action, over and over again, on their own initiative to improve the team environment in which they work. It is the person who watches my back and offers to help when I get too busy, as I do for them. It is the person I want in my team and look for whenever I read a CV.

I review hundreds of tester CVs as part of my job. All of them have testing experience. All of them. Most of them contain the following:
  • a list of the projects they've worked on with project descriptions
  • lists of the technologies making up the environment that the software was implemented in
  • a list of the testing activities performed on each project
If this is the extent of the content then the CV will usually not get considered for even a quick phone interview. Testing experience and tool knowledge alone are not enough. I am a strong believer in taking a team approach to process; however, relying on the team's achievements to forward your career when you are not an active participant in those achievements will not get you very far. Put simply, if I can see no evidence that an individual has acted on their own initiative to improve the working lives of themselves and their colleagues then they won't fit into my organisation. We cannot afford to have people in the team who won't step up and be the "We" when we want to improve ourselves.

But what can I do?


Sometimes it is hard to know where to start, but if you regularly find yourself thinking "It would be so much better if we did this ...", then you have a good starting point. Don't sit around and wait for someone else to do it, tackle it yourself.
  • It would be great if We had some test automation
  • Great - do some research, brush up your scripting and get started. If you don't know scripting or tools, there's nothing like having a target project to work on to develop some new skills (there's a minimal sketch of the kind of starting point I mean after this list).
  • We really should try some exploratory testing sessions
  • Again - fantastic idea, book yourself out some time, do some reading on the subject and get cracking. As I wrote about in my post on using groups to implement change, once you demonstrate the value you'll get some traction with other team members and bingo, you've introduced a great new practice to the team.
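To make the first of these concrete, here is the kind of thing I mean by just getting started. This is a minimal sketch only - run_query.sh, canned_query.sql and the output files are placeholders rather than anything from a real project - but even a few lines of shell that run the product against a canned input and compare the result to a known good output can be the seed of an automation suite:

  #!/bin/sh
  # Minimal smoke test: run a canned query and compare it against a known-good result
  ./run_query.sh canned_query.sql > actual.out 2>&1
  if diff -q expected.out actual.out > /dev/null
  then
      echo "PASS: canned query matches expected output"
  else
      echo "FAIL: canned query output differs from expected" >&2
      exit 1
  fi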

The opportunities will depend on your context, but even (and maybe especially) in the largest and most rigid of processes there are opportunities to improve through removing inefficiencies or improving the quality of information coming out of your team. The costs are small, maybe some of your own time to read up and learn a new skill. The benefits are huge. You'll earn the respect of your colleagues, improve your chances of promotion and, more importantly, become really good at what you do. Next time you find yourself thinking "Wouldn't it be great if we did ...", try rephrasing it as "Wouldn't it be great if I'd introduced ...". Think how much better that sounds, and start becoming the person that, when others say "We", they mean you.

Copyright (c) Adam Knight 2011 a-sisyphean-task.blogspot.com Twitter: adampknight
[Image: Dog team at the Seventh All Alaska Sweepstakes]

Saturday, 4 June 2011

Follow the lady - on not getting tricked by your OS when performance testing

Recently my colleagues and I were involved in working on a story to achieve a certain level of query performance for a customer. We'd put a lot of effort into trying to generate data which would be representative of the customer's for the purpose of querying. The massive size of the target installation, however, prevented us from generating data to the same scale, so we had created a realistic subset across a smaller example date range. This is an approach we have used many times before to great effect in creating acceptance tests for customer requirements. The target disk storage system for the data was NFS, so we had mounted our SAN on a gateway Linux server and shared it out over NFS to the application server.
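For anyone not familiar with this kind of plumbing, the setup looks roughly like the following. It is only a sketch - the hostnames and paths are illustrative rather than our actual configuration - but the shape is the same: the SAN-backed directory is exported over NFS from the gateway Linux server and mounted by the application server.

  # On the gateway Linux server: export the SAN-mounted directory over NFS
  # (illustrative entry in /etc/exports)
  /mnt/san_data    appserver(rw,sync,no_root_squash)

  exportfs -ra        # re-read /etc/exports and apply the exports

  # On the application server: mount the share from the gateway
  mount -t nfs gateway:/mnt/san_data /data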

False confidence

Through targeted improvements by the programmers we'd seen some dramatic reductions in the query times. Based on the figures that we were seeing for the execution of multiple parallel queries, we thought that we were well within target. Care was taken to ensure that each query was accessing different data partitions and that no files were being cached on the application server.

Missing a trick

We were well aware that our environment was not a perfect match for the customer's, and had flagged this as a project risk to address. Our particular concerns revolved around using a gateway server instead of a native NAS device, as this was a fundamental difference in topology. As we examined the potential dangers it dawned on us very quickly that the operating system on the gateway box could be invalidating the test results.

Most operating systems cache recently accessed files in spare memory to improve IO performance, and Linux is no exception. We were well aware of this behaviour, and for the majority of tests we take action to prevent it from happening; however, we'd failed to take it into account for the file-sharing server in this new architecture. For many of our queries all of the data was coming out of memory rather than from disk, giving us unrealistically low query times.
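A simple way to see whether this is going on (a rough illustration rather than our exact procedure - run_query.sh stands in for whatever executes the query under test) is to watch the page cache on the file-sharing server around a timed run, or simply to run the same query twice:

  # On the gateway (file-sharing) server, before and after a timed run
  grep '^Cached' /proc/meminfo    # file data currently sitting in the page cache
  free -m                         # the cached figure here tells the same story

  # On the application server, run the same query twice
  time ./run_query.sh             # first run - the data has to come over NFS off the disks
  time ./run_query.sh             # a much faster second run suggests it is now served from memory somewhere in the stack

If the cached figure grows to roughly the size of the data being queried, or the repeat run is suspiciously quick, the answers are coming out of memory rather than off the disks.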

Won't get fooled again


Understanding the operating system's behaviour is critical to performance testing. What may seem to be a perfectly valid performance test can yield wildly inaccurate results if the caching behaviour of the operating system is not taken into account and excluded. Operating systems have their own agenda in optimising performance, which can conflict with our attempts to model performance in order to predict behaviour when operating at scale. In particular, our scalability graph can exhibit a significant point of inflection when the file size exceeds what can be contained in the memory cache of the OS.
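One way to see that point of inflection (again a sketch only - generate_data.sh and run_query.sh are hypothetical stand-ins for your own data generator and query harness) is to time the same query against progressively larger data sets on the same hardware. Warm-cache timings tend to stay suspiciously flat until the data outgrows the cache and then jump sharply, whereas clearing the cache before each run gives the cold-storage figures that are the better basis for predicting behaviour at scale (see the aside at the end of this post).

  # Illustrative scaling loop - run as root if clearing caches between measurements
  for size_gb in 1 2 4 8 16 32 64
  do
      ./generate_data.sh $size_gb                      # build a data set of this size
      sync && echo 3 > /proc/sys/vm/drop_caches        # comment out to see the warm-cache knee instead
      /usr/bin/time -f "${size_gb}GB query took %e seconds" ./run_query.sh
  done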

In this case, despite our solid understanding of file system caching behaviour, we still made an error of judgement because we had not applied this knowledge to every component in a multi-tiered model. Thankfully our identification of the topology as a risk to the story, and the subsequent investigation, flushed out the deception in this case, and we were able to retest and ensure that the customer targets were met in a more realistic architecture. It was a timely reminder, however, of how vital it is to examine every facet of the test environment to ensure that we do not end up as the mark in an inadvertent three card trick.

(By the way - from RHEL 5 onwards Linux has supported a hugely useful method of clearing the OS cache, run as root:
sync
echo n > /proc/sys/vm/drop_caches
where n is 1 to drop the page cache, 2 to drop dentries and inodes, or 3 to drop both; the sync first flushes dirty pages so that they can be discarded. Sadly, not all OSs are as accommodating.)

Copyright (c) Adam Knight 2011 a-sisyphean-task.blogspot.com Twitter: adampknight
