Tuesday, 25 June 2013

An Extreme Reaction

As one of the thousands of people who use the British public transport system to get to work, I like to think I have a pretty high pain threshold when it comes to problems and disruptions to my daily routine. Late trains and disruptions are a common part of my commute; however, a recent event elicited an emotional response from me far greater than the situation in isolation merited. The event, and my reaction, provided me with an interesting example of the mindset of technology users in the face of enforced changes, and of how our subjective perception of importance can shift dramatically in a context in which user options have been restricted. I also gained some compelling evidence of the importance of considering your environmental variables when testing.

We hate change

In November last year, ticket barriers were installed at each entrance to my local station, so that a valid train ticket was required to enter. Prior to this I had typically purchased my tickets from a machine in the station; however, if the queues were bad or the machines faulty, I could also go to the manned kiosk or buy a ticket on the train, risking only a mildly grumpy train manager. After the barriers were installed, my only option at the street entrance I used was a machine placed outside the automatic barriers. Going to the manned kiosk was still possible but required a long walk around the road to the main entrance on the other side of the station, and buying on the train was no longer an option.

My natural inclination was to be wary of the changes; however, after a couple of weeks of painless commuting I settled into things and reluctantly accepted that they weren't as bad as I had expected.

Environment Variables

One cold day soon after, running a little late, I was pleased to find the queue for the ticket machine empty. I selected my season ticket and tried to enter my code on the touch screen. I quickly realised that many of the characters, particularly those around the edge of the screen, were unresponsive. I had to press really hard to get anything to register, and even then it might be the character above or below the one I wanted, forcing me to use the equally unresponsive backspace and try again. Three characters in, I had already taken far longer than I should have needed, and it slowly dawned on me that entering my number was taking so long that I was at serious risk of missing the train ...

A little perspective

Now is probably a good time to add a little perspective. The train I catch is at 8.15am; the next one I can catch is at 8.31am. If I miss my train I'll catch the next one and arrive at work 16 minutes later, which has little impact on my day. Occasionally I am delayed at home and decide of my own accord to take the later train to avoid rushing. Sometimes I take it to catch up with a friend who travels on the same one; the day after writing this I took the later train to wait for and travel with a colleague who was running late. Taking the later train, whilst not my norm, is a perfectly acceptable alternative for me.

An irrational panic

...back at the ticket machine, the availability of a train 16 minutes later was the last thing on my mind. As I struggled to mash the digits of my code into the screen I entered a state of frustrated panic. This infernal machine was preventing me from catching my train. Prior to the barriers, machine failures occurred quite often and would have registered as nothing more than an inconvenience in my journey. The absence of any other options now meant that my only path to catching my train was this one machine. I started to shout at it; I find this always helps with errant technology, particularly if your main goal is accumulating 'funny looks' from passers-by.

With only a couple of minutes to go I was on the last character, a P, located right in the corner of the screen. It wouldn't work. No amount of pressing, even with the pressure of both hands, would get the letter P to register. I started to panic. I could hear the train pulling into the station. If a genie had popped up at that moment offering three wishes, my first and only one would have been for that @¥&£#% P to work so I could get my ticket. Realising that the cause of the malfunction was probably the temperature, I leaned over the screen and began frantically breathing on it to try to warm it up (I couldn't risk using my hands for fear of accidentally firing another button). To passers-by, my alternate breathing into the screen and pumping my fists on it must have looked like I was trying to bring it back to life through some kind of technological CPR. Just like in the movies, at the last moment my efforts finally paid off: the P registered and I got my ticket. Running through the barriers I shouted at the staff as I raced past, "THAT MACHINE DOESN'T WORK WHEN IT GETS COLD". Racing recklessly down the platform steps, I managed to get on the train with seconds to spare and collapsed into my seat in a flurry of anger and adrenaline-fuelled elation.

Rationality in reflection

On first reflection my actions were totally irrational and quite foolish. The sensible course of action would have been to calmly take the few minutes' walk around to the manned desk, buy a ticket and take the later train. I probably lost more productive time in arriving at work flustered than I gained in catching that train. So why was I so determined to catch it on this occasion? I think the reasons become more apparent if we examine the emotional context that was in place coming into the incident:

  • An unwanted change had been imposed
    I'd recently had some changes imposed on me which affected an established pattern of operation. Whilst as an individual I am quite receptive to change, and am often an instigator of change in my organisation, when a change impacts an established routine it is rarely welcome. I'm not alone in this; many people have a natural tendency to favour the current situation and established behaviours over change. This is referred to as a "status quo bias" and is associated with a tendency to place a higher emphasis on the potential losses of a change than on any corresponding potential benefits. When the change was announced I had started to mentally accumulate reasons why it would not be welcome. Even though, for example, I hadn't done it for months, I was still annoyed that the presence of the barriers would prevent me from taking my children onto the platform to see the trains. As soon as the problem occurred, phrases such as 'I knew it' and 'typical' started echoing around my internal monologue. If I'm honest, I was probably intentionally magnifying the severity of the problem to justify my previous objection to the situation.

  • I was not the beneficiary of the change
    All of the benefits of installing the barriers went to the train companies. These things were not there to make my life easier. My emotional position was that the change was going to be for the worse, and there were few obvious reasons for me to revise that position.

  • My options had been restricted
    Whilst I occasionally had problems with the previous system, I usually had the flexibility to work around them. As I mentioned, in addition to the machine I used to have the options of buying at the kiosk or on the train. These were no longer readily available to me, which placed a much greater emphasis on the machine working. The tendency to oppose any perceived restriction of one's personal autonomy is an extremely powerful bias known as "reactance", which applies whenever we feel that something is removing our personal freedom to choose.

  • The fault could have been detected
    This was probably the final straw in my over-reaction. This was a machine designed to operate outside in Britain, yet it didn't work in freezing temperatures. For most people this would be annoying; for a software tester it was exasperating. My whole frustrating encounter was the result of a singular failure to consider the possible environmental variables when testing the machine.

So the event occurred in an emotional context of: a strong set of biases against the recent change; a removal of alternatives; a piece of technology which hadn't been tested for its target environment; oh, and a disgruntled software tester.  
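As a tester, it is hard to resist sketching the kind of check that might have caught the fault before I ever met it. Below is a minimal, purely illustrative sketch in Python with pytest; the chamber and screen fixtures, the condition values and the key list are all my own assumptions, standing in for whatever hardware-in-the-loop harness the machine's manufacturer might actually use:

    import pytest

    # Conditions spanning the machine's real outdoor operating environment
    # in Britain, not just a comfortable lab default of ~20C.
    CONDITIONS = [
        {"temp_c": 20, "humidity_pct": 50},   # lab default
        {"temp_c": 30, "humidity_pct": 80},   # warm summer day
        {"temp_c": 0,  "humidity_pct": 90},   # freezing, damp morning
        {"temp_c": -5, "humidity_pct": 70},   # cold snap
    ]

    # Keys at the edges and corners of the screen, where responsiveness
    # degraded first in my encounter.
    EDGE_KEYS = ["P", "Q", "1", "0", "backspace"]

    @pytest.mark.parametrize("conditions", CONDITIONS)
    @pytest.mark.parametrize("key", EDGE_KEYS)
    def test_key_registers_in_environment(chamber, screen, conditions, key):
        # 'chamber' and 'screen' are hypothetical fixtures wrapping an
        # environmental test chamber and the touchscreen under test.
        chamber.set_conditions(**conditions)
        registered = screen.press(key)
        assert registered == key, (
            f"Key {key!r} failed to register at {conditions['temp_c']}C "
            f"and {conditions['humidity_pct']}% humidity"
        )

The point is not the specific tooling but the test design: the environment is treated as an input dimension of the test matrix, rather than an accident of wherever the test rig happens to stand.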

A Troublesome Business

The situation above may seem far-fetched; however, this is exactly the kind of emotional context into which my software, and many other business technologies, are delivered. With regard to the product I work on, our customers are often looking to replace an established product or process. The primary value in these implementations is usually achieving functionality equivalent to an existing system at a much lower cost. In this context the people involved in implementing, administering and using the system are rarely the beneficiaries of the value in the change. They are also likely to have an emotional investment in the existing process, given their experience of the technologies and associated tools involved.

Business software implementations often take place in the context of:

  • replacing an existing process - thereby exposing the risk of status quo bias toward the existing system
  • being imposed on the users rather than being their choice - engendering reactance-based emotions against the software before it is even switched on
  • restricting the flexibility of operation - again giving rise to reactance-based frustration at not having the flexibility previously enjoyed. Users can't, for example, stick a post-it note on a computer form or write in the margin of an HTML page

The result is that many low-level stakeholders can harbour strong biases against the implementation, of exactly the nature shown by my own experience. Whilst as a software vendor this is sometimes frustrating, it is also totally understandable and something that software developers should anticipate. Business software testers in particular need to consider the presence of negative bias in their implementations and examine the system accordingly.

Occasionally I see comments from testers of business software almost jealously bemoaning the way that Facebook and Twitter thrive with little testing and a volume of functional flaws that would be unacceptable for business use. What we need to remember is that software used in a social context is rarely imposed, and is usually adopted in the presence of a range of options (Twitter, Google+, Facebook, LinkedIn, Xing) from which the user has made a personal choice. This is much more likely to result in a positive emotional state when using the product, and a more forgiving approach to faults.

To counter potential negative feelings towards our business software we need to make sure the product and associated services are geared towards overcoming such a position. Users experiencing feelings of resentment towards our software are unlikely to enthusiastically digest every word of our documentation, so usability must be considered. Is the software consistent with the systems we are replacing, or self-explanatory if not? Is it easy to identify and recover from mistakes? As well as the functionality at hand, do the testers understand the greater goals of the customers? Documentation still needs to be accessible and searchable, and written from the perspective not only of achieving key tasks but of making necessary decisions, rather than as a flat reference structure. Technical support need to be helpful and willing to step users through their problems in the early stages, to develop positive customer relationships and nurture a culture of positive and skilled use on the part of the customer.

Of all the software implementations I have experienced, the ones I'm most proud of are those where we started out meeting some resistance to the product and turned this around into a situation where those users actively recommend it to others. Situations where initial resistance is overcome, through our diligent work to understand the customers' needs, test that the product allows them to meet these, and support them in doing so, create some of the strongest advocates of our software and company. The results are definitely worth it, as investment in startup companies can come on the back of references from existing customers; the future of an organisation can depend heavily on the quality of product delivered in the present and the ability to generate raving fans. Fail to understand the target environment, and fail to consider the emotional context that you are delivering into, and you may just end up with raving mad users.

Wednesday, 12 June 2013

Testing Big Data in an Agile Environment


Today my Testing Planet article on Testing Big Data in an Agile Environment went online on the Ministry of Testing site. The timing is apt, as I am preparing for a couple of talks on the subject in the coming months:

  • I'll be running a session at the UKTMF Quarterly Forum on 31st July, discussing the practicalities of testing big data and the challenges that testers face.
  • In October I'm also presenting a talk at Agile Testing Days entitled Big Data Small Sprint, on the challenges that we face trying to test a big data product in short agile iterations.

I'll try to post some more on the practicalities of testing a big data product, as it is a hot topic in software at the moment. I've previously hosted a Skype chat on the subject for testers working in big data environments to share their problems, and would be happy to consider something similar again - please comment if you are interested. If you are at either of the events above, please come and say hello - especially if you are facing a big data challenge yourself. For now, please take a look at the article; I'd be pleased to receive any feedback that you have on it.

Wednesday, 5 June 2013

Sticking to Your Principles

Running both the testing and support operations in my organisation affords me an excellent insight into the issues that are affecting our customers and how these relate to the efforts and experiences of the testers who worked on the feature in question. I recently had reason to look back and examine the testing done on a specific feature as a result of that feature exhibiting an issue with a customer. What became apparent in looking back was that, during that feature's development, I had failed to follow two important principles that my team have previously worked to maintain in our testing operations.

A dangerous temptation

When employing specialist testers within an agile process, one of the primary challenges is to maintain the discipline of testing features during the same sprint as the coding. Over the time that I've been involved in an agile process, maintaining this discipline has occasionally proved difficult in the face of external pressures, but I feel it has been key to our continued successful operation as a unified team.

During the development of a tricky area last year we found that the testing of a new query performance feature was proving very time-consuming. The net result was that we didn't have as much testing time as originally thought to devote to another related story in the sprint backlog. Typically in this situation we would aim to defer the item until a subsequent sprint, or to shift roles and bring someone else in as a tester on that piece. For various reasons, in this case we decided to complete the programming and then test in the following sprint.

Some reading this will be surprised that an agile team would ever consider testing and developing in separate sprints. Believe me, this is not the case for all teams who describe themselves as agile. A few years ago, at the 2009 UKTMF Summit, I attended a talk by Stuart Reid, "Pragmatic Testing in Agile Projects", in which he suggested that testing in the sprint subsequent to coding was a common shape for agile developments. Many testers that I have spoken to in interviews reinforce this notion, claiming to have worked in agile teams with scrums that were solely for the testing process, with a single build delivered into the testing sprint. In fact, one individual I interviewed from a very large company described a three-stage 'agile' process to me in which coding, test script writing and test script execution were all done in separate sprints.

The purpose of this post is not to criticise these teams; however, I do personally believe that this approach favours monitoring over responsibility, at the expense of the true benefits that agile can deliver. In my experience the benefit of testing in the same sprint is that we can provide fast feedback on new features and a focus on quality during development. Without this, developments can quickly exhibit the characteristic problems of testing as an isolated activity after coding. Even at the level of an individual feature, delaying feedback from testing until after the programming has 'finished' results in a significant change in the tester-to-developer dynamic: the problems reported by the tester are distracting from, rather than contributing to, the programmer's active projects, something I explored more here. Some teams may achieve success through delayed testing in isolated sprints; for our team it marks a retrograde step from our usual standards.

Unrepresentative confidence

The second lapse in principles arose in the completion of the story.

When the testing commenced it was clear that the functionality in question was potentially impacted by pretty much every administration operation that could be performed on the stored data. The tester in question worked diligently, exploring a complicated state model and exposing a relatively high number of issues compared to our other developments. A lot of coding effort was required to address these issues; however, this was done under the extra pressure of having estimated scant programming work for that story in the sprint in which it was tested, in the belief that it was essentially complete.

As I discussed in this post, I use a confidence-based approach to reporting story completion, to allow for the many variables that can affect the delivery of even the simplest features. At the end of the story in question, under my guidance, the tester reported high confidence in all of the criteria on the basis that all of the bugs they had found had been retested successfully. I did not question this at the time; however, in hindsight I should have suggested a very different report, given the nature and prevalence of the bugs that had been encountered. By the end of the sprint, all of the bugs that had been found were fixed. Reporting high confidence on this basis belied the number of issues that had been discovered and the corresponding likelihood of there being more.

To hijack a famous testing analogy: if you are clearing a minefield and every new path exposes a mine, there is a good chance that there are still mines in the paths you haven't tried yet.

This problem can arise just as easily through the arbitrary cut-off of the sprint timebox as through a finite set of prescribed test cases. If, after completing the prescribed period or test cases, there are no outstanding issues, it is hard to argue for further testing activity; however, it is my firm belief that test reporting should be sufficiently rich and flexible to convey such a situation. As I'd discussed with my team when introducing the idea, the reporting of confidence is intended to prompt a decision - namely, whether we want to take any action to increase our confidence in this feature. A high number of issues found during testing is sufficient to diminish confidence in a feature and merit such a decision, despite those issues being closed. In this case we should have decided to perform further exploratory testing, or possibly review the design. As it was, the feature was accepted 'as was' and no further action was taken.
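To make the idea concrete, here is a minimal, purely illustrative sketch in Python of a confidence report that factors in how many bugs were found, not just whether they are still open. The field names, thresholds and baseline are my own assumptions for illustration, not the actual format my team uses:

    from dataclasses import dataclass

    @dataclass
    class StoryConfidenceReport:
        criterion: str
        bugs_found: int         # total issues discovered during testing
        bugs_open: int          # issues still unresolved
        baseline_bugs: int = 3  # illustrative 'typical' count per story

        def confidence(self) -> str:
            if self.bugs_open > 0:
                return "low - open issues remain"
            # All bugs closed, but a high discovery rate still suggests
            # more mines in the paths we haven't tried yet.
            if self.bugs_found > 2 * self.baseline_bugs:
                return "medium - consider further exploratory testing or a design review"
            return "high"

    report = StoryConfidenceReport(
        criterion="feature behaves correctly under admin operations",
        bugs_found=11,
        bugs_open=0,
    )
    print(report.confidence())
    # -> medium - consider further exploratory testing or a design review

The crude threshold matters far less than the principle: the report should prompt a decision, and a story that generated far more bugs than usual should not come out looking the same as one that sailed through.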

Problems exposed

We recently encountered a problem with a customer attempting to use the feature in question. Whilst the impact was not severe, we did have to provide a fix for an issue which was frustratingly similar to the types of issues found during testing.

I'm aware of the dangers of confirmation bias here, and the fact that we encountered an issue does not necessarily indicate that it would have been prevented had we acted differently. We have seen issues arise from features developed much more in line with our regular process; however, there are some factors which make me think we would have avoided or detected this one by sticking to the principles described.

  • The issue encountered was very similar in nature to the issues found during testing; it was essentially a hybrid of previous failures, reproduced by combining the recreation steps for known problems
  • The solution to the problem, after a group review with the senior developers, was to take a slightly different approach which utilised existing and commonly tested functions. This type of review and rework is just the sort of activity that we would expect to happen if testing were exposing a lot of issues while the coding focus was still on that area, when rework would have been considered more readily

Slippery Slope

While this is something of a 'warts and all' post, I think the example highlights some of the dangers of letting standards lapse even briefly. It is naive to think that mistakes won't be made, and with short development iterations there is scant time to recover when they are. For this reason I think that a key to maintaining a successful agile development process is to identify lapses and increase efforts to regain standards quickly. As the name suggests, a sprint is a fast-paced development mechanism. In the same way that agile sprints can provide fast-paced continuous improvement, degradations in the process can develop just as quickly. Any slip in standards can quickly become entrenched in a team if not addressed, and I've seen a few instances where good principles have been lost permanently through letting them slip for a couple of sprints.
