Sunday, 23 September 2012


The Problem with Crumpets - on Information and Inconsistency

My 3-year-old daughter has a milk allergy. I'm not talking about an intolerance, although those can be pretty bad; I mean a full-on allergic histamine reaction to any dairy products either ingested or touching her skin. When we tell other parents about this, a common sentiment is that they can't imagine how they would cope in such a situation. But we do cope, just as many other parents cope with similar situations and, sadly, with other conditions that are much more severe. While this was a significant hurdle for us to tackle in the weeks after we found out, over time we've adjusted our own behaviour to take Millie's condition into account, to the extent that much of the time it is no longer at the forefront of our minds.

Accidents aside, when we do encounter problems they can usually be attributed to one of two situations:
  1. A lack of information about a product
  2. Inconsistency in a product with our expectations

The Known Unknowns

Lack of information hits us when we cannot tell whether a product contains milk or not. Whilst food labelling in the UK is, I have to say, excellent, we do still encounter situations at home and abroad where we cannot be sure whether a foodstuff contains dairy. This is incredibly frustrating when we are trying to feed my daughter. She is a great eater and will try most things, so it saddens us when we are unable to give her things that are probably perfectly safe because we don't have the information on the ingredients.

Whilst very frustrating, lack of information is not specifically dangerous. We are conscious of our absence of knowledge and can take steps to improve it, or adopt a low-risk strategy in response. This usually involves disrupting a shop assistant's otherwise peaceful day to fetch packaging and read out long lists of ingredients. Sometimes, as a last resort, it involves just ordering my daughter chips / french fries.

Almost as bad as the complete absence of information is the situation where allergy information has been provided, but it has clearly not been considered under what circumstances such information might be required. Restaurant allergy lists that mark entire meals as containing dairy when actually it is only the dressing on the side salad, or that list all of the ingredients the restaurant uses but leave an almost impossible task of mapping these back to the meals on the menu, are prime examples. Information that is not available in the appropriate format or location when it is required can be as bad as no information at all. Burying the failings of your system deep in the documentation and proudly pulling this out shouting 'RTFM' when your users raise support requests is about as sensible a strategy as telling customers standing in your restaurant that your allergy list is only available on your website (this has happened to us).

My key point here is that a lack of information, or poor-quality information, may not directly cause mistakes, but it certainly creates frustration and extra work. If your product is not intuitive to use and has poor user documentation then your customers may not necessarily be getting themselves into trouble, but they will have to work harder to find out how to achieve their goal. Your support desk is probably busier than it needs to be answering questions, just as my wife and I use up shop assistants' time running around reading packaging. Alternatively, customers might act out of frustration, plough on regardless and get themselves into trouble. Again the result is likely to be a costly inquiry to your support team.

The Danger of the Unknown Unknowns

A potentially bigger problem that we face is inconsistency. When products, product ranges or companies are inconsistent in their use of dairy, it can have grave consequences for my daughter. At least with a lack of knowledge we are aware of our situation. When we encounter inconsistency we may not possess a similar awareness, instead falsely believing that we are acting from a position of knowledge, which is far more problematic. Some examples:

  • Asda brand crumpets are dairy free but Marks and Spencer's are not (both are UK supermarkets).
  • Jammie Dodger standard size biscuits contain no dairy, but the smaller snack size versions contain milk.
  • Heinz ketchup does not contain milk but Daddies ketchup does.
  • Hellmann's original mayonnaise has no dairy but the reduced fat mayonnaise contains cream (yes, they honestly add cream to reduce the fat content).
  • McDonald's in the UK have a full allergy list in store plus a kids' meal that is dairy free, thereby providing a safe (if less than appealing) food option when travelling; McDonald's in France have no allergy list and no dairy free meal options - even the burger buns contain milk.
  • Some of the serving staff at TGI Fridays in my home town are aware of their allergy list but some are not, and so do not consult it when suggesting safe dairy free options.

As you can probably tell, all of these are examples we've personally encountered, with varying degrees of disaster. It is when we have encountered situations of inconsistency that Millie has been most at risk. We act, assuming ourselves to be in a position of knowledge, yet that assumption is incorrect. The impact can vary from having to disappoint our daughter that actually she can't have the meal/treat we just promised, to her having a full-blown allergic reaction after eating a crumpet that her grandmother mistakenly believed to be safe.

The key point here is that in the absence of information, customers may still act out of frustration, but they will be aware that there is an element of risk. With inconsistent behaviour that awareness of risk may not be present. As testers, a key part of our job is to understand the context in which users are using the system and the associated behaviours that they will expect. Michael Bolton recently wrote a post extending the excellent HICCUPPS mnemonic of consistency heuristics, which help us to consider the different consistency relationships that might exist for our product.

Some ways that I have used to consider other viewpoints in our testing:

  • Developing personas and identifying the products and terminology that those personas will associate with yours.
  • Sometimes the context in which you are looking for consistency is not obvious, and some care must be taken if the appropriate oracles are to be identified. I have a friend who once deleted a lot of photographs from his camera when he selected the 'format' option, thinking it would allow him to format his photos. In the context of the SD card as a storage device the word 'format' has one meaning, consistent with computer hard disks and other such devices; in the context of photography 'formatting' has quite another meaning, with which the behaviour was inconsistent.

  • Researching other products associated with your market and using these as oracles in your testing.
  • This may change over time. My organisation has historically worked in the database archiving space and a lot of our testing oracles have been in that domain. As we have grown into the Hadoop and Big Data markets, a new suite of associated products and terminologies has started to come into our testing.

  • Questioning functional requirements, or requirements in the form of solutions.
  • Try to understand not only the functionality required but also the scenarios in which people may need to use it. As I wrote about here, using techniques such as the 5-whys to understand the situations that prompt the need for a feature can help to put some perspective around the relationships that you should be testing and to identify the appropriate oracles.

  • Employing team members with relevant knowledge of the market in which your product is sold.
  • As I wrote about in this post, it is a great idea to develop a skills matrix of relevant knowledge and to populate the team with a mix of individuals who can test from different perspectives. Testers can share their knowledge of the expectations of the roles that they have insight into and help to construct more realistic usage scenarios.

Consideration of the Situation

Most products from supermarket bakeries carry somewhere on the packaging 'may contain nuts or other allergens'. If the requirement for food labelling is that it warns of the potential presence of allergens then this label does achieve that, and would pass a test of the basic criteria. As a user experience, however, it is extremely frustrating and results in our not buying perfectly suitable products, making time-consuming requests to shop staff, or taking unnecessary risks as customers. I'm sure that the restaurant allergy lists that I referred to above would look very different if the designers had tested the scenario of actually trying to order a meal for someone suffering from an allergy, rather than just delivering the functional requirement of listing all of the allergens in their food.

A critical testing skill is the ability to be considerate of the situation and experiences of the users and their resulting expectations. Delivering and testing the required functionality is only one aspect of a solution. In addition to testing the 'what' we should also consider 'why' and 'when' that functionality will be required and the situations that demand it. If customers cannot use your features without making mistakes or hunting for further information, due to missing information and inconsistencies with their expectations, then neither your customers nor your support team are likely to thank you for it.

As a daily user of allergy labelling it is very clear to me which companies and products aim only to meet the basic requirements of food labelling, and which go beyond this to provide consistency across their range, clear labelling and useful additional information at the point at which it is required. Needless to say, it is these latter organisations and products that we seek out and return to over and over again.


Thursday, 13 September 2012


Starting Early 2 - Internship Review

The end of August marked the completion date for Tom, our intern. As I wrote about in a previous post, we'd offered Tom an internship placement with my team during his summer break. He spent 8 weeks with the company, the first two spent job shadowing and doing work experience, followed by a 6-week individual test automation project.

Building Foundations

Over the first two weeks we spent time introducing Tom to the principles that we followed in developing our software, and the importance of testing to our success. We spent time on both specific development and testing concepts and on developing more general business skills to support these. Watching Tom struggle through the first week trying to keep his eyes open after lunch reminded me of the shock to the system that I felt when starting full-time work after a student lifestyle. I like to think that it was no reflection on his level of interest in what he was doing.

A critical and often overlooked factor in successful testing is the ability to communicate effectively with the business. This is a skill area that graduates aren't necessarily exposed to through their academic studies, so I spent some time with Tom on making presentations, attending meetings and managing emails. Tom practised these skills by researching and presenting 'Testing in an Agile Context' to the team.

An Agile Internship

In the subsequent weeks we split Tom's test automation project into a set of user stories, each delivering value to the test team. One of our senior developers helped Tom with creating a set of unit tests and a continuous integration build. The other testers and I helped him with creating his testing charters and reviewing his testing. I introduced him to ideas around exploratory testing, using great articles by Elisabeth Hendrickson and James Bach to introduce essential principles.
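
The post doesn't record which language or tools Tom's project used, so purely as an illustration, here is the kind of story-sized unit test that might sit alongside such a project in a continuous integration build (the function and its behaviour are hypothetical, sketched here in Python):

```python
# Hypothetical example: a small helper with unit tests, the kind of
# increment that delivers value and runs on every CI build.

def parse_charter_line(line):
    """Split an 'area: mission' test charter line into its two parts."""
    area, sep, mission = line.partition(":")
    if not sep:
        raise ValueError("charter line must contain ':'")
    return area.strip(), mission.strip()

def test_parses_area_and_mission():
    assert parse_charter_line("Import: explore large CSV files") == \
        ("Import", "explore large CSV files")

def test_rejects_line_without_separator():
    try:
        parse_charter_line("no separator here")
    except ValueError:
        pass
    else:
        assert False, "expected a ValueError"
```

A test runner such as pytest would discover both tests automatically, so a breaking change fails the build rather than reaching the rest of the team.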

I think the style of work and the level of interaction among the team was very new to Tom. He relished the working environment and took to his tasks with enthusiasm and diligence. He completed the initial story well and went on to deliver a second to add additional valuable functionality.

Wrapping Up

Tom wrapped up his internship with a presentation on his experiences to our VP. He had had a fantastic time and learned a huge amount. Tom's feedback on the team was that, despite the fact that everyone was obviously busy, they always had time to help him. This is a really important part of our culture, so I'm pleased that in his short time, working on a relatively low priority project, Tom still developed this perception. He felt that we were a great company and just the sort of place that he would like to work in future.

An Educational Gap

Over the summer I hope that Tom has developed a solid understanding of agile methods and the importance of testing in a successful software company. Based on conversations with Tom it was apparent that these were subjects that suffered from limited coverage in his university course. According to Tom, on his group development project no marks were attributed to the error handling or stability of the delivered software. Demonstrating that the final solution had been tested was not a requirement for the project. In my earlier post I lamented the lack of exposure to testing as a career option in universities. The lack of any requirement to demonstrate testing at all is a more fundamental concern. One of the biggest problems I've seen in software that I have tested has been that validation and error handling have been secondary considerations after the initial functionality has been written. Not every graduate programmer will adopt such an approach (just as not every tester measured on their bug counts will focus on raising arbitrary cosmetic issues), but this relies heavily on the integrity of the individual not to slip into bad habits. How can we expect anything other than a trend towards delivery of the happy path functionality alone when it is clear that this is exactly the approach that is promoted in university projects?

Tom returns to his final year with a good understanding of testing and agile approaches, and a copy of Gojko Adzic's 'Specification by Example'. When questioned over whether his experience would assist him in his final year he was unsure, given the lack of exposure to agile in the university CS syllabus. I am more hopeful - I subsequently persuaded him that an agile-style approach to his dissertation project on AI modelling, delivering incrementally more complex models, would be an ideal tactic to ensure he did not overrun. This is obviously a small pebble in a large pond, but if more companies offer these placements and expose students to the methods and skills being used in commercial software development, along with the importance of testing, then maybe the quality of our commercial applications will benefit as these students progress into their careers.

Monday, 3 September 2012


A contrast of cultures

Whilst travelling back on the train from UKTMF in January I happened to sit opposite another of the conference attendees, Steve. We got chatting and I found that Steve ran the BA and testing functions for the insurance company Unum, and we realized we had a common acquaintance: a tester who had worked in a team I ran in a previous role now worked for Steve's department. As we talked about our common acquaintance and our own jobs it soon became apparent that, although we both ran testing operations, the cultures and approaches in our respective teams were significantly different. I described to Steve how we operated in an agile environment with high levels of collaboration and discussion and an exploratory approach to testing backed by high levels of automation. Steve's environment, on the other hand, was one of a classic staged approach with extensive documentation at each stage and heavy use of scripted manual testing. As we talked, an idea started to form of an 'exchange' scheme, with members of each team visiting the other to examine their processes and learn more about their respective cultures.

An Exchange Visit

A few weeks later one of my team spent the day in Unum's Bristol offices. She spent time with testers and managers in the company learning about their approach to testing and the processes and documentation supporting it. She returned from the day with a wealth of information and a hefty set of sample documentation. Our whole department attended a session where she presented her findings from the day. As Steve had described, she explained that the process was very formal, with a strong reliance on scripted manual testing in HP Quality Center. What was also clear was that Steve's team took quality seriously and were achieving very high levels of customer satisfaction with their approach. Legislation was a key driver of requirements, resulting in long, predictable feature timescales. The fact that their feature roadmap was fixed months in advance allowed a more rigid, documented approach to be successful. The long turnaround times on new features were both accepted and expected by the business.

The return trip

One week later we received a visit from one of the senior testers from Unum, Martin. He was a very experienced tester within the organisation, having moved to testing from a role in the business 15 years earlier (Steve later explained that around 4 in 5 testers they recruit are hired internally from the business). I spent the morning with Martin discussing our approach to testing and explaining our use of:
- continuous integration
- automated acceptance testing
- collaborative specification with examples
- thread based exploratory testing
before one of my team took him through some examples of our exploratory testing charters.
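
To give a flavour of what 'collaborative specification with examples' means in practice, here is an invented illustration (the feature and the figures are made up, loosely themed on our database archiving domain): the concrete examples agreed in discussion with the business become a table that a single automated acceptance test checks.

```python
# Invented illustration of specification by example: business-agreed
# examples captured as data, checked by one table-driven test.

def archive_eligible(age_days, legal_hold):
    """A record may be archived once it is 30 days old,
    unless it is under legal hold."""
    return age_days >= 30 and not legal_hold

# Each row is one example from the specification discussion:
# (age in days, under legal hold?, expected outcome)
EXAMPLES = [
    (29, False, False),  # too recent to archive
    (30, False, True),   # boundary case: exactly 30 days old
    (90, True,  False),  # old enough, but held back
]

def test_agreed_examples():
    for age_days, legal_hold, expected in EXAMPLES:
        assert archive_eligible(age_days, legal_hold) == expected
```

Because the examples are data rather than prose, adding a newly discussed case is a one-line change, and the boundary rows double as living documentation of the rule.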

Martin was really interested in our approach, and I think the success we were achieving with very lightweight documentation was a real eye-opener. He appreciated the fact that our approach gave us flexibility to pursue new features and change our priorities quickly, allowing us to be competitive in fast moving markets.

We discussed whether a more lightweight agile approach might be suitable for some of Unum's projects. Our visitor's gut feeling was that they would struggle with larger projects due to the lack of management control through the documentation and surrounding metrics, and would most likely have to trial it on some smaller developments. Bugs for them, for example, were an important measurement of both tester and developer efficiency. The fact that our team often didn't raise bugs, preferring a brief discussion and demo, was not something that would have fitted well. I can understand this; however, a potential pitfall of this approach is that small-scale projects are likely to be short term with limited scope, yet the biggest asset of an agile approach is delivered through an ongoing process of continuous improvement. The greatest benefits that I've seen through agile have come from a maintained commitment to the principles of collaboration, team autonomy and, usually, test automation that underpin it. Doing short-term 'trials' of an agile approach would be selling the ethos short. Martin and Steve both understood this, and felt that it would hinder their adoption of agile methods.

Summing up

This week Steve was kind enough to visit our office to review the exchange and discuss any future steps. I plan to visit another of Unum's offices later this year to learn about their performance testing, and Steve has suggested to his development team that they consider sending one of their number to visit our team as well.

It may seem counter-intuitive that two teams with such different cultures would benefit from such an exchange, however I have found it really useful. I can understand why the differences exist between our respective organisations, and there is still a huge amount that we can learn from each other. Theirs is an older company with proven success with a mature methodology. Given their historical success they are naturally wary of new methods, but doing an exchange like this shows that Steve and his team have an open mind about different approaches and a pragmatic view of their ability to adopt them.

The biggest lesson that I have taken away from the exchange was that, whatever methodology people are working with, great teams can and will deliver great software. Having worked on testing projects in more traditional methodologies before, this was more of a timely reminder to me than a new idea. Having now worked in a successful agile environment I know that my preference will always be for a leaner, more iterative approach. Steve's department may be doing things 'old school' but they are doing it in a positive and professional way, which means there is a lower incentive to change. I would rather work for a department like Steve's than for an organisation that has adopted agile methods as a knee-jerk reaction to failures caused by bad management and cultural shortcomings.
