Wednesday, 23 September 2015

Learning Letters

Being a parent gives you the wonderful opportunity of watching your children learn. As my older children have developed I've found that the range of subjects they are learning has quickly grown to include things that are unfamiliar to me, mainly because they have been introduced to the world since I was at school. This could be daunting, in that my children are exposed to the limitations of my own knowledge (the illusion of parental infallibility is easily shattered, for example by my cumbersome attempts at playing Minecraft). Nevertheless, I prefer to see it as an exciting opportunity to learn myself and to share with them the joy of learning new things.

Making Sense

One of the most interesting aspects of watching children learn comes when they start to learn how to read letters and numbers. Sometimes it takes seeing a system through the eyes of someone trying to learn it to expose flaws that aren't apparent to a person more familiar with it. Watching my children attempt to learn the symbols that make up their letters and numbers really brought home to me some of the illogical and unintuitive problems in our common symbology.

A great example happened recently with my middle son. We'd spent time learning all of his letters in a picture book, to the extent that he could recognise each letter without the pictures. I was somewhat surprised when I presented him with a pre-school story book and he couldn't identify the letter 'a'. When I pointed it out to him his response was even more surprising - he said "That's not an 'a'". I looked at the page and realised that he was quite right.

The 'a' in his story book looked like this:

Whereas the one in his letters book looked like this:

How could I expect him to know that the extension over the top of the letter here was meaningless, when in these two letters

A much smaller difference has a profound significance.

Another thing that all of my children have found hugely confusing is when characters differ only by reflection or orientation. 6 and 9, for example, can be very confusing, particularly when working with children's games such as bricks and cards which can be rotated in the hand. p, q, b and d are similarly often the last to be confidently learned.

And don't get me started on equivalent characters such as upper and lower case. So P is a capital p and S is a capital s, but a capital q is Q. What?

When you consider a child's learning journey it is hardly surprising that they get confused. We spend their very early years teaching them how to identify shapes by their properties, irrespective of their position or orientation:

  • a rectangle oriented horizontally or vertically is still a rectangle
  • a shape with three sides and three vertices is a triangle, irrespective of the relative lengths of the sides.

Then we introduce a language and numbering system using a series of symbols where properties are far less relevant. Characters with completely different properties can represent the same letter, and we can change a character into another one simply by rotation or reflection.

There is little logic in the system. The number of rules that we'd have to provide even to understand the basic alphabet and number characters used in English would be huge. Whilst the simple rules of character representation may be explicitly defined for our children - through letter books, number charts and the like - understanding the range of different ways that the characters in our alphabet can be represented is tacit knowledge. We build up our knowledge through example and experience, incrementally building our rule set around what constitutes an 'a' or a 'q' until we have sufficient rules to read most fonts that we encounter.

Even now on occasion I am presented with a character such as this,

and have to examine the context in which it is used to establish the letter represented. In this situation I depend on a set of in-built heuristics based on how the symbol is presented - e.g. is it in a word with other symbols that I recognise? - to identify what it is intended to represent.

I'm now pretty sure it's a 'T', or possibly an 'F', but there's still a little uncertainty. Is the word part of a phrase or sentence that makes the meaning clear?

Now the character represented is clear. So, unthinkingly, when reading the cover of this book I've applied a series of personal heuristics to identify the letter 'T'.

For the most part I am not aware of the depth of knowledge that I am tapping into when interpreting a new font or text. I would find it extremely difficult to construct an explicit set of instructions for a human or a computer to identify this character based only on my knowledge prior to seeing it.

Presenting our Software

I was recently reading this great set of tips for technical authors from Tom Johnson. One that really struck a chord with me was number 3 - "Developers almost always overestimate the technical abilities of their audience".

Developers often create products with a certain audience level in mind, and usually they assume the audience is a lot more familiar with the company's technology than they actually are.

As someone who works across the boundaries between development teams and support, I have a great affinity with this phenomenon. As we develop our software we unknowingly build rich and complex layers of tacit knowledge across the development team - knowledge of the system that we then rely on during development activities.

When working with my query system, for example, there is a wealth of knowledge within the relevant development team around the shapes of queries and how they are likely to execute against certain data structures. Some queries may be a natural fit for our software and execute in parallel, scaling across machines; others may force more restricted execution paths due to the SQL syntax used and the resulting stages of data manipulation required. These models, built up over time, support a level of understanding which rarely pervades beyond the walls of our development offices. When our customers first start to create and execute their queries they are typically not considering these things. Yes, they may start to build up this knowledge should a query not meet their expectations, as they work through explain plans and query documentation or work with a consultant to better understand the system. In my experience, though, this type of activity is most often based around solving a specific problem rather than constructing a deep knowledge of query execution behaviour.

Working with my support teams helps to maintain perspective on the levels of product expertise that exist among our user communities. This is not to say that we don't have very capable users; it is simply that developing and testing a product affords us a level of understanding to the extent that expert knowledge becomes second nature, and it can be hard not to code and test from this position of elevated knowledge. More than once I've seen referrals on system behaviour from the support team to a development group met with an initial level of surprise that the user is attempting to use the system in the manner described. With an open interface such as SQL providing high levels of flexibility over use this is somewhat inevitable. Given that some SQL is generated by other applications rather than through direct input, we can't necessarily rely on SQL being sensibly structured, let alone structured in the manner that our system prefers.
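
To make that last point concrete, here is a minimal Python sketch - entirely hypothetical and not drawn from our own product - of the kind of SQL a report-building tool might emit compared with what a person would type by hand. The generated_query helper, the table and the column names are all invented purely for illustration.

    # Hypothetical sketch: mimics the kind of SQL a report builder might emit,
    # with fully quoted identifiers, aliases and redundant parentheses.
    def generated_query(table: str, filters: dict) -> str:
        where = " AND ".join(
            f'((("t0"."{column}" = {value!r})))' for column, value in filters.items()
        )
        return f'SELECT "t0".* FROM "{table}" AS "t0" WHERE ({where})'

    hand_written = "SELECT * FROM orders WHERE status = 'open'"
    machine_made = generated_query("orders", {"status": "open"})

    print(hand_written)
    print(machine_made)
    # Both express the same intent, but the generated form is the one the engine
    # actually receives, and it may not be the shape the system "prefers".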

Making Tea

When I was at school a teacher gave my class a fascinating exercise - we had to describe how to make a cup of tea to an alien unfamiliar with Earth (who spoke perfect English, obviously). The teacher then went on to highlight our mistaken assumptions, such as assuming an alien would know how to 'put the kettle on', and the possibly amusing outcomes of such an instruction.

Naturally we wouldn't expect testers to have to work from such an extreme starting point of user experience. We do, however, probably want to maintain awareness of our own levels of tacit knowledge and try to factor this in when testing and documenting the system. For me it is about looking for gaps or inconsistencies in the feature set where we might unknowingly be glossing over the problems through our own knowledge.

  • Are there inputs that the user could consider equivalent yet which yield different results? SQL is awash with apparently equivalent inputs that can yield different results; for example, the difference between 1 and 1.0 might appear trivial, yet they can result in different data types in the system, with implications for query performance. The difference between these two characters

    can be as trivial to the user as the editor in which they typed their text, yet it can make a huge difference when copied into an application input (see the sketch after this list).
  • Are there very different features or workflows with only subtle differences in labelling? I was recently using a web system where the "Resources" tab took me to a completely different location to the "User Resources" link in the side menu. I had taken them to be the same and so failed to find the content I was looking for.
  • Are abbreviations or icons used without labelling or explanation? On an internal system that I recently started using there are two areas which are referred to by three-letter acronyms, one of which uses the same characters as the other but with the second and third letters reversed. My team and I still struggle to know which one we are referring to in conversation. In this post I recount a situation where my lack of familiarity with the 'post' icon commonly used in Android resulted in an embarrassing mistake and my rejection of a popular blogging app.
  • Is team-specific language exposed to the user in labelling or internal documentation? Internal terminology can leak out via various channels and confuse the customer. Our customers know the features according to the manuals and marketing literature, not necessarily the in-house terminology. Using team-specific terms in labelling or internal-facing documentation will result in inconsistency and confusion as those terms escape via logs and support channels.
  • Is the same value referenced consistently throughout, or is terminology used interchangeably? A big personal bugbear of mine is when I register with an application, entering my email address amongst other fields, and then on revisiting the site I am prompted for my "username". Hold on - I don't remember entering a username. Was I given a username? Should I check my emails and see if I was mailed a username? Or should I try my email address, as that is what I used to identify my account? But surely if the site wanted my email address it would prompt for that and not my "username", wouldn't it? Unless they are storing my email address in a username field on the database and forgetting the origin of the value when prompting for credentials.
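
As a concrete illustration of the first point in the list above, here is a small Python sketch - generic, and not tied to any particular product - showing how inputs that a user might read as identical can differ for the software receiving them: numeric literals that parse to different data types, and the ASCII apostrophe versus the 'smart' quotation mark that some editors silently substitute.

    import ast
    import unicodedata

    # 1 vs 1.0: a reader sees the same number, but a parser can assign
    # different data types, with knock-on effects for query behaviour.
    for literal in ("1", "1.0"):
        value = ast.literal_eval(literal)
        print(literal, "->", type(value).__name__)   # 1 -> int, 1.0 -> float

    # ASCII apostrophe vs the curly quote a word processor may substitute.
    for character in ("'", "\u2019"):
        print(repr(character), f"U+{ord(character):04X}", unicodedata.name(character))
    # "'" U+0027 APOSTROPHE
    # '’' U+2019 RIGHT SINGLE QUOTATION MARK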

When exposing an interface which shares characteristics or expectations with other systems, an important consideration is whether we need to test on the basis of generic knowledge and consistent terminology rather than application-specific knowledge or organisational jargon. Otherwise we risk a reaction similar to my son's on first encountering his "not an 'a'" when the software reaches the users: "that's not a username, it's an email address!"


John Stevenson - Tacit and Explicit Knowledge and Exploratory Testing

Bach/Bolton - Exploratory Testing 3.0

Markus Kuhn - ASCII and Unicode Quotation Marks

Wednesday, 15 July 2015

Room 101

My last post marked the 101st that I've published on this blog. Firstly I'd like to say a hearty thanks to you if you've read any of my posts before; I've had a great time writing this blog so far and getting comments and feedback.

Secondly, this seems like a good time to reflect on some of the posts here and on the nature of writing, speaking and presenting material to a professional community. A couple of recent events have prompted my thinking about ideas that people present, perhaps in talks or blog posts, that grow to become less representative of that person's opinions or methods over time, unbeknownst to the folk who may still be referencing the material.

A snapshot in time

At Next Generation Testing I had the pleasure of meeting Liz Keogh, who is an influential member of the Lean/Agile community and a great proponent of the ideas of BDD. We discussed a specific talk of hers from which I'd taken inspiration, applying her idea of fractal BDD projects to a different subject to write my Fractal Exploratory Testing post. Liz admitted to me that her ideas and understanding of the projects she was referring to had moved on since giving the talk. It had been a useful model to present an idea at the time, but the talk did not reflect Liz's current thinking based on her subsequent experiences. For me, however, her thinking was frozen in time at the point that I'd seen her present that talk.

I also recently had the opportunity to take some new testers into my team. As part of their induction I talked them through some of the principles and ideas that define us as a team. A couple of times I found myself presenting ideas that I'd written about previously, and which were working well when I put the introduction slides together a couple of years earlier, but which had since fallen out of use. The ideas were current when the posts were created but had not endured long-term acceptance outside of my own personal use.

Room 101

On UK television there is a show called Room 101. In it celebrity guests argue to have ideas, objects or behaviours committed to "Room 101", which represents banishment to a place reserved for our worst nightmares. As any regular readers of my posts will know, I'm a great believer in admitting our mistakes and being honest about our failures as well as our successes as a community. Having just completed my 101st post, it seemed appropriate to publish a 'Room 101' list of some of the ideas that, while not my worst nightmares, maybe don't reflect my current way of thinking, perhaps haven't been quite as successful as I was hoping, or simply weren't well written. So here are some of the posts that, if I'm honest with myself, are not quite as relevant or worthy of attention as I'd originally believed them to be.

Context driven cheat sheets - A Problem Halved

I’m a little gutted about this one because I truly believe in this approach and the value of it. The idea is that you generate a ‘cheat sheet’ akin to the famous one from Hendrickson, Lynsday and Emery one, but with entries specific to your own software. This worked really well in my company for a time, however I simply couldn’t sustain enthusiasm from the team in maintaining it. The additional overhead on adding entries to the cheat sheet resulted in few attempts to update it outside occasional bursts of endeavour from myself or one of the other testers. We did review the idea in a testing meeting and everyone agreed that it was a fantastic idea and incredibly useful and agreed that it is sad that we can’t seem to maintain it, but being brutally honest the information in our cheat sheet is rather stale.

Internal Test Patterns - A Template for Success

This is an interesting one, as the original purpose of the post was to use test patterns as a documentation tool - to document different structures within our automated testing so that others understand them. In this light our internal test patterns have fallen out of use: we don't embed the pattern name into the automation structure as a rule, so we can't easily identify the pattern used for any one test suite.

The patterns have proved useful, however, when it comes to review and training. I still refer to the test patterns, particularly the anti-patterns, when reviewing our automation approaches with the aim of improving our general structuring of automated tests. They’re simply not extensively used by others.

As a useful tangent on this subject - if you are interested in the idea of automation patterns then Dorothy Graham recently introduced me to a wiki that she is promoting which documents test automation patterns.

Skills matrix - Finding the Right Balance

I don’t use this technique any longer. I did find it useful at the time when I was putting together a small team, however I found it too easy to manipulate the numbers to provide reinforcement of my own existing opinions on what I need in the team, that I now simply bypass this matrix and focus on the skills that I believe us to be in need of.

A Difficult Read - Letting Yourself Go

I can see what I was trying to say here, but honestly reading it back it doesn’t read well and lacks a real coherent message. Definitely my top candidate for Room 101.

If at first you don’t succeed, try a new approach

What is really interesting about the list above is that some of the ideas that haven't worked quite as well as I thought seem to be the ones that I am most convinced are great ideas. Perhaps overconfidence arising from having personally found them really useful has meant that I don't try as hard when promoting them internally, because I assume they'll succeed. Whatever the reason, trying and promoting new ideas is a core part of my work ethic, and there are almost inevitably going to be some ideas and approaches that work better than others. I strongly believe that it is still worth writing about and promoting these. As with Liz's talk, perhaps through what Jurgen Appelo calls "The Mojito Method", ideas shared may inspire something in others even after the originator no longer finds them valuable.

As I move into my second century of posts, I'm thinking of expanding my subject matter slightly to reflect the fact that I'm involved in a variety of areas, including testing, which are rooted in the customer-facing disciplines of a software product company. Integrating our agile organisation into a larger, more traditional organisation also presents some interesting challenges which might merit mention in a post at some point. I hope that future posts are as enjoyable to write as they have been up until now. If there is the odd idea in there that doesn't work or read quite as well as I'd hoped, I apologise, and at the same time hope that it might still prove useful to someone in some unexpected way.

If you have your own candidates for blog "Room 101" please let me know by commenting - I'd love to hear from you.

Monday, 8 June 2015

The Who, Why and What of User Story Elaboration

Elaboration is one of the pivotal elements of our Agile process. At the start of every user story that we work on, the individuals working on that story get together in an elaboration discussion. The purpose of the discussion is to ensure that we have a common understanding of the story and that there are no fundamental misunderstandings between team members about what is needed.

Typically in my product teams the discussion is captured on a whiteboard in the form of a mind map highlighting the key considerations around the story. The testers have the responsibility to write this up as acceptance criteria, risks and assumptions, which they then publish and share with a wider audience. For the most part this process works well and allows us to create a solid foundation of agreement on which to develop and test new features.

Occasionally though we've been inclined to get ahead of ourselves and delve too quickly into technical discussions around a solution before we've really taken the time to understand the problem. The technique that I present here is one that I've found useful for injecting a little structure into the discussion and ensuring the focus stays on the beneficiaries of the story and on understanding the problem.

The Cart Before the Horse

For some stories the developers involved may have some early visibility and a chance to start thinking about possible solutions. Early visibility of a story provides a good opportunity to think about the potential problems and risks that various approaches might present. At the same time, bringing ready-baked solutions into the elaboration carries the risk of pre-empting the story elaboration by taking the focus into solution details too early. Occasionally I've seen elaboration meetings head far too quickly into the "here's what we're going to do" discussion before giving due attention to the originators of and reasons behind the request. Even with a business stakeholder representing the needs of the users in the elaboration, the early injection of a solution idea can frame the discussion and bypass key elements of the problem.

In order to avoid this, here's a simple approach that I use in elaboration discussions to provide some structure around the session and ensure that we consider the underlying needs before committing to discussion of possible solutions.


I start with a central topic on a whiteboard of the story subject and create the first mind-map 'branch', labelled 'Who'. This is the most important question for a user story: who are the beneficiaries of the story? If using an 'As a... I want... So that...' story format we should have a clear picture of the primary beneficiary, however that format contains precious little information to add meat to the bones. As a product company we have different customers fitting the 'As a' role who may have subtle differences in their approach and capabilities which the discussion should consider. Sometimes the primary target for the story is a role from one type of organisation, yet the feature may not be of value to the same roles in other types of company if it is not developed with consideration for them.

Also the "As a ..." format has no concept of ancillary beneficiaries who may be relevant when discussing later criteria. Whilst stories should aim to have a primary target role, we should certainly consider other roles who may have a vested interest in the means or method implemented. We can also identify these at this stage which can help to ensure the needs of all relevant beneficiaries are considered as the criteria for success are discussed.

By revisiting the "Who" question in the elaboration we can ensure that we flesh out the characters of all interested parties, including specific examples of relevant customers or users to give depth to the nominal role from the story title. Establishing a clear concept of 'Who' at the outset ensures the discussion is anchored firmly around the people we're aiming to help and provides an excellent reference point for the subsequent discussions.


The next question that goes on the board is 'Why'. This is the most important question in the whole process; however, until we've established the 'Who' it is difficult to answer properly. The 'Why' is where we explore the need that the beneficiaries have, the problem that they are facing. It is incredibly easy to lose sight of the problem as solutions are discussed, to the extent that feature developments can over-deliver and miss the core agile principle of minimising work done. Having the "Why" stage allows all of the members of the discussion to understand the problem and put key problem details on the shared whiteboard as an anchor for any subsequent discussion of solutions.

A commonly referenced technique for getting to the root value of a requirement is the '5 Whys'. Whilst this is a useful technique that I've used in the past for getting to the bottom of a request where the root value is not clear to the team, I think it is too heavyweight for most of our elaboration discussions. A single 'Why' branch on the board is enough to frame the conversation around the value that the target beneficiaries would hope to get from our development of that story.

Having 'Why' as the second step is important. David Evans and Gojko Adzic write in this post about changing the order of the 'As a... I want... So that...' structure because it incorrectly places the 'what' before the 'why'. Instead they place the value statement at the front - "In order to... As a... I want...". By adding branches to the elaboration board in this order we similarly give the target value the appropriate priority and focus.


The 'What' relates to what the user wants to be able to do. This is where we really pin down the 'I want...' items that define the acceptance criteria. The primary objective of the elaboration is to expand beyond the one-liner of the story definition. The testers' responsibility here is to capture the criteria by which we will judge whether sufficient value has been delivered to deem the story completed to an acceptable level (NB I'm intentionally avoiding the term 'done' here as I have issues with the binary nature of that status). The key is that we are still avoiding discussing specific solutions: the "What" should be clear but not specific to a solution.

As well as the primary feature concept we're also looking to capture other important criteria around what the beneficiaries of the story would expect from the new feature. This might include how it should handle errors and validation, usability considerations, scale-up and scale-out considerations, and how it should behave with other processes and statuses within our system. The benefit of a mind map here is that we can revisit the previous branches as items come up in later discussion, adding anything not previously thought of (such as system administrator stakeholder considerations) to the appropriate branch.


Finally, once the other subjects have been explored, we consider the 'How'. Some might argue that it is inappropriate to think about solutions at all during story elaboration. I'd suggest that avoiding the subject entirely, particularly if there is a solution idea proposed, feels contrived. As I discussed in the post linked earlier, we try to look beyond criteria during the elaboration process to identify potential risks in the likely changes that will result. This identification of potential risks can help to scope the testing and ensure we give adequate time to exploration of those risks if we pursue that approach. It is only once the 'Who, Why and What' have been clarified that we have sufficient contextual information to discuss the 'How' and the consequential risks that are likely to arise in the product as a result.
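
As a purely hypothetical illustration of the ordering described above, a story's branches captured in this way might end up looking something like the following Python sketch. The story and every entry are invented for illustration rather than taken from a real elaboration.

    # Invented example: one story's elaboration branches, captured in the
    # order discussed above (Who, then Why, then What, with How last).
    elaboration = {
        "story": "Export query results to a file",
        "who": ["primary: data analyst at a customer site",
                "ancillary: system administrator managing disk space"],
        "why": ["analysts need to share results with colleagues who cannot "
                "run the queries themselves"],
        "what": ["exports complete successfully for large result sets",
                 "clear errors when the target location is unwritable"],
        "how": ["possible approach: stream rows rather than buffering",
                "risk: encoding of non-ASCII values in the output"],
    }

    # Walking the branches in order keeps the beneficiaries and their need
    # ahead of any discussion of implementation detail.
    for branch in ("who", "why", "what", "how"):
        print(branch.upper())
        for note in elaboration[branch]:
            print("  -", note)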

What happened to "Where?" and "When?"

These could be valid questions for the discussion, however I'd consider them optional. Both "Where?" and "When?" may be relevant depending on the context of the story:

  • Where could include whether an action is performed on a server or from a client machine
  • When could include exploring the scheduling or frequency of execution of a process

I feel that including these as branches in the discussion every time would make the process feel rigid and cumbersome, so I would only include them if they are relevant to the subject of the story.

A placeholder for a conversation

I've often heard user stories described as 'a placeholder for a conversation'. That's an excellent reflection of their role. We can't expect to achieve the level of understanding necessary to progress a development activity from a one-line statement, no matter how well worded, without further conversations. The elaboration is the most important such conversation in our process. It is where we flesh out the bones of the story statement to build a clear picture of the beneficiaries of the story and the value that they are hoping to gain.

The 'Who, Why, What...' approach provides a useful tool in my elaboration tool-kit to ensure we give the relevant priority to understanding these individuals or roles and their respective needs. I've run elaborations in the past where we 'knew' what we were going to do in terms of a solution, only for this to change completely as we worked through the details of the process here. In one example the solution in mind was too risky to attempt, given that we needed a solution that was portable to an existing production release with minimum impact. In others we've revisited the proposed solution on the basis of a better understanding of how much the intended user will know about the inner workings of our system. Even when a solution is apparent, nothing is guaranteed in software and plans can change drastically as new information emerges. Using this technique helps to avoid coupling our acceptance criteria too closely to a specific solution and rendering them invalid in situations where that solution needs to be reconsidered later.