Wednesday, 15 July 2015

Room 101

My last post marked the 101st that I’ve published on this blog. Firstly, I’d like to say a hearty thanks to you if you’ve read any of my posts before. I’ve had a great time writing this blog so far and getting comments and feedback.

Secondly, this seems like a good time to reflect on some of the posts here and on the nature of writing, speaking and presenting material to a professional community. A couple of recent events have prompted my thinking about ideas that people present, perhaps in talks or blog posts, which grow to become less representative of that person’s opinions or methods over time, unbeknownst to the folk who may still be referencing the material.

A snapshot in time

At Next Generation Testing I had the pleasure of meeting Liz Keogh, who is an influential member of the Lean/Agile community and a great proponent of the ideas of BDD. We discussed a specific talk of hers from which I’d taken inspiration, applying her idea on fractal BDD projects to a different subject in my Fractal Exploratory Testing post. Liz admitted to me that her ideas and understanding of the projects she was referring to had moved on since giving the talk. It had been a useful model to present an idea at the time, but it no longer reflected her current thinking based on her subsequent experiences. For me, however, her thinking was frozen in time at the point that I’d seen her present that talk.

I also had the recent opportunity to take some new testers into my team. As part of their induction I talked them through some of the principles and ideas that define us as a team. A couple of times I found myself presenting ideas that I’d written about previously, and that were working well when I put the introduction slides together a couple of years earlier, but had since fallen out of use. The ideas were current when the posts were created but had not endured long-term acceptance outside of my own personal use.

Room 101

On UK television there is a show called Room 101. In it celebrity guests argue to have ideas, objects or behaviours committed to “Room 101”, which represents banishment to a place reserved for our worst nightmares. As any regular readers of my posts will know, I’m a great believer in admitting our mistakes and being honest about our failures as well as our successes as a community. Having just completed my 101st post, it seemed appropriate to publish a ‘Room 101’ list of some of the ideas that, while not my worst nightmares, maybe don’t reflect my current way of thinking, perhaps haven’t been quite so successful as I was hoping, or simply weren’t well written. So here are some of the posts that, if I’m honest with myself, are not quite as relevant or worthy of attention as I’d originally believed them to be.

Context driven cheat sheets - A Problem Halved

I’m a little gutted about this one because I truly believe in this approach and its value. The idea is that you generate a ‘cheat sheet’ akin to the famous one from Hendrickson, Lyndsay and Emery, but with entries specific to your own software. This worked really well in my company for a time, however I simply couldn’t sustain enthusiasm from the team in maintaining it. The overhead of adding entries to the cheat sheet resulted in few attempts to update it outside occasional bursts of endeavour from myself or one of the other testers. We did review the idea in a testing meeting and everyone agreed that it was a fantastic idea, incredibly useful, and sad that we can’t seem to maintain it, but being brutally honest the information in our cheat sheet is rather stale.

Internal Test Patterns - A Template for Success

This is an interesting one, as the original purpose of the post was to use test patterns as a documentation tool, documenting different structures within our automated testing so that others could understand them. In this light our internal test patterns have fallen out of use: we don’t embed the pattern name into the automation structure as a rule, so we can’t easily identify the pattern used for any one test suite.

The patterns have proved useful, however, when it comes to review and training. I still refer to the test patterns, particularly the anti-patterns, when reviewing our automation approaches with the aim of improving our general structuring of automated tests. They’re simply not extensively used by others.

As a useful tangent on this subject: if you are interested in the idea of automation patterns then Dorothy Graham recently introduced me to a wiki that she is promoting which documents test automation patterns.

Skills matrix - Finding the Right Balance

I don’t use this technique any longer. I did find it useful at the time when I was putting together a small team, however I found it so easy to manipulate the numbers to reinforce my own existing opinions on what I needed in the team that I now simply bypass the matrix and focus on the skills that I believe we need.

A Difficult Read - Letting Yourself Go

I can see what I was trying to say here, but honestly reading it back it doesn’t read well and lacks a real coherent message. Definitely my top candidate for Room 101.

If at first you don’t succeed, try a new approach

What is really interesting about the list above is that some of the ideas that haven’t worked quite as well as I thought seem to be the ones that I am most convinced are great ideas. Perhaps overconfidence, born of having personally found them really useful, has meant that I don’t try as hard when promoting them internally because I assume they’ll succeed. Whatever the reason, trying and promoting new ideas is a core part of my work ethic, and there are almost inevitably going to be some ideas and approaches that work better than others. I strongly believe that it is still worth writing about and promoting these. As with Liz’s talk, perhaps through what Jurgen Appelo calls "The Mojito Method", ideas shared may inspire something in others even after the originator no longer finds them valuable.

As I move into my second century of posts, I’m thinking of expanding my subject matter slightly to reflect the fact that I’m involved in a variety of areas, including testing, which are rooted in the customer-facing disciplines of a software product company. Integrating our agile organisation into a larger, more traditional organisation also presents some interesting challenges which might merit mention in a post at some point. I hope that future posts are as enjoyable to write as they have been up until now. If there is the odd idea in there that doesn’t work or read quite as well as I’d hoped, I apologise, and at the same time hope that it might still prove useful to someone in some unexpected way.

If you have your own candidates for blog "Room 101" please let me know by commenting, I'd love to hear from you.

Monday, 8 June 2015

The Who, Why and What of User Story Elaboration

The Elaboration process is one of the pivotal elements of our Agile process. At the start of every user story that we work on, the individuals working on that story will get together in an elaboration discussion. The purpose of the discussion is to ensure that we have a common understanding of the story and that there are no fundamental misunderstandings of what is needed between the team members.

Typically in my product teams the discussion is captured on a whiteboard in the form of a mind map highlighting the key considerations around the story. The testers have the responsibility to write this up in the form of acceptance criteria, risks and assumptions, which they then publish and share with a wider audience. For the most part this process works well and allows us to create a solid foundation of agreement on which to develop and test new features.

Occasionally though we've been inclined to get ahead of ourselves and delve too quickly into technical discussions around a solution before we've really taken the time to understand the problem. The technique that I present here is one that I've found useful to inject a little structure into the discussion and ensure the focus is on the beneficiaries of the story and understanding of the problem.

The Cart Before the Horse

For some stories, the developers involved may have early visibility of the story and a chance to start thinking about possible solutions. Early visibility provides a good opportunity to think about the potential problems and risks that various approaches might present. At the same time, bringing ready-baked solutions into the elaboration carries the risk of pre-empting the story elaboration by taking the focus into solution details too early. Occasionally I've seen elaboration meetings head far too quickly into the "Here's what we're going to do" discussion before giving due attention to the originators of, and reasons behind, the request. Even with a business stakeholder representing the needs of the users in the elaboration, the early injection of a solution idea can frame the discussion and bypass key elements of the problem.

In order to avoid this, here's a simple approach that I use in elaboration discussions to provide some structure around the session and ensure that we consider the underlying needs before committing to discussion of possible solutions.


I start with a central topic on a whiteboard of the story subject and create the first mind-map 'branch' labelled 'Who'. This is the most important question for a user story: who are the beneficiaries of the story? If using an 'As a... I want... So that...' story format we should have a clear picture of the primary beneficiary, however that format contains precious little information to add meat to the bones. As a product company we have different customers fitting the 'As a' role who may have subtle differences in their approach and capabilities which the discussion should consider. Sometimes the primary target for the story is a role from one type of organisation, yet the feature may not be of value to the same roles in other types of company if it is not developed with consideration for them.

Also the "As a ..." format has no concept of ancillary beneficiaries who may be relevant when discussing later criteria. Whilst stories should aim to have a primary target role, we should certainly consider other roles who may have a vested interest in the means or method implemented. We can also identify these at this stage which can help to ensure the needs of all relevant beneficiaries are considered as the criteria for success are discussed.

By revisiting the "Who" question in the elaboration we can ensure that we flesh out the characters of all interested parties, including specific examples of relevant customers or users to give depth to the nominal role from the story title. Establishing a clear concept of 'Who' at the outset ensures the discussion is anchored firmly around the people we're aiming to help and provides an excellent reference point for the subsequent discussions.


The next question that goes on the board is 'Why'. This is the most important question in the whole process, however until we've established the 'Who' it is difficult to answer properly. The 'Why' is where we explore the need the beneficiaries have, the problem that they are facing. It is incredibly easy to lose sight of the problem as solutions are discussed, to the extent that feature developments can over-deliver and miss the core agile principle of minimising work done. Having the "Why" stage allows all of the members of the discussion to understand the problem and put key problem details on the shared whiteboard as an anchor for any subsequent discussion of solutions.

A commonly referenced technique for getting to the root value of a requirement is the '5 Whys'. Whilst this is a useful technique that I've used in the past for getting to the bottom of a request where the root value is not clear to the team, I think that it is heavyweight for most of our elaboration discussions. A single 'Why' branch on the board is enough to frame the conversation around the value that the target beneficiaries would hope to get from our development of that story.

Having 'Why' as the second step is important. David Evans and Gojko Adzic write in this post about changing the order of the 'As a... I want... So that' structure because it incorrectly places the 'what' before the 'why'. Instead they place the value statement at the front - "In order to .... As a ...I want". By following the process of adding branches to the elaboration board in order we similarly give the target value the appropriate priority and focus.


The 'What' relates to what the user wants to be able to do. This is where we really pin down the 'I want...' items that define the acceptance criteria. The primary objective of the elaboration is to expand beyond the one-liner of the story definition. The tester's responsibility here is to capture the criteria by which we will judge whether sufficient value has been delivered to deem the story completed to an acceptable level (NB I'm intentionally avoiding the term 'done' here as I have issues with the binary nature of that status). The key here is that we are still avoiding discussing specific solutions. The "What" should be clear but not specific to a solution.

As well as the primary feature concept we're also looking to capture other important criteria here around what the beneficiaries of the story would expect from the new feature. This might include how it should handle errors and validation, usability considerations, scale-up and scale-out considerations and how it should behave with other processes and statuses within our system. The benefit of a mind map here is that we can revisit earlier branches as the discussion progresses, adding items not previously thought of (such as system administrator stakeholder considerations) at this stage.


Finally, once the other subjects have been explored, we consider the 'How?'. Some might argue that it is inappropriate to think about solutions at all during story elaboration. I'd suggest that avoiding the subject entirely, particularly if there is a solution idea proposed, feels contrived. As I discussed in the post linked earlier, we try to look beyond criteria during the elaboration process to identify potential risks in the likely changes that will result. This identification of potential risks can help to scope the testing and ensure we give adequate time to exploration of those risks if we pursue that approach. It is only once the 'Who, Why and What' have been clarified that we have sufficient contextual information to discuss the 'How' and the consequential risks that are likely to arise in the product as a result.

What happened to "Where?" and "When?"

These could be valid questions for the discussion, however I'd consider them optional. Both "Where?" and "When?" may be valid depending on the context of the story:

  • Where could include whether an action is performed on a server or from a client machine
  • When could include exploring the scheduling or frequency of execution of a process

I feel that the inclusion of these as branches in the discussion every time would make the process feel rigid and cumbersome, and would only include them if relevant to the subject of the story.
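For anyone who prefers to see the technique written down rather than drawn, the ordered branches could be captured in a structure as simple as the sketch below. This is only an illustrative model, not a tool we use; the story subject, branch entries and helper names are invented for the example.

```python
# A minimal sketch of capturing an elaboration discussion in the
# 'Who, Why, What, How' order described above. All story details
# here are hypothetical examples, not from a real elaboration.
from collections import OrderedDict

def new_elaboration(story_subject):
    """Create an empty elaboration map with branches in discussion order."""
    return {
        "story": story_subject,
        # The branch order mirrors the discussion order: establish the
        # beneficiaries and the problem before any talk of solutions.
        "branches": OrderedDict(
            [("Who", []), ("Why", []), ("What", []), ("How", [])]
        ),
    }

def add_entry(elaboration, branch, entry):
    """Add a discussion point to a branch, e.g. a beneficiary under 'Who'."""
    elaboration["branches"][branch].append(entry)

# Example usage with invented story details
board = new_elaboration("Scheduled report export")
add_entry(board, "Who", "Operations analyst at a retail customer")
add_entry(board, "Why", "Manual exports are error-prone and time-consuming")
add_entry(board, "What", "Reports can be scheduled at a chosen frequency")
add_entry(board, "How", "Reuse the existing job scheduler (risk: server load)")

for branch, entries in board["branches"].items():
    print(branch, entries)
```

Optional "Where" and "When" branches could be added to the structure in the same way when the story warrants them. The point of the ordering is exactly the point of the whiteboard technique: nothing lands under 'How' until the earlier branches have content.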

A placeholder for a conversation

I've often heard user stories described as 'a placeholder for a conversation'. That's an excellent reflection of their role. We can't expect to achieve the level of understanding necessary to progress a development activity from a one line statement, no matter how well worded, without further conversations. The elaboration is the most important such conversation in our process. It is where we flesh out the bones of the story statement to build a clear picture of the beneficiaries of the story and the value that they are hoping to gain.

The 'Who, Why, What...' approach provides a useful tool in my elaboration tool-kit to ensure we give the relevant priority to understanding these individuals or roles and their respective needs. I've run elaborations in the past where we 'knew' what we were going to do in terms of a solution, only for this to change completely as we worked through the details of the process here. In one example the solution in mind was too risky to attempt given that we needed a solution that was portable to an existing production release with minimum impact. In others we've revisited the proposed solution after expanding our understanding of how much the intended user will know of the inner workings of our system. Even when a solution is apparent, nothing is guaranteed in software and plans can change drastically as new information emerges. Using this technique helps to avoid coupling our acceptance criteria too closely to a specific solution and rendering them invalid in situations where that solution needs to be reconsidered later.

Thursday, 28 May 2015

In the Real World

A phrase that you hear a lot in software circles is 'in the real world'. I've attended many conference talks and presentations since I've worked in software testing. Typically there is an opportunity for Q&A either during or after the talk - from my experience of speaking this is one of the most nerve-wracking stages of any talk, as you simply don't know what is going to come up. The phrase 'in the real world' is one that is often thrown up during this stage. The source is typically an audience member suggesting that the ideas or approach presented in the talk are not applicable in a real-world situation. Sometimes the questioner restricts their 'real world' to their specific company or role, however I have seen those who go beyond this, claiming to represent all testers operating in real companies in their dismissal of an idea.

I've been thinking about this a lot recently, particularly in relation to Agile. On one hand we have a manifesto and a set of principles that back it up. Out of this has grown a number of methodologies, such as Scrum, that dictate practices. I've seen how deviation from the core practices of scrum can incite accusations of 'doing it wrong' and labels such as 'Scrumbut' and 'cargo cult'. At the same time, one of the key principles of Agile is to continuously review and improve as a team. Any improvements will inevitably be driven by context and our own 'real world' and will therefore involve tailoring our approach to our specific needs - so how do we tell the difference between valid, pragmatic, context-based augmentation of our process and a scrumbut-esque gap in our Agile adoption?

Evolution is good

I think that any approach will require some modification in its application to a real-world context. In my talk at EuroSTAR 2011, and a few others since, one of the key themes that I focussed on was how short time-boxed iterations and regular review allowed teams to evolve over time to yield massive improvements in their development activities. I strongly believe that this is the greatest single benefit of an iterative approach to software development. At the same time, any deviation from strict adherence to an approach, scrum in our case, can yield criticism of 'doing it wrong'.

Measuring up

With rather fortuitous timing - as I was noting down the initial ideas for this post - one of our team, John, raised a point in our last retrospective questioning how well we compare to the 12 Agile principles. He wanted to highlight that years of retrospectives and general familiarity with our process could have resulted in our taking our eyes off the ball in terms of working towards the Agile principles. This was a great reminder for me of how quickly time passes. It is easy to forget the importance of reviewing not only with the aim of improving within your own context, via scrum retrospectives, but also taking the time to measure yourself against the principles of the approach that you follow.

As a result we decided to have a review of the Agile principles to discuss how we were doing and whether there were areas we wanted to revisit. John arranged a session where we reviewed the principles and discussed them in the context of which we felt we were delivering well, which we could do better at, and whether there were any that we felt might need some valid amendment to apply to our situation.

The Agile Principles

For anyone not familiar with them, the 12 Agile principles are documented here. Let's examine each in turn.

  • Our highest priority is to satisfy the customer through early and continuous delivery of valuable software.

    There is an interesting assumption embedded in this principle: that the customer will be satisfied with early and continuous delivery of software. For some contexts this assumption is perfectly valid, however for a product company delivering into blue chip organisations I seriously question its applicability. Installing a new version of our software into a production environment is a significant undertaking for many of our customers. What they really want is to install a robust and stable piece of software that they can build and run their business processes around without disruption. For my team a more appropriate principle would be one around providing valuable new functionality through frequent iterations, whilst ensuring the final release delivery satisfies the contracts established through previous releases.

  • Welcome changing requirements, even late in development. Agile processes harness change for the customer's competitive advantage.

    This is an interesting one when applied to a scrum approach. The principle states that we welcome changing requirements, however many scrum teams operate with a policy of not changing scope within a sprint - in our case 4 weeks. If the principle, applied to scrum, relates to having different requirements on a sprint-by-sprint basis then it is inherent in the process, as we aim for a level of completion and replanning at each iteration that allows for changes. Given that we're delivering working software with each iteration I would not describe such changes as being 'late in development'. For me a 'late in development' change in scrum would be within the sprint, thereby positioning this principle somewhat contrary to the scrum process. For our team we aim to minimise the changes within a sprint to reduce context switching, but we will be flexible to changes in scope if there is a clear business need. I think that this is a case of a principle that works well when contrasted with the more common 'waterfall' based approaches it was clearly written in response to. For a team who have grown up around an iterative approach, the concept of 'late in development' may differ from the original intention of the principle, to the extent that the wording might need reviewing as our perspectives on what constitutes a development lifecycle change.

  • Deliver working software frequently, from a couple of weeks to a couple of months, with a preference to the shorter timescale.

    This one is core for us, and I imagine for most agile teams. Maintaining a position of working software ensures that we never stray too far from being able to deliver the software, which reduces the likelihood of significant deadline increases due to discoveries that impact releasability.

  • Business people and developers must work together daily throughout the project.

    Is this true? Gojko Adzic once wrote a great piece on the mythical product owner. Can we really expect to have business decision makers present in our scrum teams on a day-to-day basis? For a product team we can't realistically expect a representative of each of our customers to work in our teams; we therefore have to have a proxy role, and most teams aim to have a product owner here. My concern is that adding a proxy role in between the development team and the customers could actually allow the team to distance themselves from understanding the customer. I think that a better principle for many is that the members of the development team work frequently with business representatives on a conversational basis to ensure that they have an excellent understanding of the business needs and can act in a proxy capacity to assess the software. As I wrote about in this post, I think that this is a role that a tester can take on in an agile team.

  • Build projects around motivated individuals. Give them the environment and support they need, and trust them to get the job done.

    This was one that we marked as 'could do better'. I think in general we have a good level of motivation in the team, however one or two felt that there was room for improvement here and that was sufficient for it to be marked as such. This is another of the principles where I felt that our assessment of our position was very much performed relative to our position as a successful agile team, where the original intention was perhaps relative to the pervasive development cultures at the time of writing. I believe that teams built around Agile approaches typically enjoy higher levels of motivation in their work than more traditional organisations due to greater levels of autonomy and collaboration, yet this is now the standard against which we measure ourselves, so the bar has been raised.

  • The most efficient and effective method of conveying information to and within a development team is face-to-face conversation.

    This was another one that we marked as 'could do better', yet again I believe that we do a great job here if we compare ourselves not to our own high standards but to the cultures that this principle was written in response to. Developers and testers don't communicate primarily through the bug tracker as they have in organisations I have worked in previously. Testing isn't a production-line process which takes in requirements and software and produces test cases, bugs and metrics. Perhaps our over-reliance on emails causes concern for some, however I think we do a grand job here and easily meet the principle as it was originally intended - with high levels of interactivity and engagement between team members. In some ways the fact that some felt we could improve here was a real positive, in that again the cultural bar has been raised and we now need to measure our principles against the new expectation of highly collaborative teams and cultures.

  • Working software is the primary measure of progress.

    Yes - 100% this is us. We measure our progress based on what is finished and works. I think that this is one of those principles that really was created in response to some flawed models of tracking project progress that allowed a project to appear to be 90% complete, only for the final 10% testing phase to take as long as the rest of the project combined due to the late exposure of issues. I've heard stories of Agile adoptions where management failed to relinquish the need to focus on quantifiable measurements of progress and so misguidedly rounded on story points/team velocity as some kind of comparable metric to measure progress and compare the efficiency of different teams. I'm glad to say that I'm not in one of those organisations.

  • Agile processes promote sustainable development. The sponsors, developers, and users should be able to maintain a constant pace indefinitely.

    When I started in Agile 8 years ago we went at a relentless pace. I think we did 12 sprints without a decent break, and as we pushed towards releases we had the stereotypical weekend working to meet deadlines. One time the situation was so bad that I fell out with the then manager, as I had a friend visiting from another country and I was not prepared to miss their visit to go to work on a Sunday. I'm pleased to say those days are gone. We might not always achieve everything we plan in the time originally intended, but we respect the need for a sustainable pace. If we don't meet our targets within the pace at which we operate, that is accepted.

  • Continuous attention to technical excellence and good design enhances agility.

    I think that Agile can be its own worst enemy in this regard, as short timeboxes can lead to myopia and 'shortcut coding' if we don't maintain a focus on this principle. I have seen the situation where, rather than risk breaking existing functionality by modifying an existing interface to a file store or database, a developer has simply written another interface. Whilst this reduces risk to the existing code in the short term, over time it leads to brittle code and a nightmare for testers and programmers alike as the number of features that are independently affected by a change to the storage layer increases. Sustainable development does not only refer to the pace of the team but also to the quality of the code; if the quality deteriorates over time then the pace of development will slow. I completely agree here - and have seen the impact on agility that hasty design can have.

  • Simplicity--the art of maximizing the amount of work not done--is essential.

    As anyone who has read my post "Putting Your Testability Socks on" will know, "Simplicity" is also one of the testability factors. I question whether the definition of simplicity in this principle could actually contradict the testability one, on the basis that reducing work done does not necessarily lead to simplicity in the product design. In fact it can have the opposite effect - as in the example I gave above, where taking a myopic approach of minimising the work can come at the expense of maintaining good code structure. This is clearly not what we are aiming for. For me, simplicity should not be about maximising the amount of work not done. It should be about minimising the amount of functionality in the product. It should be about minimising the number and complexity of interfaces. It should be about maximising the lines of code not present in the code base. Sometimes achieving these elements of simplicity actually requires more work. I know what the principle is driving at here, however the wording is flawed.

  • The best architectures, requirements, and designs emerge from self-organizing teams.

    I like to think that this is the case, however I have no evidence to back this claim up. If I were to take a contradictory stance I'd argue that a more accurate, but possibly more trite, definition would be that "The best architectures, requirements and designs emerge from people who know what they are doing". It is not so much the self organising team that I've seen yielding the best designs, but the teams who have grown to develop a high level of tacit knowledge around the workings of the product and the needs of the customer. In our review we struggled with this one, mainly due to the fact that it is such an absolute statement. We decided that we probably 'could do better' on the basis that some of our requirements and designs come from outside the self-organising teams, however whether this necessarily means that they are worse for that is hard to say. A possibly less trite attempt might be "The best architectures, requirements, and designs emerge from collaboration between individuals with a range of relevant expertise and experience".

  • At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behaviour accordingly.

    Which brings me neatly back to my initial thoughts - "I strongly believe that this is the greatest single benefit of an iterative approach to software development". The frequency of review and improvement in an Agile adoption supports a level of tuning of procedures that allows a team to evolve to suit whatever context it operates in. This evolution may sometimes result in practices that run contrary to the idealised models of Agile development that one finds in the syllabuses of certification courses and the training material of Agile tool vendors, however this is no bad thing, as long as we maintain a clear picture of the principles that we are working to.

The Beauty of Principles

"In the real world" it sometimes pays to review what we do in light of a more idealised viewpoint. Whilst we might decide that not all of the practices of our chosen development methodology are suitable for our context, by reviewing ourselves against the principles we can see whether we are straying from the core intentions of that approach. Whether I agree with them all or not, I like the agile principles and the fact that they exist. I find that a set of principles, as with the "Set of Principles for Test Automation" that I created to guide our automation, provides clear guidance without imposing too rigid a structure or being too woolly and trite (company mission statement anyone?). What we do need to do is ensure that those principles are maintained so that they stay relevant. With the Agile principles I feel that, given the massive impact that agile has had on development teams, they may now benefit from being revisited, as agile has itself moved the goalposts against which they are measured.