Monday, 30 November 2015

The Fourth Amigo

Several years ago I was lucky enough to take over the management of the technical authoring for our product. Technical authoring goes by a number of names, "Technical Documentation", "Information Management", "Technical Communication" - all referring to the process of professional technical authors creating user guides. Managing this area was a privilege for me as it has allowed me to learn from some fantastic individuals who are highly professional and work autonomously with little input from me. At the same time, taking on this responsibility presented something of a challenge - one which I know has affected other technical authors working across Agile teams: how to synchronise the production of technical documentation with an Agile, user story based development process.

An awkward fit

At the point that I was given the remit of managing the technical authoring for RainStor, we had a sole technical author. The documentation that she was producing was a fantastic asset to the company, and its quality and professionalism really helped to provide new customers with confidence in our product. At the same time, she was having some difficulty in managing her work around the Agile sprint approach that the rest of the development team were using: highly competent as she was, the process of documentation sat somewhat uncomfortably alongside the sprints.

The documentation approach had always been to plan the work on a release basis. Historically the team had relied heavily on requirements specification documents, which supported this approach: the technical author could create documentation from the specification rather than the working product. As the team moved away from requirements specifications to developing based on user stories, with little in the way of up front documentation, she was forced into capturing information on new features only after the functionality had been completed. This typically happened once the sprints in which the features had been developed and tested were finished, placing a lot of pressure on her at release time. Also, as many of the design decisions and much of the context around the use of the features were discussed in the collaboration during the sprints, she often lacked relevant information and context as she worked on the documentation later.

Looking at her situation, I felt that the solution was clear, although achieving it was going to be a challenge. It was my belief that it was possible to produce the technical documentation for our product within the sprints, as part of the whole team process of completing the user stories. Bringing the technical authoring in as an integral part of the sprint activities would not only allow us to maintain documentation that reflected the software being developed; it would also mean that the technical author would receive greater support from the team, as her work would relate to the development that the rest of the team was collaborating on at the time. Whilst it seemed apparent that incorporating the technical documentation activities more tightly into the user story activities was the key to success here, the process of achieving that was not trivial.

The missing amigo

It was clear that the critical point on which the success of integrating the technical documentation into the sprints would primarily depend was the elaboration process. The elaboration of the user story is a critical stage of the lifecycle of a story, where our team aims to turn the 'placeholder for a conversation' into a more tangible set of agreed acceptance criteria and other key elements to support a common understanding. A commonly used phrase in Agile media for the expected attendance list for this elaboration activity is "The 3 Amigos" (George Dinwiddie is generally credited as introducing the term). These three were originally the Product Owner, the Developer and the Tester, and the concept arose to stress the importance of having representatives of each of these disciplines present when agreeing the criteria for success of the stories.

What became apparent to us was that there was a 'missing amigo' in this list - the technical author. In the important discussions around the beneficiaries and the value of the developments, it was clear that the presence of the technical author was essential if she was to understand the requirements behind the user stories and document in that context, rather than just from the eventual software behaviour. We made sure that the technical author was invited to each elaboration. This was not always easy, as the team were in the habit of arranging the sessions very fluidly and it was easy to forget to invite her along. A few experiences of the 'post-it of shame' from the author in the retrospective, pointing out these omissions, were enough to cement the principle into the team's psyche, and the inclusion of the technical author in the elaboration meetings soon became a fundamental part of our process.

What became clear very quickly was that attending the elaboration wasn't just one-way traffic for the benefit of the author. Just as each of the original '3 amigos' roles has relevant input into the acceptance criteria around the story, the technical author too can provide a unique and valuable perspective. Writing the user guides gives a great understanding of how the software is presented to the customer, and technical authors are particularly strong at identifying issues of consistency across the product, and at picking up illogical naming and 'developer speak' in the naming of concepts.

As well as the elaboration meetings, we also made sure that the documentation activities were included in other core areas of our development process. For example, we included a column for tracking documentation activity on our scrum board, and any familiarisation demos in which the developer demonstrates changes to the testers would also include the technical author.

May the Fourth be With You

In recent years we've introduced another technical author to the team. This was an interesting test of whether the integration of the technical documentation in the team would work for someone without the many years of product experience that our existing author possessed. I'm pleased to say that the new author adapted fantastically well to the approach. He has relished the level of interaction that he has with the development teams and has used it to very quickly build knowledge of the product.

We have not gone as far as integrating an individual author into each of our scrum teams, mainly because the level of documentation between teams varies so greatly depending on the features they are working on. Instead the authors manage their time across the teams, attending standups and elaborations across the scrum teams, getting documentation changes reviewed and approved by the rest of the team as they go along.

In talking to other technical authors across the multinational organisation that I now work in, I've found that doing technical documentation in Agile is a common area of difficulty. Moving from a more isolated role, documenting based on up front specification documents, to a role with more interaction with the development teams and incremental delivery is not easy. What those responsible for managing the documentation activities have to appreciate is that the reduction in 'in-process' documentation and specification that Agile approaches strive for has a profound impact on the approach to technical documentation. Any initiative to move to agile in development/testing must also consider a closer integration of technical authoring, to allow the authors to benefit from the collaboration that helps the team to define and refine their specifications. It makes me very proud that our team is integrating in just such a way, and as a result producing high quality documentation during the sprints, based on the features delivered within them.

Image: "Four Amigos" by Garrett McFann - https://www.flickr.com/photos/a440/2065938547/

Wednesday, 4 November 2015

The Living Test Strategy

What is a test strategy? It’s not necessarily as easy a question to answer as it may have been a few years ago. In previous testing roles I used to be able to explain clearly what a test strategy was - it was a document. The test strategy was the document that I created at the outset of a long development project with the purpose of defining and agreeing the approach that the testers on the project would take in testing each requirement and area of the software under test. I knew exactly how the document was structured, as it looked almost exactly the same each time that I produced it. Each time, we would undergo the ritual dance of review, amendment, re-publication and eventual sign off with development, project and/or product managers, sufficient to convince all concerned that we were taking the testing seriously.

The value of the strategy from a testing perspective was limited. The majority of the content would be boilerplate material that was transferable, not only from one project to the next but possibly from one product or company to the next, without significant modification. Without the strategy document the testing performed would have been much the same. Often the testing work was carried out not because the test strategy defined that we should do so, but because as testers we felt that it was a good idea.

I have many examples of where the approach I’ve taken deviated from the defined strategy; a couple of the better ones are:

  • On one data analytics engine product, the developers created an in-house database comparison tool which I adopted and used to compare data sets through various load/append/update processes to validate the integrity of our indexed fields - this was never defined in the test strategy but proved invaluable in exposing issues in the data maintenance processes (a sketch of the kind of comparison involved follows this list)
  • On a primarily test script driven implementation project we adopted a phase of initial exploration to assess the stability of each new release prior to progressing into the scripted test execution stages in order to save time working through test scripts against sub-par releases from the supplier
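
To illustrate the first of these, here’s a minimal sketch of the kind of before-and-after comparison such a tool performs. The engine, table and column names are hypothetical stand-ins (sqlite3 keeps the example self-contained); the real tool was an in-house utility for our own engine.

    import hashlib
    import sqlite3  # stand-in engine so that the sketch is self-contained

    def snapshot(conn, table, key):
        """Map each key value to a hash of its full row."""
        cursor = conn.execute(f"SELECT * FROM {table} ORDER BY {key}")
        columns = [d[0] for d in cursor.description]
        key_index = columns.index(key)
        return {row[key_index]: hashlib.sha256(repr(row).encode()).hexdigest()
                for row in cursor}

    conn = sqlite3.connect("analytics.db")
    before = snapshot(conn, "readings", "id")
    # ... run the load/append/update process under test here ...
    after = snapshot(conn, "readings", "id")

    # Pre-existing rows whose contents changed are a red flag for the
    # integrity of the data maintenance process.
    corrupted = [k for k in before if k in after and before[k] != after[k]]
    print(f"{len(corrupted)} pre-existing rows changed during maintenance")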

The important point in these examples was that, whilst a strategy was defined and agreed at the start of the project, decisions on the approach were made later to increase the value of testing. These decisions were made as a result of the discoveries that emerged as the testing progressed. Whilst overall the work would tie in with the approach defined in the strategy document, the actual approach taken had less to do with that document and more to do with the experiences of myself and my colleagues.

In my recent work I have moved away from producing a test strategy document. Aside from the fact that I found it had limited value in the situations where I’d created one previously, as I’ve moved more explicitly to a testing approach built around exploration, establishing relevant tests based on needs arising throughout an agile sprint based process, the creation of a test strategy document in advance smacks of futility. This creates something of an information vacuum when it comes to discussing the testing approach, and one that some might feel uncomfortable with. If we dispense with a test strategy document, where does our strategy reside? How do we define what we do and how we test? In order to explore those questions further I’m going to look at what I see as the two different uses of the test strategy, and how these might be addressed in an agile organisation built on foundations of exploration: defining the strategy for the team, and explaining the strategy to others.

Picking the Right Team

Alex Ferguson, Arsene Wenger, Brian Clough, Steve Hansen, John Madden, Vince Lombardi - these names might be more familiar to some than others, but they are all highly successful sports managers. In achieving the huge successes that each one has, I doubt very much that any of these, or any other top sports manager, has sat his team down with a 40-page Word document and gone through a multi-phase review process until the entire team, coaching staff and board of directors are happy to start a game. Certainly a strategy is communicated and shared prior to each game through team training and team discussions. The strategy of the successful manager starts way before that though - the strategy is encapsulated in the team itself. The starting point of the strategy lies in the players that are in those training sessions and team talks in the first place. The top managers will hire and construct their squads based on the skills of the individual players involved and the need to fill the squad with the breadth and depth of skills to be successful. If a manager wants to play fast, one touch attacking football then he will hire differently to one that wants to stifle play and soak up pressure.

I see hiring in testing as similarly an element of the strategy (I refer here specifically to testing, however this is just as valid for development overall). The structure of the team plays a massive part in the strategy. There is no point, for example, in hiring testers who relish the predictable execution of scripted tests if you want to promote an exploratory culture. Similarly, if you want a high level of predictability and rigorous documentation, then filling your team with headstrong and expressive testers with a disdain for filling in documents is going to be counter-productive. I’ve been fortunate to avoid too many attempts to enforce ‘top down’ changes in approach onto teams that were hired into a very different culture, however when I have seen it done I’ve seen high levels of friction and resistance - the team was simply not the right one for the new strategy.

In 2013 I guest wrote a piece for Rob Lambert’s blog on ‘T-shaped testers and square shaped teams’. One thing that was implied in that piece, but perhaps not made explicit, was that the creation of the ‘square shaped team’ is a test strategy defining activity. For me, testing strategy starts with the testing skills of the individuals that we hire into our teams. As a result, boilerplate factory specifications have as little place in my hiring process as they do elsewhere in our development processes. Just as each hire into a sports team is made based on the abilities of the existing players and the areas where skills shortages exist, so each hire into my testing team is based on complementing and reinforcing the skills already present, to create a team that has the capability of delivering the approach that our strategy targets.

Every Player has a Role

Getting the right skills and attitudes in the team is a critical starting point in delivering strategy. Nevertheless, as many star-filled sports teams have demonstrated, having great players does not necessarily guarantee success. Just as important an element in a successful strategy is ensuring that the roles each member of the team plays are appropriate, and that between them the team covers all of its responsibilities. If you put your star attacking players in defensive roles, or fail to ensure that each player knows their responsibilities when defending, then the result will be a lot of confusion.

A second critical element of a team strategy is therefore ensuring that the responsibilities of the individuals and teams are understood. In my last post I looked at the nature of exploration and the multiple levels at which the approach that a tester or team is taking can be defined using charter based language. As an experiment in a group session, I recently asked my team to define what they saw as their charters at a general level within the organisation and the teams they worked in. What was clear from the exercise was that, whilst we’ve not historically defined individual responsibilities explicitly, every member of the team had a strong understanding of the expectations of their role and the part they played in the overall testing activity. Individuals also naturally tended to define their charters at different levels depending on their own level of experience and responsibility, with the less experienced members of the team encapsulating their work at a lower, more detailed level than those whose greater experience or responsibility required a higher level appreciation of the testing strategy.

One clear consensus coming from the team was that providing more explicit role definitions from management would be counter-productive, as new needs were constantly arising and the team approach was shifting to incorporate them. Individuals felt comfortable adjusting, sometimes through self-organising and sometimes a little more directed, but always able to shift their individual focus to incorporate new activities and responsibilities into the overall remit of the testing group. As I discussed in my last post, this ability to change and redefine approach at different levels is a characteristic of a successful exploratory approach and a key component of our testing strategy.

Explaining Strategy

So, from a team and test management viewpoint, I believe that a testing strategy is encapsulated in the individuals that we have in the team and the responsibilities and charters that they fulfil. Having a strategy that is only known to the team, however, is sometimes not sufficient. Sometimes it is necessary to define testing strategy to others outside the group, and one argument for a test strategy document is that it helps to get the testing approach agreed and ‘signed off’ by other interested parties. I’d argue that there are more effective means by which this aspect of test strategy can be achieved. Yes, there is a need to communicate our testing strategy outside of the testing group, however do we really need to predefine all of the testing activities to a detailed level in order to achieve this?

In my experience the individuals involved in the test strategy review process found it a tiresome one, as they did not necessarily have the knowledge of testing approaches, techniques or tools to assess whether the approach was the most appropriate, or even valid. The result was therefore an inclination to refer to industry best practices and stock approaches as a means to fill the void of understanding and reduce the risk of personal culpability. “Are we adopting industry standard best practices here?” is a question that anyone with little or no understanding of a subject can rely on to provide input into a strategy review process, neatly placing the responsibility for the approach on the ‘industry standards’ and the onus onto the testing team to satisfy the implications.

I find personally that development managers and product owners would prefer not to have responsibility for understanding the finer details. What most would prefer is an overview of the testing approach at a more abstract level, leaving the details of execution to those whose job it is to understand them. To this end I’ve found that a well placed presentation summarising the testing approach for those outside the team achieves a quicker, clearer understanding of the testing strategy than reading through pages of detail on how each requirement is to be tested.

A shaky defence

A final reason for presenting the entire testing strategy to management in advance of testing is a more cynical one. Sometimes this is done as an attempt to protect against a ‘blaming the tester’ scenario. Some may labour under the mistaken belief that getting a test strategy signed off in advance affords some protection from subsequent blame if problems are discovered, on the basis that ‘management signed it off’. This is a false premise, though, for exactly the same reasons as above: we cannot expect other parties to have the same level of insight into the appropriate testing approach as the person creating the strategy, and therefore attempting to lay some culpability at the feet of others, should that strategy prove to be flawed, will have limited success.

I’d personally rather take responsibility for the strategy details, through the structuring of a skilled team and by maintaining flexibility of strategic choice throughout the process, than be restricted to a specific approach on the basis of diminishing the blame later.

Image: https://www.flickr.com/photos/laurencehorton/7289066310/

Monday, 12 October 2015

Fractal Exploratory Testing Revisited

In my recent post 'Blog Post Room 101' I discussed the situation where the ideas that we present perhaps don't hit the mark with others, or don't have the staying power that we'd first hoped for. In contrast there are ideas that are reinforced through our reading, subsequent experience or adoption by others, and we find them developing over time. One such idea for me is the concept of Fractal Exploratory Testing that I first wrote about in 2013.

This is an idea that I've had good cause to review recently, as Janet Gregory and Lisa Crispin included it in the Exploratory Testing chapter of their "More Agile Testing" book. I was both flattered and somewhat unnerved by this, as I felt that the idea as presented in my original post was quite raw and not as fully formed as I'd have liked. What I find, however, is that the more I review the ideas I originally outlined in that post, in light of my subsequent experience and other material that I've read on the subject, the more neatly the idea fits, and I'm very pleased that it was included.

An exploration within an exploration

In her excellent book on the subject of exploratory testing, 'Explore It!', Elisabeth Hendrickson uses Jefferson's letter prompting the Lewis & Clark expedition to find a navigable route across the U.S.A. as an example of a good charter. James Bach also references this expedition in his earlier work, including this paper explaining ET. I think this analogy between exploratory testing and the exploration of territory is a good one, and the Lewis and Clark expedition provides a particularly good demonstration of a successfully executed exploration. Whilst comparing their mission directly to an individual test charter provides a useful analogy, an element that isn't necessarily apparent through such a comparison is the many layered nature of exploration demonstrated through the expedition.

Lewis and Clark's overall charter was well defined, however the decisions over how to explore and what resources to use during each stage of the expedition were developed through a series of smaller explorations that formed part of that greater process. Their charter was a high level one, with a very open ended remit on hiring people and buying equipment to carry out the mission. Within that charter they undertook a variety of smaller exploratory activities, some planned, many prompted by the events and discoveries of the mission itself. Decisions were made and refined on the basis of information gathered at each smaller stage of exploration, and Lewis and Clark pursued new avenues of exploration and changed the resources they used on the basis of the discoveries made.

  • On discovering that their boat was too large to navigate further up the river they used local resources and skills to create wooden canoes
  • On encountering a fork in the river with two branches of nearly equal size they spent days exploring both branches to decide on which was the Missouri
  • They experimented with a wire frame canoe covered in hide, trying different hides to see which, if any, was most suitable
  • On discovering no suitable hides to line the canoe they abandoned this approach and explored locally to find more wood to fashion traditional canoes

The success of Lewis and Clark's mission relied heavily upon their willingness to alter their approach as they went along. Imagine instead if they had decided up front that the larger boat was the only means by which they were going to explore the river, and that they would go to any lengths to achieve that goal, or if they'd decided that the wire framed canoe was the only means by which they would navigate the smaller sections of the river - would they have been as successful? I don't believe so. The success of their mission came about in large part due to their ability to experiment and learn at each stage of their activities and to redefine their approach as a result of the discoveries they were making, even if that meant completely abandoning an approach in light of evidence that it was ineffective. What Lewis and Clark were doing, in the process of undertaking one large exploration, was tackling many smaller exploratory activities. Each of these activities possessed characteristics that were apparent in the overall mission, but on a smaller level, and each was targeted towards a common goal defined by the larger mission, yet each was in itself distinct.

Fractal Recursivity

I believe that the exploratory activities undertaken as part of a larger mission exhibit characteristics which can be viewed as "fractal recursivity". The term fractal recursivity originated in the study of ideologies of language, and describes how groups which share a common language differentiate themselves from 'others' based on nuances of accent. The fractal element arises because the phenomenon can be observed at the local and regional levels just as effectively as at the national. In this paper Mary Antonia Andronis explains the core principle:

Integral to the idea of fractal recursivity is that the same oppositions that distinguish given groups from one another on larger scales can also be found within those groups. Operating on various levels, fractal recursivity can both create an identity for a given group and further divide it.

So the core idea is that characteristics that can be used to differentiate items at one level can also be applied in the same way to differentiate sub-elements of those items at a lower level.

In "Explore IT" Hendrickson outlines a simple template for an exploratory charter, and in that template identifies three primary characteristics of an exploration:-

  • An area that is targeted for exploration
  • Resources that will be used
  • The information hoped to be gained

The ability to define and differentiate activities at various levels, from the overall mission to smaller explorations within it, by referring to these characteristics is what, for me, characterises exploration as demonstrating fractal recursivity. Definitions in terms of these properties can allow an individual or team to understand the scope of an exploratory operation and how it differs from others, at whatever level that operation is defined. This allows the coordination of effort, ensuring that the relevant tasks are undertaken, but without predefining the actions required to complete them.

A hypothetical example

The Lewis and Clark expedition has been well covered by greater minds than I, so let's instead look at a hypothetical example as a means to illustrate the idea further. Imagine that you are leading a mission to sail to an island which you believe to be uninhabited. Your ship has crew, equipment and rations suitable for your mission, which is:

"Explore the island with the vessel, equipment and crew to establish the suitability of the island for establishing a permanent settlement"

This is quite a broad mission, so in planning your approach you would probably break the mission down into some activities that you intend to focus on at the start:

"Explore the coast of the island with the ship and rowing boats to establish locations for a harbour"

"Explore the forest behind the beach with axes and saws to locate suitable building materials"

"Explore the valleys inland with a small team of people and digging equipment to identify sources of fresh water"

"Explore the sea around the island with fishing nets to establish whether there are sufficient fish to constitute a useful food source"

These activities all form part of the wider mission, and all have the properties that Hendrickson defines as making up a good exploratory charter - resources, a subject area and information to obtain. Whilst some may share some of these properties, no two activities will share all of them, so these smaller explorations can be differentiated from each other through these properties, just as the wider mission may be differentiated from other such missions on the same basis.
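
As a minimal sketch of how these three properties can describe an exploration at any level, the mission and activities above might be modelled like this (the structure is purely illustrative, not a tool we used):

    from dataclasses import dataclass, field

    @dataclass
    class Charter:
        target: str        # the area to explore
        resources: str     # what will be used to explore it
        information: str   # what we hope to learn
        children: list = field(default_factory=list)  # smaller explorations within this one

    mission = Charter(
        target="the island",
        resources="the vessel, equipment and crew",
        information="suitability for a permanent settlement",
        children=[
            Charter("the coast", "the ship and rowing boats",
                    "locations for a harbour"),
            Charter("the inland valleys", "a small team and digging equipment",
                    "sources of fresh water"),
        ],
    )

    # No two charters share all three properties, so any activity can be
    # differentiated from its siblings at the same level, and the same shape
    # recurs at every level - the fractal recursivity described above.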

There will likely be a set of initial planned activities, however as these are being carried out, the teams will be gathering further information about the landscape and environment which may lead to further tasks.

"Find out the best route from the sheltered harbour we discovered to our preferred base camp location 2 miles up the coast using a compass and a machete"

Or they might possibly identify new risks which merit new exploration:

"Explore the inland swamp with spears to check whether those small crocodiles we saw have any potentially man eating cousins"

"Explore those ominous drumming sounds that we heard when exploring the forest using keen hearing and tip-toes to establish whether the island really was uninhabited"

The need for these activities can't necessarily be predicted before the exploration has started. The discoveries made through the initially planned explorations give rise to further activities which target the discovery of different information, and may require different resources to complete. The expedition is characterised by a series of explorations, some planned, some triggered by discoveries along the way, but each with its own independent goal, information and resources, all contributing to the overall charter of the expedition.

The ability to define and differentiate at each level is key to coordinating the overall expedition. Imagine if you were leading this expedition and asked the team heading off to look for water what they were doing, and they replied: "We're exploring the island with a ship and crew to establish if it's suitable for a settlement". If the same question to those leaving to look for building materials in the woods, or those about to fish in the bay, elicited the same response, then it would be difficult and confusing to understand the various activities being tackled. The nature of fractal recursivity in exploration supports the ability to define and plan activities at each level in sufficient detail for the person coordinating at that level, whether that involves coordinating only their own activities or those of others as well.

Enough with Analogies, Let's Talk About Testing

As with an expedition, in software testing our activities will also be defined by an overall mission. This might be summarised at a very high level, such as, for a certain software product:

"Explore this product using the skills of these development and testing teams, the budget available for tools and hardware, and the time available, to establish how well the software delivers our target value to the relevant beneficiaries"

It's a bit vague and high level, however a statement like this could validly be used to identify the responsibilities of a testing group within an organisation. At a lower level, within the software development process on this product, a tester may be working on a user story. They will probably have agreed some target acceptance criteria to guide their testing of the story, with the aim of obtaining information on how well the software meets those criteria. Again the testing at this level can be expressed in the form of a statement defining the testing mission:-

"Explore the new feature and areas potentially impacted by it using my test environment, tools and knowledge and about two weeks of the sprint to assess the new behaviour in relation to the acceptance criteria, risks and assumptions identified during elaboration"

Within the testing work on that story the tester might define a series of charters covering the areas that they intend to explore, such as this one:

"Explore the inputs of the configuration screen using a range of valid and invalid inputs to identify functional gaps in the validation"

This is the level at which exploratory testers might expect to define a set of 'test charters'. This level is appropriate for a tester to define and break down a piece of testing sufficiently to differentiate between their testing tasks, plan how they will cover the testing at hand and manage their testing in a structured way.

In her book Elisabeth Hendrickson provides some excellent examples of test charters that are too narrow in focus to be useful to a tester in planning their activities. Statements at a lower level, such as this:

"Explore the date of birth field with the value 29/2/2013 to test how it handles invalid dates"

would be more appropriately considered tests or test actions than charters. Similarly, Hendrickson rightly points out that charters at a higher level are too broad to be useful as testing charters. I agree, however as I've shown above I think that it is possible to define testing activities at many levels that we might not necessarily perceive as 'charters', yet which can be expressed in a similar way. Each level is made up of a series of activities at a lower level that can themselves be expressed by an exploratory mission statement appropriate to that level.

Each activity also has the potential to generate further activities at any level as a result of decisions arising from the information obtained. The outcome of an exploratory test could be some more tests, or possibly the creation of a new charter. In some cases I've seen exploratory tests result in entirely new user stories or testing activities at the equivalent level due to the serious ramifications of the information discovered.

So is it...?

Generally with any model there are two important questions to assess its value:-

  • Is it valid?
  • Is it useful?

In terms of validity, I hope that I've presented a reasonable case here for a fractal model of exploration. All models are flawed and this one is no exception - there are limits to its applicability - however, as I stated at the outset, I've had good cause to reflect on the original idea and I think that it stands up.

In terms of whether it is useful, that is less clear. It's certainly not a model that I refer to on a daily basis, however I do refer to the idea when introducing exploratory ideas to new testers in my organisation. I think that the value for me is in demonstrating that an exploratory approach can be applied equally at many levels. Exploration in testing is not limited to executing tests through charters, with all of the rigid structures of boilerplate test strategies and fixed definitions around test stages and non-functional testing still applied around them. Defining test strategy can equally be an exploratory activity, whereby new testing approaches and methods are introduced as a result of the discoveries made.

Rather than prescribing the exact approaches that will be taken in our high level test planning, I favour an approach of considering the high level testing activities as a set of overriding test missions which are the responsibility of teams or individuals to deliver. It is not up to the test manager to dictate how these missions are to be completed, as long as there is a clear understanding of the area, resources and information targeted. As long as we have sufficient coverage in the scope and responsibility of the defined activities at each level, then the focus of test strategy and planning moves from predefining each testing activity to helping the team to obtain the skills and resources that they need to carry out their missions.

Wednesday, 23 September 2015

Learning Letters

Being a parent furnishes you with the wonderful opportunity of watching your children learn. As my older children have developed I've found that their range of learning subjects has quickly progressed to include things that are unfamiliar to me, mainly things that have been introduced to the world since I was at school. This could be daunting, in that my children get to see the limitations in my own knowledge (the illusion of parental infallibility is easily shattered, for example through my cumbersome attempts at playing Minecraft). Nevertheless, I prefer to see it as an exciting opportunity to learn myself and to share with them the joy of learning new things.

Making Sense

One of the most interesting aspects of watching children learn comes when they start to learn how to read letters and numbers. Sometimes it takes seeing a system through the eyes of someone trying to learn it to expose inherent flaws that aren't apparent to a person more familiar with it. Watching my children attempt to learn the symbols that make up their letters and numbers really brought home to me some of the illogical and unintuitive problems in our common symbology.

A great example happened recently with my middle son. We'd spent time learning all of his letters in a picture book, to the extent that he could recognise each letter without the pictures. I was somewhat surprised when I presented him with a pre-school story book and he couldn't identify the letter 'a'. When I pointed it out to him his response was even more surprising - he said "That's not an 'a'". I looked at the page and realised that he was quite right.

The 'a' in his story book was the double-storey 'a' used in most printed fonts, whereas the one in his letters book was the single-storey 'ɑ' that children are taught to write. How could I expect him to know that the extension over the top of the letter here was meaningless, when in other pairs of letters a much smaller difference has a profound significance?

Another thing that all of my children have found hugely confusing is when characters differ only through reflection or orientation. 6 and 9, for example, can be very confusing, particularly when working with children's games such as bricks and cards which can be rotated in the hand. p, q, b and d are similarly often the last ones to be confidently learned.

And don't get me started on equivalent characters such as upper and lower case. So P is a capitalised p, S is a capitalised s but a capital q is Q, what?

When you consider a child's learning journey it is hardly surprising that they get confused. We spend their very early years teaching them how to identify shapes by their properties, irrespective of their position:

  • a rectangle oriented horizontally or vertically is still a rectangle
  • a shape with three sides and three vertices is a triangle, irrespective of the relative lengths of its sides.

Then we introduce a language and numbering system using a series of symbols where properties are far less relevant. Characters with completely different properties can represent the same letter, and we can change a character into another one simply by rotation or reflection.

There is little logic in the system. The number of rules that we'd have to provide even to understand the basic alphabet and number characters used in English would be huge. Whilst the simple rules of character representation in learning letters may be explicitly defined for our children - through letter books and number charts and the like - the understanding of the range of different ways that the characters in our alphabet can be represented is tacit knowledge. We build up our knowledge through example and experience, incrementally building our rule set around what constitutes an 'a' or a 'q' until we have sufficient rules to read most fonts that we encounter.

Even now, on occasion, I am presented with a character in some heavily stylised font and have to examine the context in which it is used to establish the letter represented. In this situation I depend on a set of in-built heuristics based on how the symbol is presented - e.g. is it in a word with other symbols that I recognise? - to identify what it is intended to represent.

I'm now pretty sure it's a 'T', or possibly an 'F', but there's still a little uncertainty. Is the word part of a phrase or sentence that makes the meaning clear?

Now the character represented is clear. So, unthinkingly, when reading the cover of this book I've applied a series of personal heuristics to identify the letter 'T'.
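
As a toy illustration of how such heuristics narrow a character down, here is a hypothetical sketch - the glyph candidates, words and phrases are all invented:

    # Candidate letters for an ambiguous stylised glyph, narrowed first by
    # the word it appears in, then by the surrounding phrase.
    candidates = {"T", "F"}

    known_words = {"FALL", "TALL", "TOLL"}   # words recognised on sight
    word = "_ALL"                            # '_' marks the unknown glyph
    by_word = {c for c in candidates if word.replace("_", c) in known_words}
    print(by_word)                           # {'T', 'F'} - still ambiguous

    phrase = "A _ALL DARK STRANGER"          # the wider sentence context
    plausible = {"A TALL DARK STRANGER"}     # phrases that make sense
    by_phrase = {c for c in by_word if phrase.replace("_", c) in plausible}
    print(by_phrase)                         # {'T'} - context resolves it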

For the most part I am not aware of the depth of knowledge that I am tapping into when interpreting a new font or text. I would find it extremely difficult to construct, based only on my knowledge prior to seeing it, an explicit set of instructions for a human or computer to identify such a character.

Presenting our Software

I was recently reading this great set of tips for technical authors from Tom Johnson. One that really struck a chord with me was number 3 - "Developers almost always overestimate the technical abilities of their audience".

Developers often create products with a certain audience level in mind, and usually they assume the audience is a lot more familiar with the company's technology than they actually are.

As someone who works across the boundaries between development teams and support, I am very familiar with this phenomenon. As we develop our software we unknowingly build rich and complex layers of tacit knowledge across the development team as we work with the system, knowledge which we then rely on during development activities.

When working with our query system, for example, there is a wealth of knowledge within the relevant development team around the shapes of queries and how they are likely to execute against certain data structures. Some queries may be a natural fit for our software and execute scalably in parallel across machines; others may force more restricted execution paths due to the SQL syntax used and the resulting stages of data manipulation required. These models, built up over time, support a level of understanding which rarely pervades beyond the walls of our development offices. When our customers first start to create and execute their queries they are typically not considering these things. Yes, they may start to build up their knowledge should a query not meet their expectations, working through explain plans and query documentation, or with a consultant, to better understand the system. In my experience, though, this type of activity is most often based around solving a specific problem rather than constructing a deep knowledge of query execution behaviour.

Working with my support teams helps to maintain perspective on the levels of product expertise that exist among our user communities. This is not to say that we don't have very capable users; it is simply that developing and testing a product affords us a level of understanding such that expert knowledge becomes second nature, and it can be hard not to code and test from this position of elevated knowledge. More than once I've seen a referral on system behaviour from the support team to a development group met with an initial level of surprise that the user is attempting to use the system in the manner described. With an open interface such as SQL providing high levels of flexibility over use, this is somewhat inevitable. Given that some SQL is generated by other applications rather than through direct input, we can't necessarily rely on the SQL being sensibly structured, let alone structured in the manner that our system prefers.

Making Tea

When I was at school a teacher gave my class a fascinating exercise - we had to describe how to make a cup of tea to an alien unfamiliar with Earth (who spoke perfect English, obviously). The teacher then went on to highlight our mistaken assumptions, such as expecting the alien to know how to 'put the kettle on', and the possibly amusing outcomes of such an instruction.

Naturally we wouldn't expect testers to have to work from such an extreme starting point of user experience. We do, however, probably want to maintain awareness of our own levels of tacit knowledge and try to factor this in when testing and documenting the system. For me it is about looking for gaps or inconsistencies in the feature set where we might unknowingly be glossing over the problems through our own knowledge.

  • Are there inputs that could be considered equivalent by the user yet yield different results? SQL is awash with apparently equivalent inputs that can yield different results; for example, the difference between 1 and 1.0 might appear trivial, however they can result in different data types in the system, with implications for query performance. Similarly, the difference between a straight quote character and its typographic 'smart' equivalent can be as trivial to the user as the editor in which they typed their text, however it can make a huge difference if the text is copied into an application input (see the short sketch after this list).
  • Are there very different features or workflows with only subtle differences in labelling? I was recently using a web system where the "Resources" tab took me to a completely different location to the "User Resources" link in the side menu. I had taken them to be the same and so failed to find the content I was looking for.
  • Are abbreviations or icons used without labelling or explanation? On an internal system that I recently started using there are two areas which are referred to by three letter acronyms, one of which has the same characters as the other with the 2nd and 3rd letters reversed. My team and I still struggle to know which one we are referring to in conversation. In this post I recount a situation where my lack of familiarity with the 'post' icon commonly used in Android resulted in an embarrassing mistake, and my rejection of a popular blogging app as a result.
  • Is team specific language exposed to the user in labelling or internal documentation? Internal terminology can leak out via various channels and confuse the customer. Our customers know the features according to the manuals and marketing literature, not necessarily the in-house terminology. Using team specific terms in labelling or internal facing documentation will result in inconsistency and confusion as those terms leak out via logs and support channels.
  • Is the same value referenced consistently throughout, or is terminology used interchangeably? A big personal bugbear of mine is when I register with an application entering my email address amongst other fields, and then on revisiting the site I am prompted for my "username". Hold on - I don't remember entering a username. Was I given a username? Should I check my emails and see if I was mailed a username? Or should I try my email address as that is what I used to identify my account? But surely if the site wanted my email address it would prompt for that and not my "username", wouldn't it? Unless they are storing my email address in a username field on the database and forgetting the origin of the value when prompting for credentials.
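
As a minimal illustration of the apparently equivalent characters in the first point above, these two apostrophe-like characters are entirely different to software, however similar they look to a user:

    straight = "'"        # U+0027 APOSTROPHE, as typed in a plain text editor
    curly = "\u2019"      # U+2019 RIGHT SINGLE QUOTATION MARK, the 'smart'
                          # quote substituted by many word processors

    print(straight == curly)              # False - different code points
    print("O'Brien" == "O\u2019Brien")    # False - a search or lookup using
                                          # one form will miss the other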

When exposing an interface which shares characteristics or expectations with other systems, an important consideration is whether we need to test on the basis of generic knowledge and consistent terminology rather than application specific knowledge or organisational jargon. Otherwise we risk a reaction similar to my son's on first encountering his "not an 'a'" when the software reaches the users: "that's not a username, it's an email address!"

References

John Stevenson - Tacit and Explicit Knowledge and Exploratory Testing

Bach/Bolton - Exploratory Testing 3.0

Markus Kuhn - ASCII and Unicode Quotation Marks

Wednesday, 15 July 2015

Room 101

My last post marked the 101st that I’ve published on this blog. Firstly, I’d like to say a hearty thanks to you if you’ve read any of my posts before; I’ve had a great time writing this blog so far and getting comments and feedback.

Secondly, this seems like a good time to reflect on some of the posts here, and on the nature of writing, speaking and presenting material to a professional community generally. A couple of recent events have prompted my thinking about ideas that people present, perhaps in talks or blog posts, that grow to become less representative of that person’s opinions or methods over time, unbeknownst to the folk who may still be referencing the material.

A snapshot in time

At Next Generation Testing I had the pleasure of meeting Liz Keogh, who is an influential member of the Lean/Agile community and a great proponent of the ideas of BDD. We discussed a specific talk of hers from which I’d taken inspiration, applying her idea of fractal BDD projects to a different subject to write my Fractal Exploratory Testing post. Liz admitted to me that her ideas and understanding of the projects she was referring to had moved on since giving the talk. It had been a useful model to present an idea at the time, but the talk did not reflect Liz’s current thinking based on her subsequent experiences. Liz had since moved on in her ideas, however for me her thinking was frozen in time at the point that I’d seen her present that talk.

I also had the recent opportunity to take some new testers into my team. As part of their induction I talked them through some of the principles and ideas that define us as a team. A couple of times I found myself presenting ideas that I’d written about previously, which were working well when I put the introduction slides together a couple of years earlier, but which had since fallen out of use. The ideas were current when the posts were created but had not endured long term acceptance outside of my own personal use.

Room 101

On UK television there is a show called Room 101, in which celebrity guests argue to have ideas, objects or behaviours committed to “Room 101”, which represents banishment to a place reserved for our worst nightmares. As any regular readers of my posts will know, I’m a great believer in admitting our mistakes and being honest about our failures as well as our successes as a community. Having just completed my 101st post, it seemed appropriate to publish a ‘Room 101’ list of some of the ideas that, while not my worst nightmares, maybe don’t reflect my current way of thinking, perhaps haven’t been quite so successful as I was hoping, or simply weren’t well written. So here are some of the posts that, if I’m honest with myself, are not quite as relevant or worthy of attention as I’d originally believed them to be.

Context driven cheat sheets - A Problem Halved

I’m a little gutted about this one because I truly believe in this approach and the value of it. The idea is that you generate a ‘cheat sheet’ akin to the famous one from Hendrickson, Lyndsay and Emery, but with entries specific to your own software. This worked really well in my company for a time, however I simply couldn’t sustain enthusiasm from the team in maintaining it. The additional overhead of adding entries to the cheat sheet resulted in few attempts to update it outside occasional bursts of endeavour from myself or one of the other testers. We did review the idea in a testing meeting, and everyone agreed that it was a fantastic idea, incredibly useful, and that it is sad that we can’t seem to maintain it, but being brutally honest the information in our cheat sheet is rather stale.

Internal Test Patterns - A Template for Success

This is an interesting one, as the original purpose of the post was to use test patterns as a documentation tool - to document different structures within our automated testing so that others understand them. In this light, our internal test patterns have fallen out of use: we don’t embed the pattern name into the automation structure as a rule, so we can’t easily identify the pattern used for any one test suite.

The patterns have proved useful, however, when it comes to review and training. I still refer to the test patterns, particularly the anti-patterns, when reviewing our automation approaches with the aim of improving our general structuring of automated tests. They’re simply not extensively used by others.

As a useful tangent on this subject - if you are interested in the idea of automation patterns then Dorothy Graham recently introduced me to a wiki that she is promoting which documents test automation patterns.

Skills matrix - Finding the Right Balance

I don’t use this technique any longer. I did find it useful at the time, when I was putting together a small team, however I found it so easy to manipulate the numbers to reinforce my own existing opinions on what I needed in the team that I now simply bypass the matrix and focus on the skills that I believe we are in need of.

A Difficult Read - Letting Yourself Go

I can see what I was trying to say here, but honestly, reading it back, it doesn’t read well and lacks a coherent message. Definitely my top candidate for Room 101.

If at first you don’t succeed, try a new approach

What is really interesting about the list above is that some of the ideas that haven’t worked quite as well as I’d thought seem to be the ones that I am most convinced are great ideas. Perhaps overconfidence, from having personally found them really useful, has meant that I don’t try as hard when promoting them internally, assuming that they’ll succeed on their own. Whatever the reason, trying and promoting new ideas is a core part of my work ethic, and there are almost inevitably going to be some ideas and approaches that work better than others. I strongly believe that it is still worth writing about and promoting these. As with Liz’s talk, perhaps through what Jurgen Appelo calls “The Mojito Method”, ideas shared may inspire something in others even after the originator no longer finds them valuable.

As I move into my second century of posts, I’m thinking of expanding my subject matter slightly to reflect the fact that I’m involved in a variety of areas, including testing, which are rooted in the customer interfacing disciplines of a software product company. Integrating our agile organisation into a larger, more traditional one also presents some interesting challenges which might merit mention in a post at some point. I hope that future posts are as enjoyable to write as they have been up until now. If there is the odd idea in there that doesn’t work or read quite as well as I’d hoped, I apologise, and at the same time hope that it might still prove useful to someone in some unexpected way.

If you have your own candidates for blog "Room 101" please let me know by commenting, I'd love to hear from you.

Monday, 8 June 2015

The Who, Why and What of User Story Elaboration

The Elaboration process is one of the pivotal elements of our Agile process. At the start of every user story that we work on, the individuals working on that story will get together in an elaboration discussion. The purpose of the discussion is to ensure that we have a common understanding of the story and that there are no fundamental misunderstandings of what is needed between the team members.

Typically in my product teams the discussion is captured on a whiteboard in the form of a mind map highlighting the key considerations around the story. The testers have the responsibility of writing this up in the form of acceptance criteria, risks and assumptions, which they then publish and share with a wider audience. For the most part this process works well and allows us to create a solid foundation of agreement on which to develop and test new features.
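
As an illustration, a write-up from such a discussion might look something like the following - the story and every entry in it are invented for the example:

    # A hypothetical elaboration write-up: acceptance criteria, risks and
    # assumptions captured from the whiteboard mind map for one story.
    elaboration = {
        "story": "Archive completed orders after 90 days",
        "acceptance_criteria": [
            "Orders older than 90 days are moved to the archive store",
            "Archived orders remain queryable through the reporting interface",
        ],
        "risks": [
            "Archiving runs may slow live queries while in progress",
        ],
        "assumptions": [
            "The 90 days are measured from order completion, not creation",
        ],
    }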

Occasionally, though, we've been inclined to get ahead of ourselves and delve too quickly into technical discussions around a solution before we've really taken the time to understand the problem. The technique that I present here is one that I've found useful for injecting a little structure into the discussion and ensuring the focus stays on the beneficiaries of the story and on understanding the problem.

The Cart Before the Horse

For some of the stories we encounter, the developers involved may have early visibility and a chance to start thinking about possible solutions. Early visibility of a story provides a good opportunity to think about the potential problems and risks that various approaches might present. At the same time, bringing ready baked solutions into the elaboration carries the risk of pre-empting the discussion by taking the focus into solution details too early. Occasionally I've seen elaboration meetings head far too quickly into the "here's what we're going to do" discussion before giving due attention to the originators of, and the reasons behind, the request. Even with a business stakeholder representing the needs of the users in the elaboration, the early injection of a solution idea can frame the discussion and bypass key elements of the problem.

In order to avoid this, here's a simple approach that I use in elaboration discussions to provide some structure around the session and ensure that we consider the underlying needs before committing to discussion of possible solutions.

Who

I start with a central topic on a whiteboard of the story subject and create the first mind-map 'branch' labelled 'Who'. This is the most important question for a user story: who are the beneficiaries of the story? If using an 'As a... I want... So that...' story format we should have a clear picture of the primary beneficiary, however that format contains precious little information to add meat to the bones. As a product company we have different customers fitting the 'As a' role who may have subtle differences in their approach and capabilities which the discussion should consider. Sometimes the primary target for the story is a role in one type of organisation, yet the feature may not be of value to the same role in other types of company if it is not developed with consideration for them.

Also the "As a ..." format has no concept of ancillary beneficiaries who may be relevant when discussing later criteria. Whilst stories should aim to have a primary target role, we should certainly consider other roles who may have a vested interest in the means or method implemented. We can also identify these at this stage which can help to ensure the needs of all relevant beneficiaries are considered as the criteria for success are discussed.

By revisiting the "Who" question in the elaboration we can ensure that we flesh out the characters of all interested parties, including specific examples of relevant customers or users to give depth to the nominal role from the story title. Establishing a clear concept of 'Who' at the outset ensures the discussion is anchored firmly around the people we're aiming to help and provides an excellent reference point for the subsequent discussions.

Why

The next question that goes on the board is 'Why'. This is the most important question in the whole process, however until we've established the 'Who' it is difficult to answer properly. The 'Why' is where we explore the need that the beneficiaries have, the problem that they are facing. It is incredibly easy to lose sight of the problem as solutions are discussed, to the extent that feature developments can over-deliver and miss the core agile principle of minimising work done. Having the "Why" stage allows all of the members of the discussion to understand the problem and put key problem details on the shared whiteboard as an anchor for any subsequent discussion of solutions.

A commonly referenced technique for getting to the root value of a requirement is the '5 Whys'. Whilst this is a useful technique that I've used in the past for getting to the bottom of a request where the root value is not clear to the team, I think that it is too heavyweight for most of our elaboration discussions. A single 'Why' branch on the board is enough to frame the conversation around the value that the target beneficiaries would hope to get from our development of that story.

Having 'Why' as the second step is important. David Evans and Gojko Adzic write in this post about changing the order of the 'As a... I want... So that' structure because it incorrectly places the 'what' before the 'why'. Instead they place the value statement at the front - "In order to... As a... I want". By following the process of adding branches to the elaboration board in order, we similarly give the target value the appropriate priority and focus.

What

The 'What' relates to what the user wants to be able to do. This is where we really pin down the 'I want...' items that define the acceptance criteria. The primary objective of the elaboration is to expand beyond the one-liner of the story definition. The tester's responsibility here is to capture the criteria by which we will judge whether sufficient value has been delivered to deem the story completed to an acceptable level (NB I'm intentionally avoiding the term 'done' here as I have issues with the binary nature of that status). The key here is that we are still avoiding discussing specific solutions: the 'What' should be clear but not specific to a solution.

As well as the primary feature concept we're also looking to capture other important criteria here around what the beneficiaries of the story would expect from the new feature. This might include how it should handle errors and validation, usability considerations, scale-up and scale-out considerations, and how it should behave with other processes and statuses within our system. The benefit of a mind map here is that, as later discussion throws up items not previously thought of (such as system administrator stakeholder considerations), we can revisit the earlier branches and add to them.

How

Finally, and only once the other subjects have been explored, we consider the 'How?'. Some might argue that it is inappropriate to think about solutions at all during story elaboration. I'd suggest that avoiding the subject entirely, particularly if there is a solution idea proposed, feels contrived. As I discussed in the post linked earlier, we try to look beyond criteria during the elaboration process to identify potential risks in the likely changes that will result. This identification of potential risks can help to scope the testing and ensure we give adequate time to exploration of those risks if we pursue that approach. It is only once the 'Who, Why and What' have been clarified that we have sufficient contextual information to discuss the 'How' and the consequential risks that are likely to arise in the product as a result.
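Pulling the branches together, a sketch of a finished board might look something like this - again in Python and again purely illustrative, with an invented story. The point is simply that the 'how' entries are only added once the other three branches are populated:

    # Branches are filled in order: solution talk ('how') comes last,
    # only once who, why and what are understood.
    board = {
        "story": "Purge expired records automatically",
        "who": ["DBA at a telecoms customer",
                "compliance officer (ancillary beneficiary)"],
        "why": ["Regulations require expired data to be removed predictably"],
        "what": ["An administrator can see when a purge last ran",
                 "Failures are reported without halting other scheduled jobs"],
        "how": ["Extend the existing job scheduler (risk: contention with ingest)"],
    }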

What happened to "Where?" and "When?"

These could be valid questions for the discussion, however I'd consider them optional. Both "Where?" and "When?" may be relevant depending on the context of the story:

  • Where could include whether an action is performed on a server or from a client machine
  • When could include exploring the scheduling or frequency of execution of a process

I feel that including these branches in the discussion every time would make the process feel rigid and cumbersome, so I only include them if relevant to the subject of the story.

A placeholder for a conversation

I've often heard user stories described as 'a placeholder for a conversation'. That's an excellent reflection of their role. We can't expect to achieve the level of understanding necessary to progress a development activity from a one line statement, no matter how well worded, without further conversations. The elaboration is the most important such conversation in our process. It is where we flesh out the bones of the story statement to build a clear picture of the beneficiaries of the story and the value that they are hoping to gain.

The 'Who, Why, What...' approach is a useful tool in my elaboration tool-kit to ensure we give the appropriate priority to understanding these individuals or roles and their respective needs. I've run elaborations in the past where we 'knew' what we were going to do in terms of a solution, only for this to change completely as we worked through the details of the process here. In one example the solution in mind was too risky to attempt given that we needed a solution that was portable to an existing production release with minimum impact. In others we've revisited the proposed solution on the basis of a better understanding of how well the intended user will know the inner workings of our system. Even when a solution is apparent, nothing is guaranteed in software and plans can change drastically as new information emerges. Using this technique helps to avoid coupling our acceptance criteria too closely to a specific solution, rendering them invalid if that solution needs to be reconsidered later.

Thursday, 28 May 2015

In the Real World

A phrase that you hear a lot in software circles is 'in the real world'. I've attended many conference talks and presentations since I've worked in Software Testing. Typically there is opportunity for Q&A either during or after the talk - from my experience of speaking this is one of the most nerve-wracking stages of any talk, as you simply don't know what is going to come up. The phrase 'in the real world' is one that is often thrown up during this stage. The source is typically an audience member who is suggesting that the ideas or approach presented in the talk are not applicable in a real world situation. Sometimes the questioner restricts their 'real world' to their specific company or role, however I have seen those who go beyond this, claiming to represent all testers operating in real companies in their dismissal of an idea.

I've been thinking about this a lot recently, particularly in relation to Agile. On one hand we have a manifesto and a set of principles that back it up. Out of this have grown a number of methodologies, such as Scrum, that dictate practices. I've seen how deviation from the core practices of scrum can incite accusations of 'doing it wrong' and labels such as 'Scrumbut' and 'Cargo cult'. At the same time, one of the key principles of Agile is to continuously review and improve as a team. Any improvements will inevitably be driven by context and our own 'real world' and will therefore involve tailoring our approach to our specific needs - so how do we tell the difference between valid, pragmatic, context-based augmentation of our process and a scrumbut-esque gap in our Agile adoption?

Evolution is good

I think that any approach will require some modification in its application to a real world context. In my talk at EuroSTAR 2011, and a few others since, one of the key themes that I focussed on is how short time-boxed iterations and regular review allowed teams to evolve over time to yield massive improvements in their development activities. I strongly believe that this is the greatest single benefit of an iterative approach to software development. At the same time any deviations from a strict adherence to an approach, scrum in our case, can yield criticism of 'doing it wrong'.

Measuring up

With rather fortuitous timing - as I was noting down the initial ideas for this post - one of our team, John, raised a point in our last retrospective questioning how well we compare to the 12 Agile principles. He wanted to highlight the fact that years of retrospectives and general familiarity with our process could have resulted in our taking our eyes off the ball in terms of working towards the Agile principles. This was a great reminder for me of how quickly time passes. It is easy to forget the importance of reviewing not only with the aim of improving within your own context, via scrum retrospectives, but also taking the time to measure yourself against the principles of the approach that you follow.

As a result we decided to have a review of the Agile principles to discuss how we were doing and whether there were areas we wanted to revisit. John arranged a session where we reviewed the principles and discussed which we felt we were delivering well on, which we could do better at, and whether there were any that we felt might need some valid amendment to apply to our situation.

The Agile Principles

For anyone not familiar with them, the 12 Agile principles are documented here. Let's examine each in turn.

  • Our highest priority is to satisfy the customer through early and continuous delivery of valuable software.

    There is an interesting assumption embedded in this principle and that is that the customer will be satisfied with early and continuous delivery of software. For some contexts this assumption is perfectly valid, however for a product company delivering into blue chip organisations I seriously question its applicability. Installing a new version of our software into a production environment is a significant undertaking for many of our customers. What they really want is to install a robust and stable piece of software that they can build and run their business processes around without disruption. For my team a more appropriate principle would be one around providing valuable new functionality through frequent iterations, whilst ensuring the final release delivery satisfies the contracts established through previous releases.

  • Welcome changing requirements, even late in development. Agile processes harness change for the customer's competitive advantage.

    This is an interesting one when applied to a scrum approach. The principle states that we welcome changing requirements, however many scrum teams operate with a policy of not changing scope within a sprint, in our case 4 weeks. If the principle applied to scrum relates to having different requirements on a sprint by sprint basis, the principle is inherent in the process, as we aim for a level of completion and replanning at each iteration that allows for changes. Given that we're delivering working software with each iteration I would not describe such changes as being 'late in development'. For me a 'late in development' change in scrum would be within the sprint, thereby positioning this principle somewhat contrary to the scrum process. For our team we aim to minimise the changes within a sprint to reduce context switching, but we will be flexible to changes in scope if there is a clear business need. I think that this is a case of a principle that works well when contrasted with the more common 'waterfall' based approaches against which it was clearly written. For a team who have grown up around an iterative approach, the concept of 'late in development' may differ from the original intention of the principle, to the extent that the wording of the principle might need reviewing as our perspectives on what constitutes a development lifecycle change.

  • Deliver working software frequently, from a couple of weeks to a couple of months, with a preference to the shorter timescale.

    This one is core for us, and I imagine for most agile teams. Maintaining a position of working software ensures that we never stray too far from being able to deliver the software, which reduces the likelihood of significant deadline increases due to discoveries that impact releasability.

  • Business people and developers must work together daily throughout the project.

    Is this true? Gojko Adzic once wrote a great piece on the mythical product owner. Can we really expect to have business decision makers present in our scrum teams on a day to day basis? For a product team we can't realistically expect a representative of each of our customers to work in our teams, so we have to have a proxy role, and most teams aim to have a product owner here. My concern is that adding a proxy role between the development team and the customers could actually allow the team to distance themselves from understanding the customer. I think that a better principle for many is that the members of the development team work frequently with business representatives on a conversational basis to ensure that they have an excellent understanding of the business needs and can act in a proxy capacity to assess the software. As I wrote about in this post, I think that this is a role that a tester can take on in an agile team.

  • Build projects around motivated individuals. Give them the environment and support they need, and trust them to get the job done.

    This was one that we marked as 'could do better'. I think in general we have a good level of motivation in the team, however one or two felt that there was room for improvement here and that was sufficient for it to be marked as such. This is another of the principles where I felt that our assessment of our position was very much performed relative to our position as a successful agile team, where the original intention was perhaps relative to the pervasive development cultures at the time of writing. I believe that teams built around Agile approaches typically enjoy higher levels of motivation in their work than more traditional organisations due to greater levels of autonomy and collaboration, yet this is now how we measure ourselves, so the bar has been raised.

  • The most efficient and effective method of conveying information to and within a development team is face-to-face conversation.

    This was another one that we marked as 'could do better', yet again I believe that we do a great job here if we compare ourselves not to our own high standards but to the cultures that this principle was written in response to. Developers and testers don't communicate primarily through the bug tracker as they have in organisations I have worked in previously. Testing isn't a production line process which takes in requirements and software and produces test cases, bugs and metrics. Perhaps our over-reliance on emails causes concern for some, however I think we do a grand job here and easily meet the principle as it was originally intended - with high levels of interactivity and engagement between team members. In some ways the fact that some felt we could improve here was a real positive, in that again the cultural bar has been raised and we now need to measure our principles against the new expectation of highly collaborative teams and cultures.

  • Working software is the primary measure of progress.

    Yes - 100% this is us. We measure our progress based on what is finished and works. I think that this is one of those principles that really was created in response to some flawed models of tracking project progress that allowed a project to appear to be 90% complete, only for the final 10% testing phase to take as long as the rest of the project combined due to the late exposure of issues. I've heard stories of Agile adoptions where management failed to relinquish the need to focus on quantifiable measurements of progress and so misguidedly seized on story points/team velocity as some kind of comparable metric to measure progress and compare the efficiency of different teams. I'm glad to say that I'm not in one of those organisations.

  • Agile processes promote sustainable development. The sponsors, developers, and users should be able to maintain a constant pace indefinitely.

    When I started in Agile 8 years ago we went at a relentless pace. I think we did 12 sprints without a decent break, and as we pushed towards releases we had the stereotypical weekend working to meet deadlines. One time the situation was so bad that I fell out with the then manager as I had a friend visiting from another country and I was not prepared to miss their visit to go to work on a Sunday. I'm pleased to say those days are gone. We might not always achieve everything we plan in the time originally intended, but we respect the need for sustainable pace. If we don't meet our targets within the pace with which we operate, that is accepted.

  • Continuous attention to technical excellence and good design enhances agility.

    I think that Agile can be its own worst enemy in this regard, as short timeboxes can lead to myopia and 'shortcut coding' if we don't maintain a focus on this principle. I have seen the situation where, rather than risk breaking existing functionality by modifying an existing interface to a file store or database, a developer has simply written another interface (see the sketch after this list). Whilst this reduces risk to the existing code in the short term, over time it leads to brittle code and a nightmare for testers and programmers alike as the number of features that are independently affected by a change to the storage layer increases. Sustainable development does not only refer to the pace of the team but also to the quality of the code; if the quality deteriorates over time then the pace of development will slow. I completely agree here - and have seen the impact on agility that hasty design can have.

  • Simplicity--the art of maximizing the amount of work not done--is essential.

    As anyone who has read my post "Putting Your Testability Socks on" will know, "Simplicity" is also one of the testability factors. I question whether the definition of simplicity in this principle could actually contradict the Testability one, on the basis that reducing work done does not necessarily lead to simplicity in the product design. In fact it can have the opposite effect, as in the example I gave above, where taking a myopic approach of minimising the work can come at the expense of maintaining good code structure. This is clearly not what we are aiming for. For me, simplicity should not be about maximising the amount of work not done. It should be about minimising the amount of functionality in the product. It should be about minimising the number and complexity of interfaces. It should be about maximising the lines of code not present in the code base. Sometimes achieving these elements of simplicity actually requires more work. I know what the principle is driving at here, however the wording is flawed.

  • The best architectures, requirements, and designs emerge from self-organizing teams.

    I like to think that this is the case, however I have no evidence to back this claim up. If I were to take a contradictory stance I'd argue that a more accurate, but possibly more trite, definition would be that "The best architectures, requirements and designs emerge from people who know what they are doing". It is not so much the self organising team that I've seen yielding the best designs, but the teams who have grown to develop a high level of tacit knowledge around the workings of the product and the needs of the customer. In our review we struggled with this one, mainly due to the fact that it is such an absolute statement. We decided that we probably 'could do better' on the basis that some of our requirements and designs come from outside the self-organising teams, however whether this necessarily means that they are worse for that is hard to say. A possibly less trite attempt might be "The best architectures, requirements, and designs emerge from collaboration between individuals with a range of relevant expertise and experience".

  • At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behaviour accordingly.

    Which brings me neatly back to my initial thoughts - "I strongly believe that this is the greatest single benefit of an iterative approach to software development". The frequency of review and improvement in an Agile adoption supports a level of tuning of procedures that allows a team to evolve to fit whatever context it operates in. This evolution may sometimes result in practices that run contrary to the idealised models of Agile development that one finds in the syllabuses of certification courses and the training material for Agile tool vendors, however this is no bad thing, as long as we maintain a clear picture of the principles that we are working to.
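As promised above, here is a sketch of the 'another interface' anti-pattern mentioned under the technical excellence principle. It's written in Python purely for illustration - the names are invented and this is not our actual code:

    class FileStore:
        """Original storage interface, used by the first feature."""

        def write_record(self, record: dict) -> None:
            ...  # writes to the underlying file store

    class FileStoreWithRetention:
        """A near-duplicate added later, to avoid touching FileStore.

        Every change to the underlying storage must now be made, and
        tested, in two places - and the duplication grows with each
        feature that takes the same shortcut.
        """

        def write_record(self, record: dict, retention_days: int) -> None:
            ...  # almost identical write path, maintained separately

Consolidating behind a single interface costs more up front, but it keeps the change surface, and the testing burden, constant as features accumulate.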

The Beauty of Principles

"In the real world" sometimes it pays to review what we do in light of a more idealised viewpoint. Whilst we might decide that all of the practices of our chosen development methodology are suitable, by reviewing ourselves in the context of the principles we can see if we are straying from the core intentions of that approach. Whether I agree with them all or not, I like the agile principles and the fact that they exist. I find that a set of principles, as with the "Set of Principles for Test Automation" that I created to guide our automation, provides clear guidance without imposing too rigid a structure or being too woolly and trite (company mission statement anyone?). What we do need to do is ensure that those principles are maintained to ensure they stay relevant. With the Agile principles I feel that, given the massive impact that agile has had on development teams these principles may now benefit from being revisited in light of the fact that agile has itself moved the goalposts in terms of how these principles are presented.

Image: https://www.flickr.com/photos/kevinmgill/6181049228

Tuesday, 28 April 2015

A Cultural Fit

Earlier this month I attended the third MEWT (Midlands Exploratory Workshop in Testing), organised by Richard Bradshaw (@FriendlyTester), Bill Matthews (@bill_matthews) and Simon Knight (@sjpknight), and facilitated by Vernon Richards (@TesterFromLeic). The day brought together 16 testers of various backgrounds and experience working across the Midlands to share talks and discuss ideas in an open environment of trust. As anyone who has read The Facebook Effect will know, I place a huge value on groups such as this where we can share ideas without fear of exposing our faults to public scrutiny. The day did not disappoint - the group dynamic was fantastic. The theme of the day was communication and we discussed a range of subjects all with that common thread, sometimes at a very deep and personal level.

One of the topics that came up during Daniel Billing's (@TheTestDoctor) heartfelt talk on geeks and personality types was hiring. It was mentioned that many folks, Rob Lambert at New Voice Media and myself included, have a strong focus on cultural fit when we are hiring. Anna Baik (@testerab) made a salient point, speaking from her perspective as a woman in tech, that when hiring for cultural fit we need to be clear that we're not simply hiring to match a stereotype and using cultural fit as an excuse. Common interests in beer and football do not constitute 'cultural fit'.

I tweeted this sentiment, but as James Bach (@jamesmarcusbach) rightly pointed out, the tweet lacked the group context, and Anne-Marie Charrett (@Charrett) suggested it is important enough to expand upon, so I'll attempt to do so here.

I am happy to say that I place a strong emphasis on cultural fit when I am hiring. When I say this I have absolute clarity on what I mean, and what I don't mean. Anna's comments provided an excellent reminder that others may interpret this in a completely different light, such that one person's cultural fit may be another's excuse to restrict their hiring to favour certain individuals - men, or people in a certain age bracket, for example. Let me make it as clear as I can.

What cultural fit is

When I say that I place a strong emphasis on cultural fit when hiring - I mean that I look for people who, in joining my company, will not undermine the work culture that we have worked to establish. Our cultural principles are the cornerstone of our successful ongoing product development and we work very hard to ensure that new team members will contribute to and build on them. Some of the characteristics that I look for, which I think are key to this, are:

  • A desire to collaborate and a focus on the importance of team working
  • Tactful appreciation of the importance of a focussed working environment without too many distractions
  • A willingness to take on a variety of tasks if necessary to achieve the team's goals
  • A focus on personal skill and expertise over adherence to standards or certification
  • A focus on delivering working software over delivery of a successful process
  • An appreciation that the individuals filling the range of roles required to produce software make an equal contribution to the overall effort
  • A focus on solving problems over apportioning blame under the guise of establishing root cause

Some of these are rooted in our agile approach, some are more specific to our own style of working. Not all of these things would necessarily form the cultural basis of all successful teams. Each team is different, but in general our office tends to be quiet with high levels of focussed working - other offices may rely more on camaraderie and a 'buzz' of activity to maintain their team dynamic. I once attended an excellent talk by Tom Howlett describing how he achieved great success with a completely virtual team formed of highly focussed individuals all working from home. The dynamic of that team and the resulting recruitment focus could be very different from mine and still deliver great success.

What I do believe is that, for whatever team dynamic we are aiming for, it is possible to recruit a wide variety of people in terms of personal interests, gender, age and background whilst still focusing on preserving your cultural ideals.

What cultural fit is not

Years ago, when I was working for a different company, I was helping my manager with recruiting. At the end of an interview we were discussing the candidate, and he asked me whether I thought that the candidate might not be 'a good fit'. Without him saying it, or even possibly admitting it to himself, I knew what he was implying. We had a team of white men and he was worried about how the fact that the candidate was a woman of Indian origin would affect the existing team. Our team was a comfortable enclave of white maleness who discussed football on Monday mornings and went to the pub on Friday lunchtimes. My response was clear - we would not consider the candidate's gender or ethnicity as a basis for their suitability for the role, because:

  1. As far as I was aware it was illegal, and
  2. I was not prepared to work for a company that did so and would have to leave

The manager's comments demonstrate how easy it is to confuse cultural fit with hiring to stereotype. The implication was that someone from a background that was a closer match to the existing team would constitute a 'better fit' with the team. As it happened, on that occasion I did not feel that the lady was the best candidate, mainly due to what I saw as our inability to support and train her at a very early stage in her career. I stated my belief that we would be letting her down by hiring her, as we would anyone at a similar stage in their career, as our culture was simply not supportive enough at the time. (It goes without saying that the manager was totally wrong in their concern as, after taking responsibility for hiring, I went on to build out the team with a mix of genders and backgrounds and the team dynamic only grew in strength as a result.)

Our team habits and common interests are nothing to do with our work culture as I mean it when I target 'cultural fit'. Our culture was that we were a competent, collaborative and highly professional small team who used daily face to face communication to self-organise. If hiring a new candidate meant that we had to adapt our team or personal habits slightly to welcome and accommodate them as a member of that team and that culture, then it was our responsibility to do so.

Hiring for cultural fit is not about finding people who conform to a stereotype that matches the people that already work in your organisation to improve the likelihood of them 'fitting in'. It is about finding people who want to work in a style that is consistent with the values of your team, however those values should not be such that they exclude capable candidates based on their gender, age or background.

It is hard

Hiring with a focus on a good cultural fit is difficult. One reason for this that I've encountered is that it is much harder for recruiters to understand a set of cultural ideals than it is a set of technical skills and certifications. Even the good recruiters that I work with will naturally focus on technical skills that allow easy CV searching over finding someone with the personality to integrate well with our team. I can picture the frustration on the other end of the email or phone when a candidate that looks great in terms of technical fit is rejected because I think that they will disrupt our team dynamic.

You could argue that I and others like me are being too fussy in focusing so much on the 'softer' skills. After all, at the end of the day, as long as someone can get the job done then surely that is enough. For some organisations that may be the case, however I have seen the effects that bringing someone who is a poor fit into a team can have. I have seen a team which once enjoyed open and professional discussion withdraw into themselves because a new recruit took a slightly too aggressive approach to getting their point across. I've seen a previously confident tester grimacing as they have to deal with a new programmer who, despite their false assertions to the contrary, doesn't really respect the testing role and treats testers as second-class citizens.

You could also argue that perhaps our team culture is such that it naturally excludes individuals from certain backgrounds. This is a difficult one, as certainly the values in some organisations will run contrary to those that we operate on. Given that many organisations operate across regional and national borders, I think this is more an issue of experience and history than of background. I take the approach of clearly explaining in the interview process what makes us tick as a team, and how that may be frustrating for someone accustomed to a different way of working. If a candidate is happy to accept these differences, can appreciate the potential benefits and immerse themselves in a new environment, then I'm more than happy to have them on board.

On discussing the subject with Rob Lambert he made a great observation that perhaps it is the use of the phrase 'cultural fit' that could be overloaded.

"We've started saying "team values" now - as the word cultural tends to mean "ethnicity" or "gender" to many people - and this is not what we mean."

I think this is a great point and I'll probably try to use 'values' rather than 'culture' from now on to avoid the potential for misinterpretation. For me, and I'm sure many others, hiring consistently with our team values is a critical element in maintaining the team dynamics that allow us to deliver great work in a supportive environment. What it is not is an excuse to target our hiring around a stereotypical norm to the exclusion of the wealth of individuals from a variety of backgrounds who would augment and enrich our culture if simply given the opportunity.
