Wednesday, 4 November 2015

The Living Test Strategy

What is a test strategy? It’s not necessarily as easy a question to answer as it may have been a few years ago. In previous testing roles I used to be able to explain clearly what a test strategy was - it was a document. The test strategy was the document that I created at the outset of a long development project with the purpose of defining and agreeing the approach that the testers on the project would take in testing each requirement and area of the software under test. I knew exactly how the document was structured, as it looked almost exactly the same each time that I produced it. Each time we would undergo the ritual dance of review, amendment, re-publication and eventual sign-off with development, project and/or product managers, sufficient to convince all concerned that we were taking the testing seriously.

The value of the strategy from a testing perspective was limited. The majority of the content of the strategy would be boilerplate material that was transferable, not only from one project to the next but also possibly from one product/company to the next, without significant modification. Without the strategy document the testing performed would have been much the same. Often the testing work that was carried out was done not because the test strategy defined that we should do it, but because as testers we felt that it was a good idea.

I have many examples of where the approach I’ve taken deviated from the defined strategy, a couple of the better ones are:

  • On one data analytics engine product the developers created an in-house database comparison tool which I adopted and used to compare data sets through various load/append/update processes to validate the integrity of our indexed fields - this was never defined in the test strategy but proved invaluable in exposing issues in the data maintenance processes
  • On a primarily test script driven implementation project we adopted a phase of initial exploration to assess the stability of each new release prior to progressing into the scripted test execution stages in order to save time working through test scripts against sub-par releases from the supplier

The important point in these examples was that, whilst a strategy was defined and agreed at the start of the project, decisions on the approach taken were made later to increase the value of testing. These decisions were made as a result of the discoveries that were made as the testing progressed. Whilst overall the approach taken would tie in with the approach defined in the strategy document, the actual approach had less to do with the strategy and more to do with the experiences of myself and my colleagues.

In my recent work I have moved away from producing a test strategy document. Aside from the limited value I found in the documents I’d created previously, as I’ve moved more explicitly to a testing approach built around exploration, establishing relevant tests from needs that arise throughout an agile, sprint-based process, the creation of a test strategy document in advance smacks of futility. This creates something of an information vacuum when it comes to discussing the testing approach, and one that some might feel uncomfortable with. If we dispense with a test strategy document, where does our strategy reside? How do we define what we do and how we test? In order to explore those questions further I’m going to look at what I see as the two different uses of the test strategy and how these might be addressed in an agile organisation built on foundations of exploration: defining the strategy for the team, and explaining the strategy to others.

Picking the Right Team

Alex Ferguson, Arsene Wenger, Brian Clough, Steve Hansen, John Madden, Vince Lombardi - these names might be more familiar to some than others, but they are all highly successful sports managers. In achieving the huge successes that each one has, I doubt very much that any of these, or any other top sports manager, has sat his team down with a 40-page Word document and gone through a multi-phase review process until the entire team, coaching staff and board of directors are happy to start a game. Certainly a strategy is communicated and shared prior to each game through team training and team discussions. The strategy of the successful manager starts way before that though - the strategy is encapsulated in the team itself. The starting point of the strategy lies in the players that are in those training sessions and team talks to start with. The top managers will hire and construct their teams based on the skills of the individual players involved and the need to fill the squad with the breadth and depth of skills to be successful. If a manager wants to play fast, one-touch attacking football then he will hire differently to one that wants to stifle play and soak up pressure.

I see hiring in testing as similarly an element of the strategy (I refer here specifically to testing, however this is just as valid for development overall). The structure of the team plays a massive part in the strategy. There is no point, for example, in hiring testers who relish the predictable execution of scripted tests if you want to promote an exploratory culture. Similarly, if you want a high level of predictability and rigorous documentation then filling your team with headstrong and expressive testers with a disdain for filling in documentation is going to be counter-productive. I’ve been fortunate to avoid too many attempts to enforce ‘top down’ changes in approach onto teams that were hired into a very different culture, however when I have seen it done I’ve seen high levels of friction and resistance - the team was simply not the right one for the new strategy.

In 2013 I guest wrote a piece for Rob Lambert’s blog on ‘T-shaped testers and square shaped teams’. One thing that was implied in that piece, but perhaps not made explicit, was that the creation of the ‘square shaped team’ is a Test Strategy defining activity. For me, testing strategy starts with the testing skills of the individuals that we hire into our teams. As a result, boilerplate, factory specifications have as little place in my hiring process as they do elsewhere in our development processes. Just as each hire into a sports team will be made based on the abilities of the existing players and the areas where skills shortages exist, so each hire into my testing team is based on complementing and reinforcing the skills already present, to create a team that has the capability of delivering the approach that our strategy targets.

Every Player has a Role

Getting the right skills and attitudes in the team is a critical starting point in delivering strategy. Nevertheless, as many star-filled sports teams have demonstrated, having great players does not necessarily guarantee success. Just as important an element in a successful strategy is ensuring that the roles that each member of the team plays are appropriate for the team to cover all of its responsibilities. If you put your star attacking players in defensive roles, or fail to ensure that each player knows their responsibilities when defending, then the result will be a lot of confusion.

A second critical element of a team strategy is therefore in ensuring that the responsibilities of the individuals and teams are understood. In my last post I looked at the nature of exploration and the multiple levels at which the approach that a tester or team is taking can be defined using charter based language. As an experiment in a group session I recently asked my team to define what they saw as their charters at a general level within the organisation and the teams they worked in. What was clear from the exercise was that, whilst we’ve not historically explicitly defined individual responsibilities, every member of the team had a strong understanding of the expectations of their role and the part they played in the overall testing activity. Individuals also naturally tended to define their charters at different levels depending on their own level of experience and responsibility, with the less experienced members of the team encapsulating their work at a lower, more detailed level than those whose experience or responsibility required a higher-level appreciation of the testing strategy.

One clear consensus coming from the team was that providing more explicit role definitions from management would be counter-productive, as new needs were constantly arising and the team approach was shifting to incorporate these. Individuals felt comfortable adjusting, sometimes through self-organising and sometimes a little more directed, but always able to shift their individual focus to incorporate new activities and responsibilities into the overall remit of the testing group. As I discussed in my last post - this ability to change and redefine approach at different levels is a characteristic of a successful exploratory approach and a key component of our testing strategy.

Explaining Strategy

So, from a team and test management viewpoint I believe that a testing strategy is encapsulated in the individuals that we have in the team and the responsibilities and charters that they fulfil. Having a strategy that is only known to the team, however, is sometimes not sufficient. Sometimes it is necessary to define testing strategy to others outside the group, and one argument for a test strategy document is that it helps to get the testing approach agreed and ‘signed off’ by other interested parties. I’d argue here that there are other, more effective means by which this aspect of test strategy can be achieved. Yes, there is a need to communicate our testing strategy outside of the testing group, however do we really need to predefine all of the testing activities to a detailed level in order to achieve this? In my experience the individuals involved in the test strategy review process found it a tiresome one, as they did not necessarily have the knowledge of testing approaches, techniques or tools to assess whether the approach was the most appropriate, or even valid. The result was therefore an inclination to refer to industry best practices and stock approaches as a means to fill the void of understanding and reduce the risk of personal culpability. “Are we adopting industry standard best practices here?” is a question that anyone with little or no understanding of a subject can rely on to provide input into a strategy review process, neatly placing the responsibility for the approach on the ‘industry standards’ and the onus onto the testing team to satisfy the implications.

I find personally that development managers and product owners would prefer not to have responsibility for understanding the finer details. What most would prefer is an overview of the testing approach at a more abstract level, leaving the details of execution to those whose job it is to understand them. To this end I’ve found that a well placed presentation summarising the testing approach for those outside the team achieves a quicker, clearer understanding of the testing strategy than reading through pages of fine detail on how each requirement is to be tested.

A shaky defence

A final reason for presenting the entire testing strategy to management in advance of testing is a more cynical one. Sometimes this is done as an attempt to protect against a ‘blaming the tester’ scenario. Some may labour under the mistaken belief that getting a test strategy signed off in advance affords some protection from subsequent blame if problems are discovered, on the basis that ‘management signed it off’. This is a false premise, though, for exactly the same reasons as above. We cannot expect other parties to have the same level of insight into the appropriate testing approach as the person creating the strategy, and therefore attempting to lay some culpability at the feet of others should that strategy prove to be flawed will have limited success.

I’d personally rather take responsibility for the strategy details through the structuring of a skilled team and maintaining flexibility of strategic choice through the process, than be restricted to a specific approach on the basis of diminishing the blame later.


Monday, 12 October 2015

Fractal Exploratory Testing Revisited

In my recent post 'Blog Post Room 101' I discussed the situation where the ideas that we present perhaps don't hit the mark with others or have the staying power that we first hoped. In contrast there are ideas that can be reinforced through our reading and subsequent experience or adoption by others, and we find them developing over time. One such idea for me is the concept of Fractal Exploratory Testing that I first wrote about in 2013.

This is an idea that I've had good cause to review recently as Janet Gregory and Lisa Crispin included it in the Exploratory Testing chapter of their "More Agile Testing" book. I was both flattered and somewhat unnerved by this, as I felt that the idea as presented in my original post was quite raw and not as fully formed as I'd have liked. What I find, however, is that the more I review the ideas I originally outlined in that post in light of my subsequent experience and other material that I've read on the subject, the more neatly the idea fits, and I'm very pleased that it was included.

An exploration within an exploration

In her excellent book on the subject of exploratory testing, 'Explore It!', Elisabeth Hendrickson uses Jefferson's letter prompting the Lewis & Clark expedition to find a navigable route across the U.S.A. as an example of a good charter. James Bach also references this expedition in his earlier work, including this paper explaining ET. I think this analogy of exploratory testing and exploration of territory is a good one, and the Lewis and Clark expedition provides a particularly good demonstration of a successfully executed exploration. Whilst comparing the mission directly to an individual test charter provides a useful analogy, an element that isn't necessarily apparent through such a comparison is the many-layered nature of exploration demonstrated by the expedition.

Lewis and Clark's overall charter was well defined, however the decisions over how to explore and what resources to use during each stage of the expedition were developed through a series of smaller explorations that formed the greater process. Lewis and Clark's charter was a high level one with a very open ended remit on hiring people and buying equipment to carry out the mission. Within that charter they undertook a variety of smaller exploratory activities, some planned, many prompted by the events and discoveries that they made during their mission. Decisions were made and refined on the basis of information gathered at each smaller stage of exploration, and Lewis and Clark pursued new avenues of exploration and changed the resources used on the basis of the discoveries made.

  • On discovering that their boat was too large to navigate further up the river they used local resources and skills to create wooden canoes
  • On encountering a fork in the river with two branches of nearly equal size they spent days exploring both branches to decide on which was the Missouri
  • They experimented with a wire-framed canoe covered in hide, trying different hides to see which, if any, was most suitable
  • On discovering no suitable hides to line the canoe they abandoned this approach and explored locally to find more wood to fashion traditional canoes

The success of Lewis and Clark's mission relied heavily upon their willingness to alter their approach as they went along. Imagine instead if Lewis and Clark had decided up front that the larger boat was the only means by which they were going to explore the river and that they would go to any lengths to achieve that goal, or if they'd decided that the wire-framed canoe was the only means by which they would navigate the smaller sections of the river - would they have been as successful? I don't believe so. The success of their mission came about in large part due to their ability to experiment and learn at each stage of their activities and redefine their approach as a result of the discoveries they were making, even if it meant completely abandoning an approach in light of evidence that it was ineffective. What Lewis and Clark were doing, in the process of undertaking one large exploration, was tackling many smaller exploratory activities. Each of these activities possessed characteristics that were apparent in the overall mission, but on a smaller level, and each was targeted towards a common goal defined by the larger mission, but was in itself distinct.

Fractal Recursivity

I believe that the exploratory activities that are undertaken as part of a larger mission exhibit characteristics which can be viewed as "Fractal Recursivity". The term fractal recursivity originated from the study of ideologies of language, and occurs when groups which share a common language differentiate themselves from 'others' based on nuances of accent. The fractal element occurs because the phenomenon can be observed at the local and regional levels just as effectively as at the national. In this paper Mary Antonia Andronis explains the core principle:

Integral to the idea of fractal recursivity is that the same oppositions that distinguish given groups from one another on larger scales can also be found within those groups. Operating on various levels, fractal recursivity can both create an identity for a given group and further divide it.

So the core idea is that characteristics that can be used to differentiate items at one level can also be applied in the same way to differentiate sub-elements of those items at a lower level.

In "Explore IT" Hendrickson outlines a simple template for an exploratory charter, and in that template identifies three primary characteristics of an exploration:-

  • An area that is targeted for exploration
  • Resources that will be used
  • The information hoped to be gained

The ability to define and differentiate activities at various levels, from the overall mission to smaller explorations within it, through reference to these characteristics is what, for me, characterises exploration as demonstrating Fractal Recursivity. Definitions in terms of these properties can allow an individual or team to understand the scope of an exploratory operation and how it differs from others, at whatever level that operation is defined. This allows the coordination of effort, ensuring that the relevant tasks are undertaken, but without predefining the actions needed to complete them.
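
To make this a little more concrete, the template can be sketched as a simple data structure (a minimal illustration of my own, not something taken from Hendrickson's book): the same three properties describe an exploration at any level, and each exploration can hold smaller explorations described in exactly the same terms.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Charter:
        """An exploration described by the three charter properties.

        The same shape applies at every level - an overall mission, a smaller
        exploration within it, or an individual test charter - which is what
        makes the model fractal.
        """
        target_area: str   # the area targeted for exploration
        resources: str     # the resources that will be used
        information: str   # the information we hope to gain
        sub_charters: List["Charter"] = field(default_factory=list)

        def add(self, charter: "Charter") -> "Charter":
            """Record a smaller exploration undertaken within this one."""
            self.sub_charters.append(charter)
            return charter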

A hypothetical example

The Lewis and Clark expedition has been well covered by greater minds than I, so let's instead look at a hypothetical example as a means to illustrate the idea further. Imagine that you are leading a mission to sail to an island which you believe to be uninhabited. Your ship has crew, equipment and rations suitable for your mission, which is:

"Explore the island with the vessel, equipment and crew to establish the suitability of the island for establishing a permanent settlement"

This is quite a broad mission, so in planning your approach you would probably break the mission down into some activities that you intend to focus on at the start:

"Explore the coast of the island with the ship and rowing boats to establish locations for a harbour"

"Explore the forest behind the beach with axes and saws to locate suitable building materials"

"Explore the valleys inland with a small team of people and digging equipment to identify sources of fresh water"

"Explore the sea around the island with fishing nets to establish whether there are sufficient fish to constitute a useful food source"

These activities all form part of the wider mission, and all have the properties that Hendrickson defines as making up a good exploratory charter - resources, a subject area and information to obtain. Whilst some activities may share some of these properties, no two will share all of them, so these smaller explorations can be differentiated from each other through their properties, just as the wider mission may be differentiated from other such missions on the same basis.

There will likely be a set of initial planned activities, however as these are being carried out, the teams will be gathering further information about the landscape and environment which may lead to further tasks.

"Find out the best route from the sheltered harbour we discovered to our preferred base camp location 2 miles up the coast using a compass and a machete"

Or they might possibly identify new risks which merit new exploration:

"Explore the inland swamp with spears to check whether those small crocodiles we saw have any potentially man eating cousins"

"Explore those ominous drumming sounds that we heard when exploring the forest using keen hearing and tip-toes to establish whether the island really was uninhabited"

The need for these activities can't necessarily be predicted before the exploration has started. The discoveries that are made through the initially planned explorations give rise to further activities which target the discovery of different information, and may require different resources to complete. The expedition is characterised by a series of explorations, some planned, some triggered by discoveries made through the process, but each with its own independent goal, information and resources, contributing to the overall charter of the expedition.

The ability to define and differentiate at each level is key to coordinating the overall expedition. Imagine if you were leading this expedition and asked the team heading off to look for water what they were doing, and they replied: "We're exploring the island with a ship and crew to establish if it's suitable for a settlement". If the same question to those leaving to look for building materials in the woods, or those about to fish in the bay, elicited the same response then it would be difficult and confusing to understand the various activities being tackled. The nature of fractal recursivity in exploration supports the ability to define and plan activities at each level, sufficiently for the person coordinating at that level, whether that involves coordinating only their own activities or those of others as well.

Enough with Analogies, Let's Talk About Testing

As with an expedition, in software testing our activities will also be defined by an overall mission. This might be summarised at a very high level, such as for a certain software product:

"Explore this product using the skills of these development and testing teams, the budget available for tools and hardware, and the time available, to establish how well the software delivers our target value to the relevant beneficiaries"

It's a bit vague and high level, however a statement at this level could validly be used to identify the responsibilities of a testing group within an organisation. At a lower level, within the software development process on this product, a tester may be working on a user story. They will probably have agreed some target acceptance criteria to guide their testing of the story with the aim of obtaining information on how well the software meets those criteria. Again the testing at this level can be expressed in the form of a statement defining the testing mission:-

"Explore the new feature and areas potentially impacted by it using my test environment, tools and knowledge and about two weeks of the sprint to assess the new behaviour in relation to the acceptance criteria, risks and assumptions identified during elaboration"

Within the testing work on that story the tester might define a series of charters covering the areas that they intend to explore whilst working on that story, such as this one:

"Explore the inputs of the configuration screen using a range of valid and invalid inputs to identify functional gaps in the validation"

This is the level at which exploratory testers might expect to define a set of 'test charters'. This level is appropriate for a tester to define and break down a piece of testing sufficiently to differentiate between their testing tasks, plan how they will cover the testing at hand and manage their testing in a structured way.

In her book Elisabeth Hendrickson provides some excellent examples of test charters that are too narrow in focus to be useful to a tester in planning their activities. Statements at a lower level, such as this:

"Explore the date of birth field with the value 29/2/2013 to test how it handles invalid dates"

would be more appropriately considered tests or test actions than charters. Similarly Hendrickson rightly points out that charters at a higher level are too broad to be useful. I agree, however as I've shown above I think that it is possible to define testing activities at many levels that we might not necessarily perceive as 'charters' yet which can be expressed in a similar way. Each level is made up of a series of activities at a lower level that can in themselves be expressed by an exploratory mission statement appropriate to that level.

Each activity also has the potential to generate further activities at any level as a result of decisions arising from the information obtained. The outcome of an exploratory test could be some more tests, or possibly the creation of a new charter. In some cases I've seen exploratory tests result in entirely new user stories or testing activities at the equivalent level due to the serious ramifications of the information discovered.
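
Continuing the Charter sketch from earlier in the post (again purely illustrative), the three statements above can be captured as the same structure at three different levels, and a discovery made while working a charter can spawn a new, hypothetical activity alongside it:

    product_mission = Charter(
        target_area="this product",
        resources="the development and testing teams, the tool and hardware budget, and the time available",
        information="how well the software delivers our target value to the relevant beneficiaries",
    )

    story_mission = product_mission.add(Charter(
        target_area="the new feature and areas potentially impacted by it",
        resources="my test environment, tools and knowledge, and about two weeks of the sprint",
        information="the new behaviour in relation to the acceptance criteria, risks and assumptions",
    ))

    story_mission.add(Charter(
        target_area="the inputs of the configuration screen",
        resources="a range of valid and invalid inputs",
        information="functional gaps in the validation",
    ))

    # A discovery with serious enough ramifications might generate a further
    # activity at the same or a higher level - a hypothetical example:
    story_mission.add(Charter(
        target_area="other screens sharing the same validation code",
        resources="the same environment and input sets",
        information="whether the gaps found above affect other features",
    ))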

So is it...?

Generally with any model there are two important questions to assess its value:-

  • Is it valid?
  • Is it useful?

In terms of validity, I hope that I've presented a reasonable case here for a fractal model of exploration. All models are flawed and this is no exception; there are limits to its applicability. However, as I stated at the outset, I've had good cause to reflect on the original idea and I think that it stands up.

In terms of whether it is useful, that is less clear. It's certainly not a model that I refer to on a daily basis, however I do refer to the idea when introducing exploratory ideas to new testers in my organisation. I think that the value for me is in demonstrating that an exploratory approach can be applied equally at many levels. Exploration in testing is not limited to executing tests through charters whilst all of the other rigid structures of boilerplate test strategies and fixed definitions around test stages and non-functional testing remain in place. Defining test strategy can equally be an exploratory activity whereby new testing approaches and methods are introduced as a result of the discoveries made. Rather than prescribing the exact approaches that will be taken in our high level test planning, I favour considering the high level testing activities as a set of overriding test missions which are the responsibility of teams or individuals to deliver. It is not up to the test manager to dictate how these missions are to be completed, as long as there is a clear understanding of the area, resources and information targeted. As long as we have sufficient coverage in the scope and responsibility of the defined activities at each level then the focus of test strategy and planning moves from predefining each testing activity to helping the team to obtain the skills and resources that they need to carry out their missions.


Wednesday, 23 September 2015

Learning Letters

Being a parent furnishes you with the wonderful opportunity of watching your children learn. As my older children have developed I've found that their range of learning subjects has progressed quickly to include things that are unfamiliar to me, mainly through having been introduced to the world since I was at school. This could be daunting in that I am exposed to my children seeing limitations in my own knowledge (the illusion of parental infallibility is easily shattered, for example through my cumbersome attempts at playing Minecraft). Nevertheless, I prefer to see it as an exciting opportunity to learn myself and share with them the joy of learning new things.

Making Sense

One of the most interesting aspects of watching children learn comes when they start to learn how to read letters and numbers. Sometimes it takes seeing a system through the experiences of another trying to learn it to expose the flaws inherent in the system that aren't apparent to a person more familiar with it. Watching my children attempt to learn the symbols that go into making up their letters and numbers really brought home to me some of the illogical and unintuitive problems in our common symbology.

A great example happened recently with my middle son. We'd spent time learning all of his letters in a picture book, to the extent that he could recognise each letter without the pictures. I was somewhat surprised when I presented him with a pre-school story book and he couldn't identify the letter 'a'. When I pointed it out to him his response was even more surprising - he said "That's not an 'a'". I looked at the page and realised that he was quite right.

The 'a' in his story book looked like this

Whereas the one in his letters book looked like this

How could I expect him to know that the extension over the top of the letter here was meaningless, when in these two letters

A much smaller difference has a profound significance.

Another one that all of my children have found hugely confusing is when different characters differ only through reflection or orientation. 6 and 9, for example, can be very confusing, particularly when working with children's games such as bricks and cards which can be rotated in hand. p, q, b and d are similarly often the last ones to be confidently learned.

And don't get me started on equivalent characters such as upper and lower case. So P is a capitalised p, S is a capitalised s but a capital q is Q, what?

When you consider a child's learning journey it is hardly surprising that they get confused. We spend their very early years teaching them how to identify shapes by their properties, irrespective of their position,

  • a rectangle oriented horizontally or vertically is still a rectangle
  • a shape with three sides and three vertices is a triangle irrespective of the relative lengths of the sides.

Then we introduce a language and numbering system using a series of symbols where properties are far less relevant. Characters with completely different properties can represent the same letter, and we can change a character into another one simply by rotation or reflection.

There is little logic in the system. The number of rules that we'd have to provide even to understand the basic alphabet and number characters used in English would be huge. Whilst the simple rules of character representation in learning letters may be explicitly defined for our children - through letter books and number charts and the like - the understanding of the range of different ways that the characters in our alphabet can be represented is tacit knowledge. We build up our knowledge through example and experience, incrementally building our rule set around what constitutes an 'a' or a 'q' until we have sufficient rules to read most fonts that we encounter.

Even now on occasion I am presented with a character such as this,

And have to examine the context in which it is used to establish the letter represented. In this situation I depend on a set of in-built heuristics based on how the symbol is presented - e.g. is it in a word with other symbols that I recognise? - to identify what it is intended to represent.

I'm now pretty sure it's a 'T', or possibly an 'F', but there's still a little uncertainty. Is the word part of a phrase or sentence that makes the meaning clear?

Now the character represented is clear. So, unthinkingly, when reading the cover of this book I've applied a series of personal heuristics to identify the letter 'T'.

For the most part I am not aware of the depth of knowledge that I am tapping into when interpreting a new font or text. I would find it extremely difficult to construct an explicit set of instructions for a human or computer to identify this character based on my knowledge prior to seeing it.

Presenting our Software

I was recently reading this great set of tips for technical authors from Tom Johnson. One that really struck a chord with me was number 3 - "Developers almost always overestimate the technical abilities of their audience".

Developers often create products with a certain audience level in mind, and usually they assume the audience is a lot more familiar with the company's technology than they actually are.

As someone who works across the boundaries between development teams and support, I have a great affinity with this phenomenon. As we develop our software we unknowingly build rich and complex layers of tacit knowledge across the development team through working with the system, knowledge that we then rely on during development activities.

When working with my query system, for example, there is a wealth of knowledge within the relevant development team around the shapes of queries and how they are likely to execute against certain data structures. Some queries may be a natural fit for our software and execute in parallel scalably across machines, others may force more restricted execution paths due to the SQL syntax used and the resulting stages of data manipulation required. These models that are built up over time support a level of understanding which rarely pervades beyond the walls of our development offices. When our customers first start to create and execute their queries they are typically not considering these things. Yes, they may start to build up their knowledge should a query not meet their expectations, working through explain plans and query documentation, or with a consultant, to better understand the system. In my experience this type of activity is most often based around solving a specific problem rather than constructing a deep knowledge of query execution behaviour.

Working with my support teams helps to maintain perspective on the levels of product expertise that exist among our user communities. This is not to say that we don't have very capable users, it is simply that developing and testing a product affords us a level of understanding such that expert knowledge becomes second nature, and it can be hard not to code and test from this position of elevated knowledge. More than once I've seen referrals on system behaviour from the support team to a development group responded to with an initial level of surprise that the user is attempting to use the system in the manner described. With an open interface such as SQL providing high levels of flexibility over use this is somewhat inevitable. Given that some SQL is generated by other applications rather than through direct input, we can't necessarily rely on sensible structuring of the SQL, let alone that it is structured in the manner that our system prefers.

Making Tea

When I was at school a teacher gave my class a fascinating exercise - we had to describe how to make a cup of tea to an alien unfamiliar with Earth (who spoke perfect English, obviously). The teacher then went on to highlight our mistaken assumptions, such as an alien knowing how to 'put the kettle on', and the possibly amusing outcomes of such an instruction.

Naturally we wouldn't expect testers to have to work from such an extreme starting point of user experience. We do, however, probably want to maintain awareness of our own levels of tacit knowledge and try to factor this in when testing and documenting the system. For me it is about looking for gaps or inconsistencies in the feature set where we might unknowingly be glossing over the problems through our own knowledge.

  • Are there inputs that the user could consider equivalent yet which yield different results? SQL is awash with apparently equivalent inputs that can yield different results; for example the difference between 1 and 1.0 might appear trivial, however they can result in different data types in the system with implications for query performance (a small sketch after this list illustrates the point). The difference between these two characters

    can be as trivial to the user as the editor in which they typed their text, however they can make a huge difference if copied into an application input.
  • Are there very different features or workflows with only subtle differences in labelling? I was recently using a web system where the "Resources" tab took me to a completely different location to the "User Resources" link in the side menu. I had taken them to be the same and so failed to find the content I was looking for.
  • Are abbreviations or icons used without labelling or explanation? On an internal system that I recently started using there are two areas which are referred to by three-letter acronyms, the second having the same characters as the first with the 2nd and 3rd letters reversed. My team and I still struggle to know which one we are referring to in conversation. In this post I recount a situation where my lack of familiarity with the 'post' icon commonly used in Android resulted in an embarrassing mistake and my rejection of a popular blogging app as a result.
  • Is team specific language exposed to the user in labelling or internal documentation? Internal terminology can leak out via various channels and confuse the customer. Our customers know the features according to the manuals and marketing literature, not necessarily the in-house terminology. Using team specific terms in labelling or internal facing documentation will result in inconsistency and confusion as those terms leak out via logs and support channels.
  • Is the same value referenced consistently throughout, or is terminology used interchangeably? A big personal bugbear of mine is when I register with an application entering my email address amongst other fields, and then on revisiting the site I am prompted for my "username". Hold on - I don't remember entering a username. Was I given a username? Should I check my emails and see if I was mailed a username? Or should I try my email address as that is what I used to identify my account? But surely if the site wanted my email address it would prompt for that and not my "username", wouldn't it? Unless they are storing my email address in a username field on the database and forgetting the origin of the value when prompting for credentials.
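
As a small sketch of the first point in the list above (illustrative only - it uses SQLite via Python's standard library rather than the query system discussed in this post), two literals that look interchangeable to a user are given different data types by the engine:

    import sqlite3

    # Two literals a user might reasonably consider equivalent are typed
    # differently by the engine, which can ripple into storage, comparisons
    # and query planning.
    con = sqlite3.connect(":memory:")
    print(con.execute("SELECT typeof(1), typeof(1.0)").fetchone())
    # -> ('integer', 'real')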

When exposing an interface which shares characteristics or expectations with other systems, an important consideration is whether we need to test on the basis of generic knowledge and consistent terminology rather than application-specific knowledge or organisational jargon. Otherwise we may risk a reaction similar to my son's on first encountering his "not an 'a'" when the software reaches the users: "that's not a username, it's an email address!"


John Stevenson - Tacit and Explicit Knowledge and Exploratory Testing

Bach/Bolton - Exploratory Testing 3.0

Markus Kuhn - ASCII and Unicode Quotation Marks