Saturday, 24 September 2011

Whopping the pie stack - demonstrating the value of questioning requirements

Over time in my current role we've made great strides in raising the profile of testing as a critical part of successful software development. A key benefit of this is that testers are an integral part of our requirement elaboration process. The testers' role here is to define and scope the problem domain in terms of acceptance criteria for the work, driving out any assumptions we are required to make to establish those criteria, along with potential risks in the initial models, a process I've detailed more in this post. We also have the opportunity to discuss early ideas on possible solution structures with the developers and question whether these provide an acceptable model on which to base a feature. The ability to question early design ideas in this way can save huge amounts of time by identifying flaws in a design before it has been developed.

Telling not asking

One anti-pattern common in software development, which we sometimes encounter as part of this elaboration process, is the initial requirement being provided in the form of an already defined solution. It is a characteristic of our product market that both our outbound product team and our customers have a good understanding of the technologies relevant to the environment in which our software operates. Because of this there is a tendency for requirements to arrive having already passed through some, possibly unconscious, phase of analysis on the part of the person making the request, based on their domain knowledge.

So what is the problem (i)

So what is the problem? Some of the work has been done for us, right? No. It is significantly more difficult to deliver and test a generic solution than it is to solve a defined and scoped problem. As well as relying upon the assumption that the solution addresses the problem at hand, a lack of knowledge of the underlying problem can lead to other mistakes with serious implications for the suitability of the final product:
  • Under Testing
  • I find that testing based on an existing solution suffers heavily from anchoring bias. Even when a tester understands that they need to test outside the boundaries of the solution domain, there is a subconscious tendency to anchor the testing limits on the boundaries within which the solution operates. If tests are designed around the solution domain rather than the problem domain, this can come at the expense of posing relevant questions about the scope of the problem.
  • Over Testing
  • If the solution provides a scope of use which far exceeds that required to solve the problem at hand, then testing to the extent of the solution design wastes effort in areas likely to remain untouched once the product is implemented.
  • Missing the target
  • If the assumption that the solution design fully addresses all aspects of the problem is incorrect, then important aspects of the problem will remain unaddressed (this is one reason why programmers may be limited in effectiveness when testing their own solutions; there is always a confirmation bias that their design resolves the problem).

So what is the problem (ii)

Having established that trying to test solutions is not ideal, we are left with the same question: so what is the problem? As testers we have a duty to try to answer this question in order to anchor our testing scope on the appropriate domain. A very simple and effective technique, much written about in testing literature, is that of the "5 Whys" or "Popping the why stack" (which provides us with a wonderful spoonerism and a great title for a blog post). I won't revisit the details and origins of the technique here, as they are well covered elsewhere, but I did encounter an excellent example in my company the other day which I felt illustrates the technique beautifully.

The story title as originally delivered to the team read something like "The ability to plan a query against X data partitions in 5 seconds", where planning is an internal phase of our querying process and X was a big enough number for this to be a significant challenge. It was immediately apparent that this was seen as a solution to a bigger problem, so we questioned:
Why#1: "Why planning in 5 seconds?"
Answer#1: "So that this customer query can run in 12 seconds"
OK, so now we have some customer-facing behaviour, but still a fairly arbitrary target.
Why#2: "Why does this need to run in 12 seconds?"
Answer#2: "The customer wants to be able to support 5 users getting their query results in a minute, so has targeted 12 seconds per query"
OK, so now we have value and a reason for this: to support the targeted system user level. We've probably gone far enough with the whys (it doesn't always take 5), but it is clear that the logic from the customer is flawed:
Why#3: "Why are we assuming that we can only run one query at a time? Would it be acceptable for queries to take slightly longer but run in parallel so that 5 complete in 1 minute?"
Answer#3: "Yes"
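The arithmetic behind Why#3 can be sketched in a few lines (a minimal illustration using the hypothetical timings from the story; in practice contention means parallel queries rarely run at full single-query speed, so any real target would need testing under load):

```python
import math

def sequential_completion(per_query_s, n_queries):
    # Time for n queries run strictly one after another.
    return per_query_s * n_queries

def parallel_completion(per_query_s, n_queries, concurrency):
    # Time for n queries run in batches of `concurrency`,
    # optimistically assuming per-query time is unaffected by contention.
    batches = math.ceil(n_queries / concurrency)
    return per_query_s * batches

# The customer's implicit model: one query at a time, so each
# must finish in 12 seconds to fit 5 queries into a minute.
print(sequential_completion(12, 5))               # 60 seconds

# The reframed target: each query may individually take far longer
# (say 45 seconds) yet all 5 still complete within the minute
# if they are allowed to run concurrently.
print(parallel_completion(45, 5, concurrency=5))  # 45 seconds
```

The point of the reframing is visible in the numbers: the 12-second figure only follows from the hidden assumption of serial execution, which is exactly the assumption Why#3 challenges.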
So we have a new story, and a new target to develop and test towards. As it turns out we achieved planning in 5 seconds. Here is the critical part though: we also identified and resolved some resource contention issues that would have prevented 5 queries from running in 1 minute had we just focussed on the original target.

I know that in some software developments it is hard enough getting requirements at all, so it seems counter-intuitive to start challenging them. Hopefully this case shows how, by using a simple technique, it is possible to work back to the root stakeholder value in a requirement and ensure that the problem, rather than the solution, forms the basis of the testing effort.

(BTW - I try to avoid product promotion, this is an independent blog, but I also try to avoid anything disparaging, so if you are thinking that 12 seconds is a long time for a query, please bear in mind we're talking about tables containing trillions of records, so querying in seconds is no mean feat.)

Copyright (c) Adam Knight 2011 Twitter: adampknight

Monday, 5 September 2011

Mind Mapping an Evolution

I've recently introduced the idea of Thread Based Exploratory Testing into my team, with the option of using spreadsheets or XMind mind maps to document exploratory threads. Mind maps are an excellent tool for documenting a thought process, and XMind is a really intuitive and flexible tool for creating and sharing these. As a demonstration I thought I'd show how I've made use of XMind mind maps to plan my EuroSTAR conference talk later in the year.

General Ideas Dump

Fishbone Diagram To Work Out Presentation Flow

If you are attending EuroSTAR I look forward to seeing you there - please say hi, I'll be the one with baby sick stains on my jacket.


Sunday, 4 September 2011

Plastering over the cracks - Why fixing bugs is missing the point

A few years ago I was fortunate enough to travel through China and take a tour up the Yangtze river, passing the new hydro-electric dam that was in the process of flooding the famous "Three Gorges". As our (government operated) tour boat navigated the giant locks up the side of the dam, our guide informed us that "the cracks that have been reported on the BBC news have been fastidiously fixed". My gut reaction to this statement was a feeling of mild panic, yet I knew that the cracks had been repaired - what was my problem? My confidence in the integrity of the dam was massively impacted by the fact that, although I knew the apparent fault had been fixed, I had no confidence that the underlying problem had been understood. I was travelling on a large body of water which was only being prevented from rushing down the valley below by a lump of concrete that a few weeks ago had cracks in it, and I had no evidence that the government had any idea why those cracks had occurred in the first place. What was to stop the fault that had caused those cracks from cropping up in another part of the dam?

Planned Failure

More recently I was reading a discussion group on the subject of what testers felt were the most useful/useless statistics to a testing process. One of the figures suggested was that of actual versus predicted bug rates, the idea being that developers predicted the likely bug rates for a development and then progress was measured on how many bugs were being detected and fixed compared to this "defect potential". I dislike this concept for many reasons, but the foremost of these is exactly the same reason that my subconscious was nagging me on the Yangtze river:

Just fixing defects is missing the point.

A bug is more than just a flaw in code. It is a problem whereby the behaviour of the system contradicts our understanding of a stakeholder's expectation of what the system should do. The key to an effective resolution is understanding that problem. Only then can the appropriate fix be implemented. I believe that the benefit that can be obtained from the resolution of a bug depends hugely on the understanding of the problem domain held by the individuals implementing and testing the fix.

Factors such as when and how the issue is resolved, and who implements and retests the fix, can impact this understanding. Even the same bug fix, applied at a different time by a different person, can have a different impact on the overall quality of the software through the identification of further issues in one of the feature domains. If the identification or resolution of issues is delayed, either in a staged testing approach or through the accumulation of bugs in a bug tracking system in an iterative process, then the chances of related bugs going undetected, or even being introduced, are higher.

While the Iron is Hot

Many factors that can influence the successful understanding of the underlying cause of a bug are impacted hugely by the timing and context in which the bug is tackled.

  • Understanding of the Problem Domain
  • It could be that a problem actually calls into question some principle underpinning the feature model, for example an assumption about the implementation environment or the workflow. A functional fix implemented as part of a bug fixing cycle may resolve the immediate bug but leave underlying flaws in the feature model unaddressed.
  • Understanding of Solution Domain
  • When a feature is being implemented, the individuals involved in that implementation will be holding a huge amount of relevant contextual information in their heads. With fast feedback cycles and quick resolution of issues, problems can be addressed while the details of the implementation are fresh in the mind of the developer, and associated issues are more likely to be identified. It could be that the most apparent resolution to a bug would compromise a related area of the feature, a fact that could be overlooked if the bug is tackled as a standalone fix.
  • Knowledge of related features
  • It is a common situation for a developer to work on a number of similar stories or features as part of a project, often using similar approaches on related features. If an issue has been identified with the functional solution implemented, then similar unidentified problems may be present in related areas that the developer has worked on.
  • Understanding of Testing Domain
  • In addition to the developers, as I discussed in this post, the tester working on a feature will have a better understanding of the testing domain when actively working on that feature area than when testing the issue cold at a later date. Addressing the retesting of the problem immediately provides the opportunity to review the testing of related features and perform further assessment of those areas, an opportunity that may not be apparent if tackling it as a point retest.

By operating with fast feedback and resolution cycles we take advantage of the increased levels of understanding of the problem, solution and testing domains affected, thereby maximising our chances of a successful resolution and the identification of related issues. If a software development process embraces the prediction, acceptance and delayed identification and resolution of issues, then many of the collateral benefits that can be gained from tackling issues in the context in which they are introduced are lost.
