Telling not asking

One common anti-pattern in software development, which we sometimes encounter as part of this elaboration process, is when the initial requirement is provided in the form of an already-defined solution. It is a characteristic of our product market that both our outbound product team and our customers have a good understanding of the technologies relevant to the environment in which our software operates. Because of this, requirements tend to be delivered having already passed through some, possibly unconscious, phase of analysis on the part of the person making the request, based on their domain knowledge.
So what is the problem (i)

So what is the problem? Some of the work has been done for us, right? No. It is significantly more difficult to deliver and test a generic solution than it is to solve a defined and scoped problem. As well as relying on the assumption that the solution actually addresses the problem at hand, a lack of knowledge of the problem on which a solution is based can lead to other mistakes that have serious implications for the suitability of the final product:
- Under testing: I find that testing based on an existing solution suffers heavily from anchoring bias. Even when a tester understands that they need to test outside the boundaries of the solution domain, there is a subconscious tendency to anchor the limits of testing on the boundaries within which the solution operates. If tests are designed based on the solution domain rather than the problem domain, this can come at the expense of posing relevant questions about the scope of the problem.
- Over testing: If the solution provides a scope of use far exceeding that required to solve the problem at hand, then testing to the full extent of the solution design wastes effort in areas likely to remain untouched once the product is implemented.
- Missing the target: If the assumption that the solution design fully addresses all aspects of the problem is incorrect, then important aspects of the problem will remain unaddressed. (This is one reason why programmers can be limited in effectiveness when testing their own solutions: there is always a confirmation bias that their design resolves the problem.)
So what is the problem (ii)

Having established that trying to test solutions is not ideal, we are left with the same question: so what is the problem? As testers we have a duty to try to answer this question in order to anchor our testing scope on the appropriate domain. A very simple and effective technique, much written about in testing literature, is that of the "5 Whys" or "Popping the why stack" (which provides us with a wonderful spoonerism and a great title for a blog post). I won't revisit the details and origins of the technique here, as they are well covered elsewhere, but I did encounter an excellent example in my company the other day which I felt illustrated the technique beautifully. The story title as originally delivered to the team read something like "The ability to plan a query against X data partitions in 5 seconds", where planning is an internal phase of our querying process, and X was a big enough number for this to be a significant challenge. It was immediately apparent that this was seen as a solution to a bigger problem, so we questioned:
Why#1: "Why planning in 5 seconds?"

Answer#1: "So that this customer query can run in 12 seconds."

OK, so now we have some customer-facing behaviour, but still a fairly arbitrary target.

Why#2: "Why does this need to run in 12 seconds?"

Answer#2: "The customer wants to be able to support 5 users getting their query results in a minute, so has targeted 12 seconds per query."

OK, so now we have value and a reason for the target: supporting the required level of system users. We've probably gone far enough with the whys (it doesn't always take 5), but it is clear that the logic from the customer is flawed:

Why#3: "Why are we assuming that we can only run one query at a time? Would it be acceptable for queries to take slightly longer but run in parallel, so that 5 complete in 1 minute?"

So we have a new story, and a new target to develop and test towards. As it turns out, we achieved planning in 5 seconds. Here is the critical part though: we also identified and resolved some resource contention issues that would have prevented 5 queries from running in 1 minute had we just focussed on the original target. I know that in some software developments it is hard enough getting requirements at all, so it seems counter-intuitive to start challenging them. Hopefully this case shows how, by using a simple technique, it is possible to work back to the root stakeholder value in a requirement and ensure that the problem, rather than the solution, forms the basis of the testing effort. (BTW - I try to avoid product promotion, this is an independent blog, but I also try to avoid anything disparaging, so if you are thinking that 12 seconds is a long time for a query, please bear in mind we're talking about tables containing trillions of records, so querying in seconds is no mean feat.)

Copyright (c) Adam Knight 2011 a-sisyphean-task.blogspot.com Twitter: adampknight
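The throughput reasoning behind Why#3 can be sketched in a few lines. This is purely an illustrative toy, not our product's query engine: `run_query` is a hypothetical stand-in that just sleeps, and the times are scaled down (0.1 seconds standing in for 12) so it runs instantly. The point it demonstrates is that "5 users get results within a minute" only implies "12 seconds per query" if queries run one at a time; run concurrently, each query could even take longer and the requirement would still hold.

```python
# Toy model of the sequential vs parallel throughput argument.
# QUERY_TIME is a scaled-down stand-in for the customer's 12-second query.
import time
from concurrent.futures import ThreadPoolExecutor

QUERY_TIME = 0.1   # stand-in for 12 seconds
NUM_USERS = 5      # "5 users want results within a minute"

def run_query(user):
    time.sleep(QUERY_TIME)  # simulate a query executing
    return user

# Sequential: total wall time is roughly NUM_USERS * QUERY_TIME,
# which is the assumption behind the "12 seconds per query" target.
start = time.perf_counter()
for u in range(NUM_USERS):
    run_query(u)
sequential = time.perf_counter() - start

# Parallel: total wall time is roughly QUERY_TIME plus overhead,
# so individual queries could be slower and still satisfy the user.
start = time.perf_counter()
with ThreadPoolExecutor(max_workers=NUM_USERS) as pool:
    list(pool.map(run_query, range(NUM_USERS)))
parallel = time.perf_counter() - start

print(f"sequential: {sequential:.2f}s, parallel: {parallel:.2f}s")
```

Of course, as the resource contention issues we found show, real parallel queries are not this free: the sketch assumes queries do not compete for resources, which is exactly the assumption that testing against the real problem exposed.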