Monday 29 April 2013

The Testers, the Business and Risk Perception

 

One of the sessions I most enjoyed when I attended the TestBash on 22nd March was Tony Bruce's talk on testers and negativity. In his fantastically disarming style Tony discussed why testers are sometimes seen as negative, by both themselves and the business, and whether that 'negativity' affects their lives outside work. Tony made a great point: a tester's role should be a positive one of providing information, so why is it seen as negative?

A comment that I made in the ensuing discussion, and that I think is worth expanding on, is how important the subject of perceived risk is to this scenario. I don't see testing as a negative role. Like Tony, I see testing as an information provider, furnishing the business with the information required to make important decisions. Those decisions will inevitably involve an element of risk adoption by the business, and each stakeholder in those decisions will have their own perception of the levels of risk involved. What I have seen is that situations where testers are perceived as 'negative' could be more appropriately explained as a difference between the tester's perception of the risks involved in the development and the level of risk adopted in the decisions taken by the business. If the tester disagrees with the business decision makers regarding the risks of problems such as software bugs, delays in the development process and eventually customer dissatisfaction, then this can result in negativity both from the tester and in the perception of them.

I see many reasons why testers and product owners or business decision makers may not agree on the levels of risk being taken in a product development. When we as testing professionals encounter a situation where there is a large disparity between our own acceptable risk level and that of the business as a whole, our position can feel, and appear, very negative. If this occurs, rather than assuming that we're the only ones with full visibility of the situation and belligerently sticking to our negative stance, I think we might want to ask ourselves some questions in an attempt to explain this disparity in perceived risk.

  • Am I overestimating the risks?
  • Is the business underestimating the risks?
  • Ultimately - can I put up with it?

Am I overestimating the risks?

In his book 'Risk: The Science and Politics of Fear', Dan Gardner gives an excellent explanation of how thousands of years of human evolution have formed our mental processes regarding risk.

'Our brains were simply not shaped by the world as we know it now, or even the agrarian one that preceded it. They are exclusively the creation of the Old Stone Age'

One consequence of this is that our brains make immediate risk assessments based on our ability to recall relevant experiences. This is a natural part of our 'System one' or 'gut' decision making process, which has evolved to make quick decisions in risky situations. It contrasts with the slower but more accurate processes of logical reasoning that form 'System two' or 'head' thinking. We are predisposed to consider a situation risky if we can recall similar situations where problems have arisen. That recollection can come from our own experiences or from 'stories' conveyed to us by others, a concept known as the Availability Heuristic. As Gardner points out, whilst this mechanism worked well for our ancestors in avoiding danger, it is flawed when it comes to modern society. The huge amount of information 'available' to us relating to a situation can give an unrealistic perception of the actual risks present. To paraphrase Gardner for this context, we are running around testing software with brains that are perfectly evolved for avoiding dangerous animals while hunting and gathering.

Testers' work revolves around finding problems with software. We find our own bugs, we read articles on testing and software problems and, when we meet other testers, we share stories of bugs and software problems. We will also typically spend more time investigating and examining problem behaviour than anyone else. An inevitable result is that the experiences driving our own availability heuristics will be inclined to over-emphasise the likelihood of issues. The examples most readily available to us when facing new situations are likely to be based around problems we have previously seen, so we will have a natural tendency towards higher levels of perceived risk than other roles. The business, on the other hand, don't see all of the bugs. Their experience of bugs is often masked behind figures, reports and metrics, which might convey summary information, but the personal experience of the issues encountered is absent and so, therefore, is the 'System one' perception of risk.
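
To make this concrete, here is a small thought-experiment in Python (the figures and the recall weighting are entirely invented for illustration): if our recall favours vivid failures over uneventful releases, then an estimate of risk built from what we can remember will sit well above the true base rate.

    import random

    # Purely illustrative figures: suppose 1 release in 20 genuinely goes badly.
    TRUE_FAILURE_RATE = 0.05
    RELEASES = 1000

    random.seed(1)
    history = [random.random() < TRUE_FAILURE_RATE for _ in range(RELEASES)]

    # 'System two': the base rate across everything that shipped.
    base_rate = sum(history) / len(history)

    # 'System one': recall is skewed because failures are investigated, written up
    # and retold, while quiet successes are forgotten. Model that by remembering
    # most of the failures but only a few of the successes.
    remembered = [outcome for outcome in history
                  if (outcome and random.random() < 0.9)
                  or (not outcome and random.random() < 0.1)]
    gut_estimate = sum(remembered) / len(remembered)

    print(f"Actual failure rate:              {base_rate:.1%}")
    print(f"Failure rate as we 'remember' it: {gut_estimate:.1%}")

With these made-up numbers the remembered sample makes failure look several times more likely than it actually was, which is exactly the over-estimation described above.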

From a tester's standpoint, a rather unpalatable conclusion we could draw from this is that, where we have provided excellent status information to the business, they could well be better placed than us to accurately assess the risks involved in releasing the software. This is because they are more likely to be operating in a 'System two' process of logical reasoning based on the facts, whereas we will be strongly influenced by our 'System one' processes to assume problems on the basis of prior experience.

Testers' roles and experiences will also drive their assessment of risk to be heavily based around software bugs. Business decision makers, if they are doing their jobs effectively, will have visibility of other categories of risk which must be considered when making project and release decisions: missing market opportunities, losing investor confidence and missing the chance for useful feedback are all factors outside the scope of the quality of the code.

The somewhat enlightening conclusion for testers, for me, is this - we need to be able to let go of the worry. If you have done the best job that you can to provide the business with information, and they are making a decision based on that information, then you have done your job. Understand that your manifold experiences of failures in software, both personal and second-hand, cause your gut to see problems everywhere, and you may not be best placed to accurately assess the overall risk involved in a software release. You may reject this idea, claiming that you are aware of biases yet not susceptible to them. As Gardner points out, this is a common problem:

'Psychologists have found that people not only accept the idea that other people's thinking may be biased, they tend to overestimate the extent of that bias. But almost everyone resists the notion that their own thinking may also be biased.'

Sometimes as a tester you have to identify when you've been too close to too many problems to be thinking rationally, and focus on providing the information to let others make the risk decisions. The folks making those decisions will hopefully have access to multiple information sources, in addition to the output of testing, which help to balance out the biases in the decisions being made.

 

Is the business underestimating the risks?

Of course there are two sides to every disagreement. Business decisions are made on the basis of perceived risk, which is based on the information available to the decision maker. It may be that the decision maker is actually adopting a riskier approach than they think due to poor, or poorly presented, information, or their own biases. Underestimation of potential risks by the business will also result in a differential in perceived risk between business/management and testers. In their 1988 paper on the underestimation bias in business decision making, "An Availability Bias in Professional Judgement", Laurette Dubé-Rioux and J. Edward Russo suggest that underestimation as a bias is again heavily influenced by availability, or the lack thereof.

"After evaluating such alternative explanations as category redefenition, we conclude that availability is a major cause, though possibly not the sole cause, of the underestimation bias"

In summary, their findings were that decision makers tended to group risks for which they had low visibility into catch-all categories and then underestimate the likelihood of anything in those categories occurring, with the lack of available examples of those risks proposed as the major cause of this underestimation.

If the perceived level of risk adopted by the business is based on low availability, and actually differs significantly from the real levels, then we may be able to help by providing better information to inform risk decisions. It is therefore our responsibility to convey the relevant information as clearly as possible to allow an informed decision. As the perception of risk is heavily influenced by the availability of relevant experiences or stories, it follows that the most effective mechanism for conveying risk information would be the sharing of experience rather than the presentation of raw figures. I've certainly encountered situations when testing poor quality code where simply describing a few of the issues encountered has had a much greater impact on the recipient than the presentation of bug counts. Metrics and status reports can convey a certain level of information, but when they are backed up with examples of the nature of the issues being encountered they create a much more personal response and have a much more significant impact on the recipient's perceived risk. I've been in more than one situation where a "bug story" that I conveyed to a manager was subsequently repeated by them when reporting project status externally.

In short - if you want to influence perceived risk, then start telling stories.

Before doing so, however, consider whether it will benefit the business. As I've stated above, it could be that our levels of perceived risk are unrepresentatively high compared to the actual risk, in which case telling stories of every bug found may simply result in the business moving closer to our 'System one' position and away from a more realistic assessment of the situation.

Can I put up with it?

In my list of questions at the start of the post, you may wonder why I've included 'can I put up with it?' but specifically haven't included 'how do I change things?'. The simple answer is that I believe that, whilst improving the accuracy of perceived risk may be possible, changing the level of risk adopted by the business is unlikely to be something that a tester can achieve.

In a fascinating experiment, researchers into risk behaviour placed cameras at an open level crossing and recorded the speeds of cars travelling through and the correlated risk of accident. They then cut back the trees around the crossing to improve visibility and repeated the experiment. The results revealed a huge amount about human risk adoption: due to the reduced perceived risk, drivers increased their speed on average such that the same proportion of vehicles were at risk of an accident as before, with no net safety benefit from improving visibility.

"Behaviour was compared before and after sightline enhancement achieved by the removal of quadrant obstructions. At the passive crossing, sightline enhancement resulted in the earlier preview of approach quadrants. The perceived risk of approach to this crossing appeared to be reduced, resulting in consistently higher approach speeds after sightline enhancement. This performance benefit in response to the intrinsic effect upon safety realised by sightline enhancement yielded no net safety benefit"

The implication of such results is that people adopt a predefined level of risk in a given situation and will adjust their behaviour to maintain it in the light of new information, a phenomenon known as Risk Compensation. This has had interesting consequences for car manufacturing, for example, where safety features don't result in improved safety but instead result in increased net driving speeds and riskier driving behaviour.

This also has pretty fundamental implications for software projects. Essentially, if this phenomenon applies in business, then every action taken by a testing team to improve testing and the confidence in a product will result in a change in behaviour by the business to operate faster or implement more features at the same risk level, rather than using the improvement to reduce risk. For example, in the case of introducing test automation, if automated tests are seen as providing an equivalent level of confidence to humans executing the same 'cases', then the response by the business is likely to be to drive for faster development and lower levels of manual testing on the back of the perceived confidence boost. If the improvements by the tester were driven by a disparity between their own acceptable risk levels and those of the business, the outcome is likely to prove very frustrating for them.
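
As a rough sketch of why, consider a toy 'risk budget' model (the function and the figures below are entirely hypothetical): if the business implicitly tolerates a fixed number of escaped defects per release, then halving the rate at which defects escape simply doubles the amount of change that gets shipped, and the risk per release ends up exactly where it started.

    # Toy model of risk compensation in a release process (all numbers invented).
    # Assumption: the business tolerates roughly the same expected number of
    # escaped defects per release and 'spends' testing improvements on throughput.

    def changes_at_constant_risk(escape_probability: float,
                                 tolerated_escapes: float = 2.0) -> int:
        """Changes that can ship while expected escaped defects stay constant."""
        return round(tolerated_escapes / escape_probability)

    before = changes_at_constant_risk(escape_probability=0.10)  # 1 change in 10 lets a defect escape
    after = changes_at_constant_risk(escape_probability=0.05)   # improved testing halves that

    print(f"Changes shipped per release before the improvement: {before}")  # 20
    print(f"Changes shipped per release after the improvement:  {after}")   # 40

Nothing in the model says the business is wrong to behave this way; it simply shows why a tester who hoped the improvement would lower the risk is likely to be disappointed.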

Hence the question 'can I put up with it?' - if you as a tester are at odds with your company over what constitutes acceptable risk, get used to it: it is unlikely to change, and any improvements that you make to try to address it could actually make things worse.

Knowing oneself

The understanding of risk and personal bias is a complex subject. In the testing world we need to try to ensure that our risk perception is based as much as possible on 'System two' thinking and not on 'System one' feelings driven by the availability heuristic. Avoiding such biases is hard, however in any situation we should be asking ourselves whether our position is based on evidence arising from the testing performed, or whether we simply have a 'gut' feeling that there will be problems. When considering risks, are we calling to mind particularly memorable problems from other projects that could be getting in the way of a realistic assessment?

This problem is not limited to software testing. A 2005 paper discussing the problem of decision making in the medical profession provided these key guidelines for avoiding the availability heuristic:

  • Be aware of base rates (more appropriate for medical diagnosis, however the prevalence of issues arising from live use is an important yardstick)
  • Consider whether data are truly relevant rather than just salient 
  • Seek reasons why your decisions may be wrong and entertain alternative hypotheses
  • Ask questions that would disprove, rather than confirm, your current hypothesis
  • Remember you are wrong more often than you think

I think that these provide sound general advice. A feeling of constant negativity is not a healthy or sustainable situation for any role. By being aware of your biases and using these guidelines, you may find that your negative position eases somewhat in the light of evidence. If you are doing your job and the business are happy, then maybe you are overemphasising the risks and need to lighten up and lose some of that negativity.

If, on the other hand, you are confident in your assessments, you are providing excellent information into your company's risk decisions, and you are still finding yourself in a very 'negative' position relative to the rest of the business, I suggest you think long and hard about whether you are in the right place. As we've seen, your business is unlikely to change.

References

Dan Gardner - Risk: The Science and Politics of Fear
Lennart Sjöberg - Risk Perception
Nicholas J. Ward (HUSAT Research Institute, Loughborough University of Technology) - "Driver Response to Improved Lateral Visibility: Evidence of Usability of 'Risk Homeostasis Theory'"
Emil Venere, Purdue University News - "Study: Airbags, antilock brakes not likely to reduce accidents, injuries"
Jill G. Klein - "Five pitfalls in decisions about diagnosis and prescribing"
Wikipedia - "Availability Heuristic"
Laurette Dubé-Rioux and J. Edward Russo (Cornell University, 1988) - "An Availability Bias in Professional Judgement"

Photo: http://www.hotelsanmarcofiuggi.it/en/free_climbing.php
