Tuesday 27 September 2016

The Risk Questionnaire

What does it mean that something is "tested"? The phrase "tested" troubles me in a similar way to "done", in that it implies some binary switch has been flicked and a piece of work has achieved a threshold status that, up until that point, it had not.

Of course, there is no magic switch that renders the feature "tested" or the user story "done"; we simply reach a point where the activity performed against that item gives us sufficient confidence in it to move on to other things. My biggest problem with "tested" is that it can mean very different things to different people, often within the same organisation. When two people in the same organisation refer to something as tested, each can have a very different understanding of what is implied, particularly regarding the exposure to risk involved.

A common understanding

Working for a long time in the same organisation affords one the luxury of time to reach a common understanding, across the various business and technical roles, of what "tested" means. To the mature team, tested hopefully means that testing has been performed to a level the development team has established the business is happy with, in terms of the time and cost of carrying out the testing and the level of bug fixing likely to follow. This level will differ from company to company, with a critical driver being the level of acceptable business risk. I suggested in my post "The Tester, The Business and Risk Perception" that businesses, like individuals, will likely operate at a determined level of risk, and the overall activities within that business will gravitate towards that level irrespective of the actions of individual testers.

In my experience, as a team or department matures, an understanding evolves around what the expectation on testing is and how much time is to be spent on it. If a team spends too much time on testing then it is questioned by management; if the team are pushed to do too little testing then they push back, or quality suffers and customer feedback prompts a review. Eventually, in response to these inputs, an equilibrium is reached. But what of new companies or teams? How do we identify an appropriate testing balance between rigour and pace when a team is either newly created or adopting a new approach such as agile? Is it possible to fast-track this learning process and calibrate ourselves and our activities more quickly around the risk profile of the business?

Finding an equilibrium

I was recently asking myself just such a question when facing the challenge of establishing a test strategy for my new company. The company had grown quickly and had also recently adopted agile methods, and so inevitably had not yet found its equilibrium in terms of an acceptable level of risk and testing. This caused some confusion across the development teams, as there wasn't a consistent understanding of the level that was expected of them in their development and testing activities.

It would have been easy to come in and immediately push to increase the level of testing; however, I felt that it was important first to establish an understanding amongst the business leaders of the general attitude to acceptable risk. The approach that I took yielded some very interesting results:-

Learning from IFAs

Independent financial advisers (IFAs) need to understand risk. When an individual in the UK visits a financial adviser, they may well be asked to complete something like this questionnaire:- https://www.dynamicplanner.com/home/investor/what-is-risk-profiling/assess-attitude-to-risk/

The use of these questionnaires is common practice amongst financial advisers and institutions to assess the risk appetite of their clients and recommend a pension/investment strategy accordingly. The principle is simple: the customer is presented with a series of statements associated with their attitude to finances, with the answers provided through a set of standard options reflecting different risk levels.

Example questions might look like this:

Assume you had an initial investment portfolio worth £100,000. If, due to market conditions, your portfolio fell, would you:

  • a) Sell all of the investments. You do not intend to take risks.
  • b) Sell to cut your losses and reinvest into more secure investment sectors.
  • c) Hold the investment and sell nothing, expecting performance to improve.
  • d) Invest more funds to lower your average investment price.

If you could increase your chances of a return by taking a higher risk, would you:

  • a) Take more risk with all of my money
  • b) Take more risk with half of my money
  • c) Take more risk with a quarter of my money
  • d) Not take the risk
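
To make the principle concrete, here is a minimal sketch of how such a questionnaire might be scored; the option-to-score mapping and the rescaling onto a 1-10 scale are my own hypothetical choices, not those of any real adviser:

```python
# A minimal sketch of scoring a risk-profile questionnaire.
# The scores are hypothetical: option a) is the most cautious answer to
# the first question but the boldest answer to the second, so each
# question carries its own option-to-score mapping.
QUESTION_SCORES = {
    "portfolio_fell": {"a": 1, "b": 2, "c": 3, "d": 4},  # a = sell everything
    "higher_risk":    {"a": 4, "b": 3, "c": 2, "d": 1},  # a = risk all my money
}

def risk_profile(answers):
    """Average the per-question scores and rescale 1-4 onto a 1-10 scale."""
    scores = [QUESTION_SCORES[q][option] for q, option in answers.items()]
    raw = sum(scores) / len(scores)     # 1 = very cautious .. 4 = very bold
    return round(1 + (raw - 1) * 3, 1)  # map [1, 4] onto [1, 10]

print(risk_profile({"portfolio_fell": "c", "higher_risk": "b"}))  # -> 7.0
```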

The resulting range of answers provides the adviser with an overall impression of the individual's risk level, and they can build an investment plan accordingly. I recently went to an IFA to consolidate my pensions and completed just such a questionnaire, coming out at a rather dull 6/10 (apparently the vast majority of people sit between 4 and 7).

I felt that this idea of a questionnaire was a simple, useful technique that could validly be applied to assessing the risk appetite of individuals across a development company. I wanted to establish a starting position that would allow me to progress the conversation around an acceptable level of business risk, and to establish whether or not there was consistency of opinion across the company in our approach to risk; this looked like a good option.

The questionnaire

I created a risk questionnaire that we could ask members of the leadership team to complete. The audience was selected to be a range across development and client services, plus the CEO. For the questionnaire I focussed on 4 primary areas:-

  1. Business Risk
  2. Development Risk
  3. Perceived Status
  4. Testing Commitment (Time Spent on Testing Activities)

1. Business Risk

The business risk statements were focussed on the interaction between the software process and the customers. These were open to all to answer, but primarily targeted at the non-development leadership roles. The aim here was to establish an understanding of the priorities in terms of customer deliverables and the conflicting priorities of time, cost and rigour.

Rate the following statements on a scale of 1 - 5, from 1 = Strongly agree to 5 = Strongly disagree

  1. On time delivery is more important than taking longer to deliver a higher quality product

  2. I am happy to accept the need for later effort in maintaining a product if we can deliver that product at a lower up-front cost

  3. Our customers would knowingly accept a reduced level of rigour in development compared to other products in order to keep the cost of the software down

  4. Putting the software in front of customers and responding to the issues they encounter is a cost effective way to prioritise fixing software problems

  5. The cost of fixing issues in production software is now reduced to the point that this is an economically viable approach

  6. Our product context is one in which we can adopt a relatively low level of rigour compared to other business facing software development organisations

  7. I would be reluctant to see an entire sprint given over to testing and bug fixing unless this was driven by issues encountered by the customer

In reviewing the results we identified that some of these questions were open to interpretation. For example, it was not clear whether question 4 referred to giving customers early visibility prior to release, through demos or betas, or to pushing early to production and letting them find the issues in live use (I had intended the latter). Having similar questions worded in slightly different ways, such as question 5, helped to identify whether there was any misunderstanding there; however, if doing a similar exercise again I would look more carefully for possible ambiguity.

2. Development Risk

Development risk questions focussed on the risks inherent in our development activity. The idea was to get an understanding of the expectation around the development process and the feeling towards developer only testing. Again these were open to all to answer, but did not shy away from slightly more involved development terminology and concepts.

Rate the following statements on a scale of 1 - 5, from 1 = Strongly agree to 5 = Strongly disagree

  1. The effective application of developer/unit testing can eliminate the need for further devoted testing activity

  2. Appropriate software design can eliminate the need for devoted performance and stability testing

  3. Adding further development skills in our agile teams provides more value in our context than devoted testers

  4. The testing of our products does not require specialist testing knowledge and could be performed by individuals with limited training in software testing

  5. I would be reluctant to schedule specific testing tasks on a team’s backlog without any associated development

In creating the questions I did try to avoid leading questions around any known problems or perceived shortfalls in the current status. At the same time, the questions clearly needed to suit the nature of the business - questions suited to a big data database on a long release cycle would not have been appropriate here.

3. Perceived Status

This was a really interesting one. I pitched two very straightforward questions around company status.

  1. With 1 being lowest and 5 highest, rate how you think the company currently stands in its typical level of rigour in software quality and testing.

  2. With 1 being lowest and 5 highest, rate how you think the company should stand in its typical level of rigour in software quality and testing.

The first of these doesn't reveal anything about the level of acceptable risk in itself; what it does do is put into context the answers that do. Knowing how people feel about the current status, and how this compares to where they want to be, gives a strong indication of whether changes to increase development/testing rigour will be accepted and supported.
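
As a small illustration of how the two answers combine (the names and scores below are invented), the gap between the "should" and "currently" ratings is the signal worth looking at:

```python
# Hypothetical 1-5 ratings for the two perceived-status questions.
current = {"Alice": 2, "Bob": 3, "Carol": 2, "Dan": 3}
desired = {"Alice": 4, "Bob": 4, "Carol": 5, "Dan": 4}

# A positive gap means the respondent wants more rigour than they
# perceive today, suggesting a push to increase rigour will be supported.
for name in current:
    gap = desired[name] - current[name]
    print(f"{name}: current {current[name]}, desired {desired[name]}, gap {gap:+d}")

average = lambda d: sum(d.values()) / len(d)
print(f"Average gap: {average(desired) - average(current):+.1f}")  # -> +1.8
```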

4. Testing Commitment

This section didn't work quite so well. I was aiming to get an understanding of how much time people felt should be spent on testing activities as part of a typical development.

Rate the following in terms of time: 1 = less than 1 hour; 2 = 2 - 4 hours; 3 = 0.5 - 1 days; 4 = 1 - 2 days; 5 = more than 2 days

  1. How much time out of a typical user story development taking 1 person-week in total should be given over to the creation of unit tests?

  2. How much time out of a typical user story development taking 1 person-week in total should be given over to automated acceptance testing?

  3. How much time out of a typical user story development taking 1 person-week in total should be given over to human exploratory testing?

One respondent didn't answer these as he felt that, even in the same organisation, different work items were too different to provide a 'typical' answer. My feeling was that there was a fairly typical shape to the developments undertaken that we could use to calibrate teams around how much testing to do; however, I could see his point and understood the reluctance to answer here.

Another issue was that I provided timescales for user stories anchored in my experience of developing complex data systems. In this company's context "typical" user stories would be much shorter than 1 week in duration, so there was a contradiction built into the questions. Nevertheless the answers were informative and provided useful information to help in constructing a strategy.
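
One way to sanity-check the answers is to convert each band into rough hours and compare the total against the one-person-week budget; the band midpoints and the 37.5-hour week below are my own assumptions:

```python
# Rough midpoint hours for each answer band (band 5 is open-ended, so the
# 20-hour figure is an arbitrary stand-in), assuming a 7.5-hour working day.
BAND_HOURS = {1: 0.5, 2: 3.0, 3: 5.5, 4: 11.25, 5: 20.0}
WEEK_HOURS = 37.5  # one person-week

# Hypothetical answers for the three testing activities.
answers = {"unit tests": 3, "automated acceptance": 4, "exploratory": 3}

testing = sum(BAND_HOURS[band] for band in answers.values())
print(f"Implied testing time: {testing} of {WEEK_HOURS} hours "
      f"({testing / WEEK_HOURS:.0%} of the story)")  # -> 22.25 of 37.5 (59%)
```

Three band-5 answers would imply more than six days of testing inside a five-day story, which makes the contradiction described above easy to spot.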

Presenting the Figures

All testers are wary of metrics. It is a natural suspicion, born of seeing too many occasions of glowing bug statistics masking serious quality problems. Presenting the figures in a visually informative and digestible way was key to getting the benefits of the analysis here. I used Tableau Public to create some useful visualisations, the most effective being a simple min/max/average figure for each question. This format allowed me to highlight not only the average response but also the range of responses on each question.
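
The aggregation behind that view is trivial to reproduce. Here is a sketch with invented responses (the question labels and scores are illustrative only):

```python
import statistics

# Hypothetical 1-5 responses, one list per question, one entry per respondent.
responses = {
    "Q1 on-time delivery over quality":    [2, 2, 3, 2, 3],
    "Q4 customers find the issues":        [1, 4, 2, 5, 3],
    "Q7 reluctant to give sprint to test": [4, 4, 5, 4, 4],
}

for question, scores in responses.items():
    spread = max(scores) - min(scores)  # a wide spread flags disagreement
    print(f"{question}: min {min(scores)}, max {max(scores)}, "
          f"mean {statistics.mean(scores):.1f}, range {spread}")
```

It is the range figure that the business-risk discussion below leans on: a wide spread on a question is a disagreement worth investigating.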

(I've altered people's names and the scores of the responses here for obvious reasons; however, I've tried to keep the outputs representative of the range of results and patterns that I encountered.)

Business Risk:

With the business risk it was the ranges that were most interesting. Some questions yielded a wide range of opinion across respondents, whereas others were much more focussed. Clearly, in some areas specific individuals were prepared to consider a higher-risk approach than others, something that hadn't necessarily been highlighted previously and was possibly the cause of some uncertainty and pressure within the business. What was apparent in the real results was a general desire to reduce the risk levels in development and an acceptance of the need to increase rigour.

Development Risk

Most interesting on the development risk front was that, as I've shown here, there was 100% consensus on the need for specialist testing skills; however, the organisation's strategy to that point had been not to have specialist testers. Whilst having testing skills doesn't necessarily require having testers, the phrase "specialist testing skills" does imply a level of testing focus beyond a team consisting solely of developers.

Company Perception

The "Company Perception" demonstrated most clearly the desire to increase the level of rigour in development, with the desired level of testing clearly above what was perceived to be the current status in a similar way to the results shown here.

Starting from the Right Place

As I wrote in my post "The Living Test Strategy", a test strategy in iterative software development is not based around documents, but embodied in the individuals that make up the development teams, and the responsibilities that those individuals are given. The role of defining test strategy is then not in writing documents, but in communicating to those teams and individuals what their responsibilities and expectations are. Some fundamental decisions need to be made in order to establish a starting framework for these responsibilities. Questions such as:-

  • Are we going to have specialist testers?
  • If so will this be in every team?
  • What level of acceptance test automation is appropriate for the risk appetite of the business?

need to be answered in order to establish the initial testing responsibilities and hiring needs of the team.

Using a risk questionnaire to profile the business leadership has given me invaluable insight into the risk appetite of a company I am stepping into. In addition to giving an overall understanding of where the company sits in its acceptance of risk, the approach has also highlighted testing issues on which there is a lack of consensus, which may warrant further investigation as to why.

As an experiment I would definitely regard it as a success. In my context, as someone stepping into a new agile organisation and looking at test strategy, it is particularly useful. I can see other uses as well, though. Whether your goal is to understand your new company, or to simply review where your current company stands, or even to expose conflicting opinions on testing and risk across the business, a risk questionnaire may just be the tool to help you achieve it.

References

A good guide to financial risk profiling and the requirements of it can be found here: https://www.ftadviser.com/2015/06/18/training/adviser-guides/guide-to-risk-profiling-k4JJTswPCCWAPyEsMwEZsN/article.html

Another good example of a risk profiling questionnaire that you can take online: https://www.standardlife.co.uk/c1/guides-and-calculators/assess-your-attitude-to-risk.page
