Sunday 18 September 2011

Risky Business

Within the testing profession we are all aware of risk, and in the majority of cases we adjust our testing accordingly. Is this the wrong approach to take? What models do you use to assign risk to elements within a project?

In my experience, in most situations the risks we apply are based upon things we know could go wrong or disrupt the testing we are going to carry out. Most risk assessment is done beforehand and up front. It is normally based upon the probability of what could occur to a system, given someone's experiences, viewpoint and biases at a given time. I am not sure this is the correct approach to take within testing.

Testing is not an exact science: there are some elements where we can predict the outcomes and risks, yet there are far more where it is unpredictable. The thoughts behind this blog post are to look at this unpredictability and how we can try to include it in our testing approach.

Nassim Nicholas Taleb (1), within his book The Black Swan (2), talks about the highly improbable and its impact on the stock market. He states that the majority of investments are based upon risk and use models in which known risks are taken into account. What these models do not include are the improbable risks: things such as natural disasters (3), or individuals/countries (4) doing something that cannot be predicted.

In conclusion, Taleb says that most models are based upon top-down predictions using experiences of what has already happened, which is a high-risk strategy, rather than planning against the unpredictable: the things that cannot be planned for.

So how can this apply to testing?

How many times within testing have we seen a last-minute showstopper, just before go live? Or a showstopper discovered in the live system when what appears to be a totally random set of circumstances happens (a multiple failure of various unconnected components, such as the recent power failure within the USA (5))? Could this have been predicted as a risk? Would people have built this into their models? IMO, I doubt it.

Do we need to change the way we use risk within our testing? Taleb talks about using stochastic tinkering (6), which to me is fascinating since it appears to closely match the exploratory testing approach. As an example, look at the following two statements:

Thus stochastic tinkering requires experimenting in small ways, noticing the new or unexpected, and using that to continue to experiment.

The general principle is: Do as little as possible unless the system shows you have to do more, then do only as much as you need to keep the process going.

If we change the wording of these statements so that they apply to testing:

Thus stochastic tinkering requires TESTING in small ways, noticing the new or unexpected, and using that to continue to TEST.

The general principle is: Do as little as possible unless the system shows you have to do more TESTING, then do only as much as you need to keep the TESTING going.
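As a rough sketch of what that loop could look like in code, here is a hedged, illustrative example. The `system_under_test` function and its narrow failure range are entirely invented for illustration; the point is the shape of the loop: start with small, cheap probes, and only when the system shows something unexpected do we generate further tests around the surprise.

```python
import random

def system_under_test(x):
    # Hypothetical stand-in for the real system: it misbehaves on a
    # narrow, hard-to-predict input range -- a small "black swan"
    # that an upfront risk model would be unlikely to include.
    return -1 if 97 <= x <= 103 else x * 2

def stochastic_tinkering(extra_random_probes=50, seed=2011):
    """Test in small ways, notice the unexpected, and use each
    surprise to drive further small tests around it."""
    rng = random.Random(seed)
    # Small, cheap initial probes: a coarse sweep plus a few random pokes.
    candidates = list(range(0, 1000, 5))
    candidates += [rng.randint(0, 999) for _ in range(extra_random_probes)]
    seen, surprises = set(), set()
    while candidates:
        x = candidates.pop()
        if x in seen or not 0 <= x <= 999:
            continue
        seen.add(x)
        if system_under_test(x) < 0:   # the new or unexpected
            surprises.add(x)
            # Do only as much more as the system shows we need:
            # probe the neighbourhood of the surprise.
            candidates.extend([x - 1, x + 1])
    return sorted(surprises)
```

A fixed, pre-planned script that only checks the "known risk" inputs would test each value once and stop; the tinkering loop instead lets an unexpected result expand the testing until the surprising region is mapped out.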

Does the exploratory testing approach (by design or accident) do this already? To me it appears that by using exploratory testing, instead of detailed, well-planned, risk-assessed test scripts, we are more likely to discover the 'black swans'.

Food for thought…

References: