Monday, 21 February 2011

Measuring Testing

I saw a couple of tweets by @Lynn_Mckee recently on the metrics that are used in testing.

There are many great papers on #metrics. Doug Hoffman's "Darker Side of Metrics" provides insight on behavior. #testing

Ack! So many more that are painful... Scary to read recent papers citing same bad premises as papers from 10 - 15 yrs ago. #testing #metrics

And it made me think about how we measure testing.

This article is not going to say 'this is how you should measure testing' or offer any 'best practice' ways of measuring.

My concern with any of the ways in which we measure is that it is done without context or connection to the question you wish to have answered with the numbers. It becomes a set of numbers devoid of any information about their 'real' meaning. There are many and varied debates within the software testing field about what should and should not be measured. My take on all of this is:

Can I provide useful and meaningful information with the metrics I track?

I still measure the number of test cases that pass and fail and the number of defects found and fixed.

Is this so wrong?

If I solely presented these numbers, without any supporting evidence and a story about the state of testing, then yes, it is wrong and can be very dangerous.

I view the metrics gathered during testing as an indication that something might be correct or wrong, working or not working. I do not know which just from the metrics; that comes from talking to the team, debriefing and discussing issues.

I capture metrics on requirement coverage, focus area coverage, percentage of time spent testing, defect reporting and system setup. So I have a lot of numbers to work with, which on their own can be misleading, confusing and easily misinterpreted. If I investigate the figures in detail and look for patterns I notice missing requirements, conflicting requirements and what is stopping me from executing testing.
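To illustrate why these numbers need a story around them, here is a minimal sketch. The metric names and figures are entirely hypothetical, not from any real project; the point is that a healthy-looking headline number can sit on top of far less healthy ones.

```python
# Hypothetical snapshot of the kinds of raw counts described above.
metrics = {
    "test_cases_passed": 180,
    "test_cases_failed": 20,
    "requirements_total": 50,
    "requirements_covered": 30,
    "hours_testing": 24,
    "hours_blocked": 16,
}

def pass_rate(m):
    """Percentage of executed test cases that passed."""
    executed = m["test_cases_passed"] + m["test_cases_failed"]
    return 100.0 * m["test_cases_passed"] / executed

def requirement_coverage(m):
    """Percentage of requirements exercised by at least one test."""
    return 100.0 * m["requirements_covered"] / m["requirements_total"]

def time_spent_testing(m):
    """Percentage of available time actually spent testing (rest was blocked)."""
    total = m["hours_testing"] + m["hours_blocked"]
    return 100.0 * m["hours_testing"] / total

print(f"pass rate:    {pass_rate(metrics):.0f}%")             # 90% - looks healthy...
print(f"coverage:     {requirement_coverage(metrics):.0f}%")  # ...but only 60% of requirements touched
print(f"testing time: {time_spent_testing(metrics):.0f}%")    # ...and 40% of the time lost to blockers
```

A 90% pass rate presented alone tells a very different story from the same number alongside the coverage and blocked-time figures; the numbers only become information once they are put in context.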

So what is this brief article saying?

Within the software testing community I see that we get hung up on metrics and how we measure testing and I feel we need to take a step back.

What you measure is less important than how you use and present the measurements you have captured. It is the stories that go with the metrics that are important, not the numbers.

Wednesday, 2 February 2011

What you believe might not be true. (Part 2)

The first part of this article looked at conjunction bias and framing bias, and how they can influence our thinking towards incorrect assumptions, all under the heading of cognitive bias.

The next part of this article investigates other forms of bias and how they influence our decisions and thought processes. One of the first I will touch upon in this article is belief bias.

Belief bias has many similarities to confirmation bias, and in some ways the two are closely linked. If someone has very strong beliefs they can construct arguments in such a way that only evidence supporting their beliefs is used, giving a confirmation bias to their beliefs. There are many examples of this in the world, from the belief in the existence of aliens to the range of conspiracy theories that abound on the internet.

So what is belief bias?

People will tend to accept any and all conclusions that fit in with their systems of belief, without challenge or any deep consideration of what they are actually agreeing with.

Belief bias is the conflict a person incurs when their beliefs do not match the logic of what is presented to them.

The danger with belief bias is that it can quickly turn to belief projection:

Psychological projection is a form of defence mechanism in which someone attributes thoughts, feelings, and ideas which are perceived as undesirable to someone else.

The problem now is that the beliefs of someone on a team could be imposed on other team members through belief projection, even if what they believe is unfounded. Within software development we all have our own views and beliefs on what a piece of software is expected to do.

How does this have an impact on software development, and especially testing? Imagine a situation in which a tester has a very firm belief about how an interface should interact. They then test that interface and find it is not behaving as they believe it should. A bug report is raised and passed back to the development team, where it is found that the bug was raised in error and that the interface interacts as designed and described in the requirements. This is a simple case in which, regardless of what the requirements, design specifications and others are saying, the tester's strong belief bias says that everyone else is wrong and what they believe is correct.

In the world in which we as testers operate I doubt that the above would happen, since developers and testers now communicate and there is no more 'throw it over the wall' way of releasing. However, if you still work in a team in which there is a lack of communication, belief bias can have a large negative effect on testing.

Another issue is when you do work in a team and belief projection comes into the equation. If someone on your team subconsciously believes that the developers think the testers are a waste and not necessary (a negative personality trait), they could project this onto other members of the team and start to cause a barrier of resentment to build up between teams. It is impossible to prevent people having opinions and thoughts about other members of a team, but having an environment in which everyone is allowed to express their views and thoughts in an open discussion can help to remove this type of bias. Within one company in which I worked as a team lead I would hold an open session in which nothing was recorded or written down, but people could express views and thoughts on what was really happening within the project. Sometimes it would be heated and people would get emotional, but it managed to clear the air. One important part of this method was that a mediator was always in charge, to prevent it turning into a name-calling situation.

Another bias which could have an impact on testing is illusory correlation, in which people form a connection between two events even when all the evidence shows that there is no such connection or relationship. A good example is people with arthritis who believe that their condition worsens depending on the weather. Redelmeier and Tversky conducted an experiment in which they took measurements based upon the patients' view of their condition and at the same time recorded the corresponding meteorological data. Even though nearly all the patients believed that their condition got worse during bad weather, the actual results showed a near-zero correlation between the two.
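A near-zero correlation is easy to see once you actually compute the coefficient rather than rely on impressions. The sketch below uses made-up data, not the Redelmeier and Tversky figures: the pain scores and weather index are constructed so the two series are unrelated, mimicking what the study found.

```python
import math

# Hypothetical daily data: self-reported pain (1-5 scale) and a crude
# weather index (0 = fair, 1 = bad). Values are invented so that the
# two series vary independently of each other.
pain    = [2, 2, 3, 3, 2, 2, 3, 3]
weather = [0, 1, 0, 1, 0, 1, 0, 1]

def pearson(xs, ys):
    """Plain Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson(pain, weather)
print(f"correlation: {r:.2f}")  # near zero: pain does not track the weather
```

The patients' impression ("it hurts more when the weather is bad") and the computed coefficient disagree; the illusory correlation lives entirely in the impression.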

Wikipedia defines illusory correlation as:

Illusory correlation is the phenomenon of seeing the relationship one expects in a set of data even when no such relationship exists.

It is easy to see the effect this can have on software development. Imagine a developer who creates an illusory correlation between two variables that have no real correlation, thereby introducing bugs into the project. There has been a study on the reasons for software errors which found that illusory correlation does play a part. Details of this study can be found here.

Stereotypes are normally formed by way of illusory correlation. Someone who comes from a small town where everyone was kind assumes that everyone from a small town is kind; when they go out into the world and meet a kind person, they conclude that the person must be from a small town, even if the correlation is not true or does not make any sense.

How does this help or hinder with regards to software testing?

The problem occurs when testers work in isolation, form their own methods and create their own hypotheses of what should happen when they test the product under certain conditions. The danger is that the testing becomes one-sided, searching for evidence that matches their current hypothesis of how the product should react. The result of this bias within testing is that conditions which meet the illusory correlation of the tester are tested, while conditions which do not meet the expected assumptions are not. This could cause significant bugs to be missed, because flows that do not fit the expected correlation are never exercised.

It is very difficult to avoid falling into the illusory correlation trap, since the human mind tries to take the easiest path and groups objects together for easier recall; hence the existence of stereotypes. To help avoid this cognitive bias it is again important not to work in isolation and to involve others in the planning of your testing (kick-offs), the execution of your testing (pairing) and the results of your testing (debriefs).

There are many other biases that I have yet to touch upon, and some I might save for future articles, including one or two that could have a positive effect when it comes to testing.

In the meantime, while you wait for my next post, @Qualityfrog tweeted a link to a whole bunch of fallacies and their meanings here:

That should keep you occupied for a while.

I wonder how many of these fallacies affect your day to day testing?

On a positive note, since developers will also suffer from these fallacies when coding, there will always be a need for testers…