Thursday, 9 August 2012


Whilst researching a recent blog article on science versus manufacturing and testing, I came across an interesting article about scientific standards called the RESPECT code of practice, and I made a mental note to come back to this since I thought it could have some relevance to testing. The article can be found here, and a PDF version of the code can be located here.

The purpose of this article is to look at each of the statements made about what socio-economic researchers should endeavour to do, and to offer my thoughts on how each may apply to testing.

The first paragraph is the one that drew me to the article in the first instance.
Researchers have a responsibility to take account of all relevant evidence and present it without omission, misrepresentation or deception.
It is so interesting how closely this relates to the responsibility of the tester when carrying out testing.  We have a duty to ensure that, ethically and morally, we provide a service that meets these responsibilities.

The bit that stood out within the main body of the text was the following statement:
does not predetermine an outcome
I still find that within the field of testing there are people writing scripted tests in which they try to predict the outcomes before they have any actual experience of using the application.  This is not testing; testing is exploring the unknown, asking questions and seeing if there is a problem.

Now if we look at the last line of the paragraph:
Data and information must not knowingly be fabricated, or manipulated in a way that might lead to distortion
Hmmm? Anyone want to start a discussion on testing metrics?  Cem Kaner talks about the validity of metrics here.

Then the article gets into the reporting of findings.
Integrity requires researchers to strive to ensure that research findings …. truthfully, accurately and comprehensively…have a duty to communicate their results in as clear a manner as possible.
I get tired of seeing, time and time again, shoddy or poorly documented testing findings and bug reports.  In my world, exploratory testing is not an excuse for poor reporting of what you did and what you found.

The most exciting part of the article was the final paragraph in which they realise that as human beings we are fallible.
…no researcher can approach a subject entirely without preconceptions 
It is therefore also the responsibility of researchers to balance the need for rigour and validity with a reflexive awareness of the impact of their own personal values on the research
This is something I talk about a lot on this blog: the need to understand that we have our own goals and views which could impact and influence our testing.  We owe it to ourselves to try to be aware of these sometimes irrational and emotional biases.

The following is my attempt to go through each of the statements made in the article and provide my own personal view (with bias), or some external links where others within the testing community have already discussed the topic.

a) ensure factual accuracy and avoid misrepresentation, fabrication, suppression or misinterpretation of data

See the previous link to the article by Cem Kaner on metrics; also by Michael Bolton here, and here by Kaner and Bond.

b) take account of the work of colleagues, including research that challenges their own results, and acknowledge fully any debts to previous research as a source of knowledge, data, concepts and methodology

In other words, if you use other people's articles, ideas, etc., give them some credit.

c) critically question authorities and assumptions to make sure that the selection and formulation of research questions, and the conceptualisation or design of research undertakings, do not predetermine an outcome, and do not exclude unwanted findings from the outset

STOP accepting that because it has always been done this way it must be right.

d) ensure the use of appropriate methodologies and the availability of the appropriate skills and qualifications in the research team

This is an interesting one.  I do not take it as meaning "get certified"; other people may.  I take it to mean that we have a responsibility to ensure that everyone we work with has the relevant skills and, if they do not, to mentor and support them in obtaining those skills.  Encourage self-learning, look at all the available approaches you can use for testing, and select the one most suitable for you.

e) demonstrate an awareness of the limitations of the research, including the ways in which the characteristics or values of the researchers may have influenced the research process and outcomes, and report fully on any methodologies used and results obtained (for instance when reporting survey results, mentioning the date, the sample size, the number of non-responses and the probability of error)

In other words, be aware of both your own limits and project limits such as time, money or risk.  Testing is an infinite task, so when reporting make sure it is clear that your sample of ‘tests’ is very small in comparison with all the possible ‘tests’ you could run.
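As a toy illustration of just how small any test sample is (my own back-of-the-envelope sketch, not from the article), consider how long it would take to try every input pair for even a trivial function that accepts two 32-bit integers:

```python
# Exhaustively testing a function of two 32-bit integer inputs:
total_inputs = (2 ** 32) ** 2          # every possible pair of 32-bit values
tests_per_second = 1_000_000           # an optimistic execution rate
seconds_per_year = 60 * 60 * 24 * 365

years_needed = total_inputs / (tests_per_second * seconds_per_year)
print(f"{years_needed:,.0f} years to try every input pair")
# roughly 585,000 years -- and that is before considering state,
# timing, configuration or environment
```

Whatever set of tests you actually run is a vanishingly small sample of that space, which is why reporting should make the sampling explicit rather than implying completeness.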

f) declare any conflict of interest that may arise in the research funding or design, or in the scientific evaluation of proposals or peer review of colleagues’ work

Does this apply to testing?  If you are selling a tool or a certification training scheme, then this should be stated clearly on any material you publish regarding testing.

g) report their qualifications and competences accurately and truthfully to contractors and other interested parties, declare the limitations of their own knowledge and experience when invited to review, referee or evaluate the work of colleagues, and avoid taking on work they are not qualified to carry out

To me if you stop learning about testing and act like one of the testing dead (see article by Ben Kelly – here) then you are not qualified to carry out testing.

h) ensure methodology and findings are open for discussion and full peer review

Do not hide your testing effort inside a closed system to which only the privileged few have access.  Make sure all your testing effort is visible to everyone within your company (use wikis, for example).

i) ensure that research findings are reported by themselves, the contractor or the funding agency truthfully, accurately, comprehensively and without distortion. In order to avoid misinterpretation of findings and misunderstandings, researchers have a duty to seek the greatest possible clarity of language when imparting research results
In other words, make sure that what you report is what you actually did when testing, and that you report it clearly and without ambiguity.

j) ensure that research results are disseminated responsibly and in language that is appropriate and accessible to the target groups for whom the research results are relevant

Make sure that all relevant parties have access to your findings: communicate, talk, discuss.  As stated earlier, do not hide your findings; publish them for all to see, warts and all.

k) avoid professional behaviour likely to bring the socio-economic research community into disrepute

We all have a duty as testers to be professional in our behaviour, and this means that even when we disagree we still need to respect each other’s views and be able to participate in a debate without making others feel inferior.

l) ensure fair and open recruitment and promotion, equality of opportunity and appropriate working conditions for research assistants whom they manage, including interns/stagiaires and research students

Employers and recruitment agencies: STOP using multiple-choice certification schemes as a filter for working in testing.  Holding one of these certificates does not mean that you can test.

m) honour their contractual obligations to funders and employers

This is a given; no comment needed.

n) declare the source of funding in any communications about the research.

If what you are publishing serves your own self-interest, or a vested interest from which you can receive funds, then please be honest and up front about this.  As professionals, we can then make an informed decision about the content.

The context-driven testing school has a list of principles here, and it is interesting to compare the two.  There appears to be some overlap, but maybe we could improve the context-driven list by drawing on more of the RESPECT code of practice.  What do others think?  A good starting point, maybe?