Friday, 9 December 2011

Apprenticeship schemes at Test Conferences

A quick blog on a thought I have had.

I read an article today about how we could try to fix the IT skills gap that exists within the UK (this may also apply around the world) by getting young adults into apprenticeships. I have a view that for some people academic study is not right for them and they would be better suited to a vocational training course instead of a university degree. I never went to university and as such I do not have a degree. Do I feel as if I have missed out? I do not think so, but I have not experienced university life so I cannot be sure whether I missed out on something I may have liked.

I think within our profession of testing we have an opportunity to mentor and help create the next generation of testers (not discounting coders and architects), allowing them to build up their skills and knowledge by learning from experience rather than studying subjects at university that are not relevant (how many universities offer testing as a degree?). As Nassim Nicholas Taleb has said, we as human beings are far better at learning from doing than from books. Over the past year I have been mentoring two people in our craft of testing; one is still ongoing, and the other has managed to secure a tester role within a company. Neither had been involved in testing beforehand. I feel we within our community should be trying to do this and encourage young adults, maybe by taking them under our tutelage; it does not require a large amount of personal investment, just a few hours per week. Or maybe within our companies we should all start looking at introducing apprenticeship schemes. Let’s try to tap into this vast resource of people who, in my opinion, feel they have been abandoned by the educational system.

On the other side I want to call out to those who run conferences (EuroSTAR, CAST, Let's Test, UNICOM) and say: let's advertise for young adults who may have an interest to come along as an apprentice for the length of the conference. They would not pay a fee but would be expected to produce a report on their thoughts and what actions they intend to take away for the future. I have not finalized these thoughts, but it would give these young adults a chance to engage with a craft about which I myself feel very passionate.

Maybe the organizations that run the conferences could look at running an apprenticeship competition or vetting process. I am sure there are many vocational colleges (both in the UK and around the world) who would be willing to get involved in this. It has the added effect that it will start to raise, in the minds of the next generation of influential people, the value of testing, and put testing out there as a forward-thinking craft that people want to get involved with.

What do others think?

I would especially love some feedback from conference organizers to see how feasible these ideas are.

Thursday, 8 December 2011

Recipe Knowledge

This is my response to a blog article written by Paul Gerrard.

I was going to post this as a comment but thought it would be better as a separate blog article.

I am not sure if I agree entirely with what Paul was saying, but that is the point of a good blog article. I will say I do entirely agree with his conclusion that we have to have our eyes open and our brains switched on. There are methods that can be used to prevent the ‘quitting’ process and the rambling-around-in-the-dark approach to exploratory testing, but I think that would be an entirely different article.

However, I would suggest people try searching for articles on avoiding (or being aware of) bias, cognitive research methods, and focusing and defocusing skills. Another one to look at is Air Traffic Control work patterns; they work in time-boxed shifts – is this similar to session-based testing? The point I want to make is that the issue Paul raises about domain knowledge and the usefulness that scripts may bring is an important one.

I am not in the camp that says we should abandon scripts, and a lot of the people I communicate with are not saying that either. I feel there are a lot of Chinese whispers with regard to the views of some people on the use of scripted tests. I cannot recall anyone saying to me that we must abandon scripts in favour of just doing exploratory testing (is that a bias, and am I deliberately missing or not noticing information?). We can also train ourselves not to quit by using a variety of cognitive processes, especially the use of checklists and heuristics. These ‘tools’ enable us to counter the quitting instinct by triggering new paths, observations and comparisons.

Testing is not just about finding things; it is about asking questions and forming theories based on the answers (evidence) given while experiencing the software. This may lead to more questions, further evaluation, and even re-evaluation of what you already thought, debunking and disproving your theories. Finding bugs is a side effect of this approach, a very useful side effect, but it is not the sole purpose of testing.

There is a term used within society (especially the social science community) which is ‘recipe’ knowledge; this is often devalued by academics since it is a step-by-step instruction for learning something. In the everyday world, recipes tell you what to use, what you will need (ingredients) and exactly what procedures to follow – this sounds rather like scripts in the testing world. These recipes can provide important foundations for acquiring or developing skills, or as we would say in the software development world, learning domain knowledge. People using a recipe, as we know, may not follow it exactly; they may taste the product and adjust it for their own personal taste, so they will move away from the script. However, we should not pretend that learning a recipe is the same as learning a skill.

If we look at baking, for example, it requires a ‘knack’ which can only come from experience (if you have tried baking bread you will understand this). Like qualitative analysis, baking also permits creativity and the development of your own styles.

The skilled tester at some point, like the experienced chef, may stop using the recipe book and start to experiment and explore different tastes and ways to discover more and hopefully improve their skills. At the same time the recipe (script) remains a useful tutorial for the newcomer to the art.

Some of the content used here is taken from the following book:

Qualitative Data Analysis: A User-friendly Guide for Social Scientists - Ian Dey

Wednesday, 7 December 2011

A ‘title’ is of value to someone who matters

Recently I attended the Eurostar Testing Conference in Manchester and came away with some mixed messages and thoughts about the content of the conference. Some of the presentations and tracks were really good whilst others appeared to repeat the same old information. I hope to write a few blog articles on some of the positive messages I got from the conference along with lots of ideas I have with regards to social science and how it can be used within testing, these may have to wait until after the holiday period.

The reason for writing this blog post is what appeared to be a negative message coming from some of the keynote presentations; this is my opinion and how I understood the messages in the context of my views on testing and testers. The one point I wish to raise (and maybe rant about) is one of the messages that James Whittaker made:

“at Google ‘Tester’ has disappeared from people’s job titles. People who were ‘testers’ are now ‘developers’ and are expected to code regularly”

Now my thoughts on this may be taking the point James was making out of context; however, I am not sure in what other context it could be made.

James, during the presentation, made the point that testers should be part of the team and not get bogged down in who has what role, and I wholeheartedly agree with that.

However from a social and status perspective people need to be able to identify with a title and there has been a lot of talk within the development community about removing titles especially the title of tester. Take the following scenario:

You go out for a social evening with a group of work colleagues and their partners: a project manager, a developer, a business analyst and a tester. As the evening proceeds, each person is asked by a non-team member what they do at work.

The developer could reply: I write code and create applications

The tester could reply that they test to ensure the system works

The project manager could reply that they make sure everyone knows what target they have to meet

The Business analyst could say they provide information on what the customers who will use the application need

Each person answering this question I would say would be proud of their job title and what they do.

So my take on making a statement in which we get rid of the title of tester and call everyone a developer is that it is a little insulting and makes me personally feel unappreciated and undervalued. I feel I have been working as a tester for a long period of time now, and whilst I can understand that within a team people can have a variety of roles and responsibilities, why should I have to give up something that I feel passionate about? I wonder what would be said if at a developers' conference everyone were told they are now going to be called a business analyst, since we all provide something that the customer wants.

Why does everyone have to be a developer within a project? My concern is: why has the word ‘tester’ become such a dirty word? It is as if we should be ashamed of what we are and what our title is.


Friday, 4 November 2011

Defining Testing

I am about to run a couple of internal workshops on the exploratory testing approach, which is based upon a lot of work done by Michael Bolton and James Bach. One of the concerns I have had recently is what people within the organisation think testing is in comparison to what they are actually doing. So I started to put together an article looking at these concerns and trying to see if there is a problem. This blog is based upon some of the points that I cover in the article.

The views and definitions expressed in this article are my own and as such they may not match what a dictionary may say or agree with your views/definitions.

When I start to look at what we see as testing activities they appear to fall into three distinct categories:
  • Validation
  • Verification
  • Testing

These terms may be familiar to some of the older readers of this blog. V V & T has been around for a long time and has its origins within the manufacturing industry. It has been the main process used to provide quality control and assurance on manufacturing production lines.

It appears that these ‘manufacturing’ processes have been applied to software testing.

This seems to have led to the appearance of process standards: initially the ISO 9000 Quality Assurance standard, which was modified to become the ISO 9001:2008 standard, which included software. These standards are very closely linked to manufacturing processes and, from a software testing perspective, to quality control methods.

Talking to and observing various companies, I have seen that a lot of people's perception of testing is as shown in the photo below.

Is it a problem to have this perception of software testing?

At the beginning of my career in software testing a lot of companies started to change from being mainly hardware manufacturers to both hardware and software manufacturers. There was a need among these companies for processes they could use to prove the ‘quality’ of their software products, and the general consensus was that what had worked in quality control for hardware could surely be applied to software testing.

The reasoning behind this was based upon some fairly flawed assumptions:

  • All software was the same
  • All software worked in the same way
  • All users would follow the designed work flows.
  • All users would behave in the same way

The main focus of these processes was to validate and verify what was already known about the product and its expected inputs and outputs. In my opinion, following quality control and assurance processes is not really about testing. Testers ‘normally’ do not control the quality (yes, there are approaches such as TDD which ‘may’ help). If there is crap in the system you are testing, then there will still be crap in the system afterwards. Testers provide a service: telling you there is crap in the system. Michael Bolton talks more about getting out of the QA business here

Validation and verification will, in the majority of cases, NOT:

  • Tell us anything new about the product
  • Make us ask questions of the product

So what do I mean when I talk about validation and verification?


To me validation is about proving what you already know about the product: confirming that what the requirements say is correct and that the system behaves in accordance with what you believe it should do. The normal response when validating will be:

  • true or false
  • yes or no
  • 0 or 1

I see validation as a checking exercise (see the article by Michael Bolton here on testing v checking) rather than a testing exercise. It still has some value within the testing approach, but it will not tell you anything new about the system being tested. It will prove that what you already know about the system is correct and working (or not working). This is like testing requirements, or validation of fields in a database/GUI – you know what the input is and you know what outputs you expect according to the specification/requirements, so why not automate this?
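As a minimal sketch of what automating such a check might look like – the `validate_postcode` rule and its data are entirely invented for illustration, not a real specification:

```python
import re

def validate_postcode(value: str) -> bool:
    """Hypothetical field-validation rule: a simplified UK-style postcode format."""
    return bool(re.fullmatch(r"[A-Z]{1,2}[0-9][0-9A-Z]? [0-9][A-Z]{2}", value))

# The check simply confirms what we already expect:
# known inputs, known expected outputs, a yes/no answer.
expected_results = {
    "SO23 9EH": True,   # valid according to the rule
    "12345": False,     # invalid: wrong shape entirely
    "so23 9eh": False,  # invalid: the rule expects upper case
}

for value, expected in expected_results.items():
    actual = validate_postcode(value)
    assert actual == expected, f"{value!r}: expected {expected}, got {actual}"
print("all validation checks passed")
```

Once written, this runs unattended on every build – exactly the true/false, yes/no, 0 or 1 responses listed above, with no tester's brain required.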

The majority of ‘testing’ I see happening is validation, and even though it has some value I would not count validation as testing, since it does not tell you anything new about the system you are testing.

It should be noted that interpreting the results from validation ‘testing’/checking requires human interaction to work out whether what happens is the correct expected response.


When I look at the term verification, I use it for when we are verifying any bugs that have been previously found. Someone has made a change to the product and I want to verify that the change has fixed the problem I had seen before. Some verification tests can be automated – for example, if you have run a test previously and found the problem, you may be able to automate the steps you followed so that you do not need to run the same test manually again.
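A sketch of that idea in practice – the `basket_total` bug and its fix are hypothetical, but the shape is the point: the reproduction steps for a fixed bug become an automated check you never need to run by hand again:

```python
def basket_total(prices):
    """Hypothetical function that once raised an error for an empty basket.
    The fix: an empty basket now totals to 0."""
    return sum(prices) if prices else 0

def test_empty_basket_bug_stays_fixed():
    # the exact steps that originally exposed the bug, automated
    assert basket_total([]) == 0

def test_normal_baskets_still_work():
    # guard against the fix breaking the ordinary case
    assert basket_total([1.50, 2.25]) == 3.75

test_empty_basket_bug_stays_fixed()
test_normal_baskets_still_work()
print("verification checks passed")
```

The automated version frees the tester to go and ask new questions of the product instead of re-treading old ground.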


I see testing as a thinking exercise in which you need a person to use their skills (and brain) to ask questions of the system being tested. From asking these questions they learn more about the system and its behaviour. They will not know the answer to the question, but by investigating and tinkering with the product they can form some reasonable answer to the question they posed. When testing we act like crime investigators – you suspect foul play, but you need to ask questions and gather evidence to back up your theories and provide answers to your questions. Testing is not based upon what the requirements or specification are saying, but rather upon what they are not saying.

Testing is about asking:

  • What
  • Why
  • How

Nassim Nicholas Taleb came up with the following interesting quote:

We are better at doing than learning. Our capacity for knowledge is vastly inferior to our capacity for doing things – our ability to tinker, play, discover by accident.

So after all of this is there really a problem?

Some of the problems I see within the software testing industry are:

  • We spend far too much time validating rather than testing
  • We repeat the same validations (manually) time and time again
  • We cover less of the system by only repeating the same validations
  • Testing becomes a checking exercise rather than a testing exercise
  • Testers are not engaged
  • Testers are not challenged
  • Testers do not need to think
  • People see testing as a boring thing to do
  • Testers, if used only to validate manually, are seen as robots

What can be done to improve this?

  • Look to automate the validation (checking stuff)
  • Improve coverage by changing the data sets used in validation
  • Start to use an exploratory testing approach – attend a Rapid Software Testing course
  • Look at using session-based testing
  • THINK: engage your mind and question the system.

We need to keep learning about testing and do more testing, rather than repeatedly validating systems; being a tester then becomes a much better and more challenging role.

Monday, 24 October 2011

If Testers were Paranormal Investigators


I thought considering it is getting close to the spooky time of year (All Hallows Eve) I would put together a tongue in cheek article about what would happen if exploratory testers were paranormal investigators.

It was a dark moonlit night as the certified and exploratory testers approached the dark imposing building that their project manager had asked them to look at. The project manager wanted them to report back on whether the building was suitable for him to move into and that there were no hidden surprises. They had heard stories that the building was full of bugs and other scary stuff.

The testers used the (pass) keys to enter the building and slowly walked into the main hallway. As they started to walk around, the room suddenly got cold as the temperature dropped.

The certified tester says it must be death bringing down the temperature.

The exploratory tester looks around the room and notices that there appears to be a draft coming from just outside the room. They go to explore the draft, since it has now interested them as something that could answer a question. Once outside the room they notice that one of the windows has come open due to a broken catch. They close the window and make a note to contact a handyman in the morning. The temperature of the room returns to normal.

The testers slowly move towards the kitchen, when all of a sudden an overwhelming disgusting smell overpowers their senses.

The certified tester is certain that this is the smell of death coming to get them.

The exploratory tester is not sure and starts to use their sense of smell to see if they can locate where the smell is coming from. They notice that the smell appears to get stronger in the direction of the fridge. They open the fridge door and note that the fridge does not appear to be working (even checking that it is plugged in). Inside the fridge there is a bottle of gone-off milk, which appears to be the source of the smell. They make another note, to contact a fridge repair person in the morning.

The testers now move to the next floor in the building when suddenly they appear to see something move on the stairwell.

The certified tester is sure that this is a sign of spirits from beyond the grave.

The exploratory tester takes a moment to think about possible reasons for the movement before realising that the window has come open again causing the light fitting to swing and cast different light patterns on the stairwell giving the impression of movement.

They now move towards the main bedroom on the first floor, and suddenly they hear unnatural sounds and what appears to be a creature from another world.

The certified tester is certain that these are souls from the other side warning them to leave, and now starts to panic.

The exploratory tester is not so sure, and even though they are starting to get a little scared, they open the door to the main bedroom and the noise gets louder and louder. With their heart pounding they enter the room and see a large irregular-shaped mass on the bed from where the noises are coming. Slowly they move forward, getting closer, closer still and even closer……

Suddenly the mass moves and the exploratory tester notices that it is their project manager, fast asleep and snoring – the snoring which has been causing the supernatural noises.


Loosely based upon the following article:

• All characters appearing in this work are fictitious. Any resemblance to real persons, living or dead is purely coincidental.
• I wish to stress that this blog in no way endorses a belief or non-belief in the occult or anything of a supernatural nature.

Thursday, 20 October 2011

Sat Navs and Maps

The following blog article is based upon a lightning talk I gave at the Software Testing Club meet-up in Winchester on the 19th Oct 2011.

I have recently been on holiday touring the Yorkshire dales and moors, covering over 1000 miles in one week. The car is fitted with a sat nav, which is great when we want to get from A to B, but I also have in the car a large-scale map of the UK. I started to think about how we use these two ‘tools’ and how this could be used within testing to show the difference between following a set of instructions (scripts) and exploring the countryside (ET).

An example is that both have the same goal (mission): I want to get from A to B. However, we use sat navs to show us the most direct, quickest, fastest way (some sat navs do now have an option for a scenic route).

So I set off into the Yorkshire moors using the large-scale map (my wife being the navigator). We knew where we wanted to end up, but the route we took was through the back of beyond. (In fact, one of the roads we ended up on did not even appear on the sat nav map, which said we must return to a digitized area – bug?) We explored the areas, and when we noticed things that appeared interesting we took a detour and explored them. It was a wonderful experience: we found places of interest that were outstanding in natural beauty, along with all the seasons in one day (sun, rain, hail). At the end of the journey we had discovered some great things but still ended up at the place we wanted to be. Yes, it took (slightly) more time, but we found out more.

My point is that if you stick to using the sat nav you end up at the same place, but you may miss so much that is interesting. Now, can we compare this to testing? Yes, a script ‘may’ be useful for getting you from A to B, but how much will you discover, how many surprises will you find? Yes, I could repeat the same journey again, since we have the map and know the route I took. Would I want to repeat the exact same route? I am sure that if I went to that area again I would be tempted to go a slightly different way, since there could be things around the corner that may interest me.

Rob Lambert pointed out the following to me:

“I find the sat nav is a safety and re assurance aid also in that i can explore but then turn the sat nav on, or refer to it, to then return to a known route.”

I would question this in the sense that it could lead to a false sense of security. What happens if the map gets corrupted, or the electronics fail? I would tend to think of the paper map as the safety and reassurance, rather than the sat nav, which may have a tendency to fail.

With regards to the meet-up in Winchester: I wish more people had come along; they missed a great evening of testing discussions, with Michael Bolton on top form. There are plans to have a regular bi-monthly meet-up in Winchester in the near future – watch out for an announcement via the Software Testing Club soon.

Sunday, 18 September 2011

Risky Business

Within the testing profession we are all aware of risk and in the majority of cases we adjust our testing based upon risk. Is this the wrong approach to take? What models do you use to assign risk to elements within the project?

In my experience, in most situations the risks we apply are based upon things we know could go wrong or disrupt the testing we are going to carry out. Most risk assessment is done beforehand and up front. It is normally based upon the probability of what could occur to a system, based upon someone's experiences, viewpoint and biases at a given time. I am not sure this is the correct approach to take within testing.

Testing is not an exact science: there are some elements where we can predict the outcomes and risks, yet there are far more where it is unpredictable. The thoughts behind this blog post are to look at this unpredictability and how we can try to include it in our testing approach.

Nassim Nicholas Taleb (1), in his book The Black Swan (2), talks about the highly improbable and its impact on the stock market. He states that the majority of investments are based upon risk and use models in which known risks are taken into account. What these models do not include are the improbable risks: natural disasters (3), or individuals/countries (4) doing something that cannot be predicted.

In conclusion, Taleb says that most models are based upon top-down predictions using experiences of what has already happened, which is a high-risk strategy, rather than planning against the unpredictable – the things that cannot be planned for.

So how can this apply to testing?

How many times within testing have we seen a last-minute showstopper just before go-live? Or a showstopper discovered in the live system when what appears to be a totally random set of circumstances happens (a multiple failure of various unconnected components – the recent power failure within the USA (5))? Could this have been predicted as a risk? Would people have built this into their models? IMO, I doubt it.

Do we need to change the way we use risk within our testing? Taleb talks about using stochastic tinkering (6), which to me is fascinating since it appears to match closely the exploratory testing approach. As an example, look at the following two statements:

Thus stochastic tinkering requires experimenting in small ways, noticing the new or unexpected, and using that to continue to experiment.

The general principle is: Do as little as possible unless the system shows you have to do more, then do only as much as you need to keep the process going.

If we change the wording of these statements so that they apply to testing:

Thus stochastic tinkering requires TESTING in small ways, noticing the new or unexpected, and using that to continue to TEST.

The general principle is: Do as little as possible unless the system shows you have to do more TESTING, then do only as much as you need to keep the TESTING going.

Does the exploratory testing approach (by design or accident) do this already? To me it appears as if, by using exploratory testing instead of detailed, well-planned, risk-assessed test scripts, we are more likely to discover the ‘black swans’.

Food for thought…


Friday, 5 August 2011

Professional Qualifications and Bodies

I saw an interesting tweet from James Bach (@jamesmarcusbach) the other day:

@testingclub What counts as certification? What's a "professional qualification?" Why is schooling confused with education?

Which was in reply to seeing the following post from the Software Testing Club (@testingclub) about a survey of testers:

@jamesmarcusbach you may be interested in the Education for Testers survey results

Whilst the data within the survey may be of interest to some people, what really got me thinking were the questions James was asking, and within this blog article I am going to attempt to answer some of them from my perspective. It does not necessarily mean that my view is correct, and I encourage people to debate and correct points that I make; however, it is important to remember the context of this: it is my own personal view of the testing world.

One of the key points that James makes to the testing community is that testing is context-driven. I feel the answers to these questions are also dependent on context, and as such the answers themselves are context-driven.

The first question I intend to try and answer is: “What’s a professional qualification?”

The context I am using to answer this is within the UK and Europe where they appear to be very well defined.

Professional qualifications in the UK are generally awarded by professional bodies in line with their charters. These qualifications are subject to the European directives on professional qualifications. Most, but not all, professional qualifications are 'Chartered' qualifications, and follow on from having done a degree (or equivalent qualification).

However, the important point to note here is the word ‘generally’; to me this does not mean all professional qualifications are awarded by professional bodies.

So ‘generally’ professional qualifications are awarded by professional bodies – but what are professional bodies? How do you become a professional body? It appears that it is simple to set up a professional body; all you need to do is:

  • Get a group of people interested in the same subject
  • Produce a charter which describes your aims and ethos
  • Have regular meetings

One interesting point about professional qualifications and bodies that I found was:

Membership of a professional body does not necessarily mean that a person possesses qualifications in the subject area, or that they are legally able to practice their profession.

Some professional bodies can be cartels, in which anyone who is not a member cannot practice legally in that domain. Examples of this are within the field of medicine, where doctors need to register with the GMC and nurses with the NMC to be able to practice.

So a professional qualification in this context indicates that you are proficient in your field, and some professional bodies only allow you to practice if you continue to keep up to date with current practices and methods and publish new findings for your peers to review. Without doing this you lose your right to practice. IMO this is the direction testers should be going in. We need to continue to learn, read articles, publish articles and enter into debates about the course we take.

ISEB and the other certification schemes are OK as a starting point, but they are not the end of learning. We need to adapt these schemes so that they are not static and do not become out-dated, as they currently are. The problem is that for the people who run these schemes, doing this would not be cost-effective, and as such it is not in their interest to change. This goes against the reasoning for having these ‘professional qualifications’: the bodies that say they represent us on a professional level are not adhering to two KEY parts of being a professional body:
  • Protecting its fellow professionals
  • Looking after public interest by maintaining and enforcing standards of training and ethics

Without this happening, I have little confidence in the current testing ‘professional qualifications’.

Moving on to James's question about confusing schooling and education.

I find this interesting since I have seen both sides of the education system (formal schools), having been to school up to the age of 18 and having worked within an education system. I think I see what James is getting at. Formal education both worked and did not work for me: due to my circumstances, up to a certain age I was away from school more than I was there – by my own choice, I just did not go. Once I did settle into going to school regularly, I found it offered me some fantastic grounding in key subjects (maths, science, history, English) – I really struggled with English and still do, according to my wife! It also gave me social skills: being able to share, communicate, listen to others, and let others have a viewpoint which may not agree with mine. I feel lucky in the schools I attended; they may not have been the highest-achieving schools, but they taught great life skills that I am always thankful for. (It is a pity schools now think more about league tables than the students.) So how does this differ from schooling? The confusion, I think, comes from the fact that most definitions of schooling see it as part of being at school and formal education.

I find this definition worrying since I see schooling as something slightly different. It can mean the education you get at school. However, what about ‘home’ schooling or self-schooling, in which you embark on a different style of learning which is not institutional?

The other context here could be that James is referring to the differing schools of testing. This does not sit right with me; I have a problem with having different ‘schools’ of testing. I see testing as one big thing, not lots of fragmented schools. Since each school has strong views and ideas that the others do not agree with, we end up in heated debates in which no side wishes to back down. I am not sure how that helps the testing profession. Debates are fine, but constant fighting is not good, and at some point a middle ground should be found, even if it does not sit easily with all sides. Sometimes it is better to act for the good of all rather than for the good of the individual.

My thought on these different schools and professional bodies is that maybe, just maybe, all sides should come together and look at forming a learned society.

What is a learned society?

A learned society is an organization that exists to promote an academic discipline or a group of disciplines.

I think this would be a wonderful way forward, and maybe the Software Testing Club could become (or form) the society? I am not sure, nor have I investigated what would be needed, but it looks as if they do some of it already: publication of articles and so on. I would be most interested in what the people at the Software Testing Club think of this, and in how the general community within all of the different schools feels.

Finally to answer the last question by James:

What counts as certification?

There are many definitions of certification, the main one being that an organization recognises an individual, company, etc. that meets certain criteria. These criteria could be passing exams, years of experience, publication of articles, and so on.

However, this really does not answer the question that James asked. The survey shows how many people hold a certification (I would have expected this to be much higher), but, as correctly noted by James, it does not say which certification. I have many certificates: PAT testing, rugby coach and first aid. None of these are really relevant to my day-to-day job of testing, so I still cannot make sense of the results as they are displayed. And even if the survey said testing certification, which would it mean? ISEB? AST? Etc., etc. This one question really stumped me, since I could not find an answer that sat easily with me. If I regularly write and publish testing articles (blog, magazine), should I be certified? If I got my work colleagues to write a report on how competent I am at testing, would that make me certified? I really do not have an answer for this one, so, as James did on Twitter, I open the question up to the community.

So the challenge is set:

In your opinion:

What counts as certification?

Is Product Knowledge essential for effective testing?

I might not be blogging or online as much as I have been; this is due to family life (those close to me know what I mean), but here is a new article. I do have lots of ideas and thoughts; it is just difficult to find the time to put them together. I will be at EuroSTAR in Manchester this year and I hope lots of you will be attending.

It has been a while since I posted a new blog article, so here is a new one.

I recently read an excellent article by Paul Gerrard about all testing being exploratory, and thought it was so good that I posted it on our company intranet. I got an unexpected reply which made me think about testing and the skills required to be an effective tester. That reply is the reasoning behind this blog post.

The reply I got was as follows:

The interesting challenge to the basic idea is that the tester needs to have knowledge, and very good knowledge, of how the system is supposed to work. Only with that knowledge in place is it then really possible to 'intuitively' carry out testing that will be good exploratory testing. Without that deep knowledge it turns into 'random' testing, which, while it has its place in a test approach, I’m not sure it could form the bedrock of a test plan.

The challenge then becomes how to get that knowledge to the tester/test team. I can see how for long term projects/products, the tester becomes truly expert in his Component Under Test, but for new things, or new people to that test team, the ramp up time and 'completeness' of such an approach is questionable and a bit difficult to scale.

For sure exploratory has a part to play - but hard to see how its 'all'.

This made me think about product knowledge and whether it really is essential for effective testing. So I posted the question on Twitter:

Interesting discussion about needing 2 have product knowledge 2 do good exploratory #testing and without this becomes random testing. (1/2)

I have my views on this but would love 2 have #testing community opinions, views, counter views on this. Might do blog post on this (2/2)

I got some replies very quickly (as I would expect from such a dynamic community). Sorry if the time order appears a little wrong; I wish Twitter would let me do this more easily than a cut-and-paste job.


@steveo1967 before gaining experience and understanding I'd have agreed that my testing was more random than exploratory.

@steveo1967 I've recently been exposed to Microsoft AX and found that with experience my exploratory testing is becoming more fruitful.


@steveo1967 @QualityFrog Well, I would argue that if I know how the software behaves, I don't need to test at all, do I? :)

@steveo1967 @QualityFrog "what I believe it should do" is not knowing to me. :)


@steveo1967 @mgaertne product behaviour can be observed in testing. It is implementation, not requirement.


@steveo1967 if you know how to do Exploratory Testing well then no prior product knowledge is required. Someone on the team needs it tho!

@steveo1967 IMO that is a very good post on ET. I see nothing there that says prior product knowledge required. (cc @paul_gerrard)

These were very interesting, since they offered opposing views on the statement made.

@Radionotme stated that he found product knowledge useful to prevent random testing, while @mgaertne and @QualityFrog stated that it was not necessary and that you could learn about the system whilst exploring.

I countered this with:


@QualityFrog @mgaertne the counter claim made is that knowing how the product behaves is essential to test quicker and save time

By this time @Michaelbolton had joined the debate


@steveo1967 @mgaertne @qualityfrog Whether knowledge or belief, where do you obtain it? From /testing/ what you know.

steveo1967 It takes exploration to develop a decent strategy, tactics, and checks. To me,

@steveo1967 One can develop product knowledge, learning through exploratory #testing. Running scripts helps to suppress that learning.

Trying to keep up with all the threads I replied:


@mgaertne @QualityFrog not really you would still test that what you believe it should do it actually does do.

@michaelbolton they are not saying ET is not useful they are saying it is more effective when you have product knowledge. Do you agree?

@Radionotme interesting experience with some product knowledge making your #testing more efficient would like 2 know more.

@Radionotme was it random because u had no structure? SBTM for example helps to structure ET or is it ur ET skills improved?

@mgaertne @QualityFrog v true but the view expressed states have knowledge of the product and how u expect it to behave is essential

More people started to enter the debate:


@steveo1967 last thought: is Jazz music random? To the untrained ear, perhaps it is. To the experienced, you see it takes great skill.


@can_test @steveo1967 What is what we perceive as random, really isn't random at all, just we lack sufficeent perception to see its order?


@can_test if someone on the team needs product knowledge why can this not be the tester?


@steveo1967 I didn't say it _couldn't_ be the tester. I said it doesn't _have_ to be the tester. Financial Svcs jobs are bad about this.


@can_test very well put about random being undisciplined that is my point ET without discipline is random


@steveo1967 that's what it sounds like to me anyway. "Effectiveness" is meaningful in a certain context. Don't blame the tools.

@steveo1967 I think it's about Trust, or lack thereof. That is, I will trust your ET if you are a product expert, otherwise no, its random

@steveo1967 experience in anything increases your testing efficiency. ET usually looks random to those who don't understand it.

@steveo1967 IMO, good testing is about changing your perspective on the system. That's harder when you are the SME too.

@steveo1967 what does "random testing" mean to them? I may have many hypotheses I want to test that are off the main path. Is that random?

@steveo1967 the statement I disagree with is that Exploratory Testing requires prior industry/product knowledge. That's not true.


@michaelbolton @steveo1967 @mgaertne @qualityfrog You either belief there’s milk in the fridge or you don’t be


@steveo1967 In addition, exploration implies that someone intends to discover something. Knowledge can never be known to be complete.

@steveo1967 Biases can't be eradicated, but they can be recognized, controlled, and managed in a number of ways. It starts with awareness.


@michaelbolton @steveo1967 I am currently the 'fresh eyes'. Old and new eyes both find bugs, though they can be different bugs. 1/?


@WadeWachs @michaelbolton very good point. My concern with knowing product is having bias expectations of behavior and not seeing problems


@steveo1967 More effective /for what/? More product knowledge /vs. more what-else/? Heuristic: fresh eyes find failure. #testing

So from this lively debate what can we conclude?

Some people think that you do not need any product/domain knowledge to carry out exploratory testing, since one of the principles of ET is that you learn about the system as you test. Other people say that without knowledge of the product you would just be doing random testing, since your expectations of how it should work are what guide your testing.

My own personal view is very much in the middle. It has been known for me to say that I can test any product without any prior knowledge (domain or otherwise), but the important word missing there is ‘effective’. How effective is my testing without domain knowledge? Does it suddenly become hit and miss and, as stated by @Radionotme, more random? I am currently working on a product which is very niche; without understanding how certain packets are formed and transmitted, you could spend a lot of time testing unnecessary stuff. (There is a counter to this, that no testing is unnecessary in that it exercises the system in non-standard ways; true, but doing too much of this soon makes it less effective.)

So, to conclude, I do not think there is an obvious answer to this. In some cases I feel domain/product knowledge could be essential to make the testing efficient. That does not mean a competent tester could not learn the domain knowledge very quickly and start to be effective and efficient at testing the product. However, it needs to be recognised that when someone joins a team without domain knowledge, there will be some ramp-up time for them to become familiar with the domain. In my opinion this is where exploratory testing comes into its own: as an approach for getting on board with a system and learning about it, it is the most effective way, especially if you can afford to do paired exploratory testing.

Domain/product knowledge is not essential to do effective testing but it can certainly help.

Sunday, 29 May 2011

A Competent Tester

You can teach a student a lesson for a day; but if you can teach him to learn by creating curiosity, he will continue the learning process as long as he lives. ~Clay P. Bedford

I started writing this blog article in draft about a month ago to have a little rant about how certification does not make you a competent tester based upon my experiences and frustrations whilst trying to recruit testers. I was prompted by a post by Rob Lambert to revisit the article and try to complete it.

Recently I have been trying (and I emphasise the word trying; it has been really trying and challenging to find the right people) to recruit testers to work with me on some quite technical projects. None of the projects have any real UI, nor are they web-based applications; this is real hardcore technical testing at a binary level. I became very frustrated and even got to the stage where I felt my standards were too high.

A lot of the CVs and candidates interviewed stated that they had ISEB or ISTQB certification, and from that I assumed they would have a basic grasp of boundary and edge cases. How wrong was I! To make it worse, when I asked for their views on any of the current new approaches and techniques in testing, even prompting with “what do you think of context-driven testing?”, all I got back was a blank look. I asked what articles or books they had read recently about software testing, or any book they could relate to software testing, and again all I got was blank looks and shrugs of shoulders.
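For anyone unfamiliar, boundary-value analysis is exactly the sort of basics I was probing for: test at, just below, and just above each boundary of a valid range. Here is a minimal sketch in Python; the `validate_age` function and its 18 to 65 range are hypothetical, purely for illustration:

```python
# Hypothetical system under test: accepts ages 18 to 65 inclusive.
def validate_age(age: int) -> bool:
    return 18 <= age <= 65

# Boundary-value analysis: exercise each boundary and its neighbours,
# rather than picking arbitrary mid-range values.
boundary_cases = {
    17: False,  # just below lower boundary
    18: True,   # lower boundary itself
    19: True,   # just above lower boundary
    64: True,   # just below upper boundary
    65: True,   # upper boundary itself
    66: False,  # just above upper boundary
}

for age, expected in boundary_cases.items():
    actual = validate_age(age)
    assert actual == expected, f"age={age}: expected {expected}, got {actual}"
print("all boundary cases pass")
```

A certified tester who cannot sketch something like this for a simple numeric range is missing the grounding the certificate is supposed to guarantee.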

I am not against certification in any shape or form, and if people make money out of it, that is not the issue either. The issue I have is that in some (most) cases these schemes are sold on the basis that, once you have completed them, you are a ‘skilled’ tester who knows everything there is to know about testing.


Rob Lambert, in his article, explains in great depth what makes testers ‘skilled’, and I have to agree with him. Those who read my blog know I have a major interest in the social sciences and psychology and how they can relate to testing, and more importantly how they can help make you a better tester.

So if you want to become competent at testing, you have to read more, interact more with the testing community, and become self-learning. My ethos is to keep on learning and never stop doing so.

Taking any of the certification courses can be, for someone new to the world of testing, a good STARTING point, a grounding in SOME of the techniques and skills. These courses will NOT teach you how testing fits into Agile, or about exploratory testing. Nor will they teach you how to test or make you think ‘outside the box’. Only getting involved within the testing community will do that. A good start is the Software Testing Club: read some of the articles and blogs there and subscribe to the excellent RSS feed. I am not being paid by the Software Testing Club to say this; I am promoting it because, again, it is a good starting point for those who wish to learn more about testing and want to improve so they can become competent testers.

I will finish with a quote about learning.

It's what you learn after you know it all that counts. ~Attributed to Harry S Truman