Monday 27 September 2010

Hybrid Testing

There has been a lot of talk within the testing community about the scripted vs. non-scripted approach to testing. I have read and heard people aligned to each school of thought trying to debunk the other school's approach, which can be very confusing to those who work in the profession of software testing. Rather than add another argument to either side, I thought I would blog about my experiences of using both approaches in my day-to-day testing job.

When I first started in testing I worked for several companies which had adopted the PRINCE2 methodology and loosely followed a V-model process. This meant that requirements and specifications were gathered before any development work started. As a tester I would use these documents to do some gap analysis from a testing perspective, looking for places where requirements contradicted each other and design specifications did not meet the requirements. These were very heavyweight documents, and it was a laborious task that engaged me as a tester only up to a point. From these documents I would start to create scripted tests and build up a repository of test suites.

Once the software started to be developed and I could gain access to certain features, my engagement as a tester increased. I would run through my scripted tests and find that a large number of them needed altering, because I had made the wrong assumptions or because the requirements and specifications did not match what was delivered. As I ‘explored’ the software I found more and more test ideas, which would become test cases. (The discussions I had with senior management on why the number of test cases kept increasing are another story altogether.) I would spend a large amount of time adding detailed steps to the test scripts and then, when we had another drop of the software, run them again as a regression pack. I tried to automate the tests, which worked for some easy parts and not for others. What I did not realise at the time was that I was carrying out exploratory testing. The time once I had the software in my hands was the most engaging time as a tester; it was what made me feel I had done a good job by the end of the day.

So let us jump forward to today: we have TDD, agile and a multitude of different approaches to software development. It is all about being flexible, developing the software the customer needs quickly and efficiently, and being able to adapt quickly when those needs change. As testers we get to see and explore the software a lot sooner.

A lot has changed from a tester's perspective: we are now engaged more in the whole process, and we are expected to have some knowledge of coding (IMO not always necessary, but a good tool to have). We get to see the software a lot sooner and are able to exercise and explore it, engaging our testing minds with what the software should, could or may do. However, have things changed that radically?

What has made me think about writing this blog has been the debates that have been going on about scripted vs. non-scripted testing. I am currently working on a new project with many dependencies on internal components and external third parties, all of which are working to different timescales. Some of the components can be simulated, while others cannot due to time constraints and other technical problems. We have some pretty good requirement documents and some design specifications. What we do not have at the moment is fully working end-to-end software. So I am back creating scripted test cases to meet the requirements, finding discrepancies in the documents and asking questions. The difference is that now I do not write my scripts out step by step. Instead I create pointers on how to test the feature, I note test ideas that could be interesting to look at when the software arrives, and I note any dependencies that need to be in place before that feature can be tested. I create a story about testing the feature rather than a step-by-step set of instructions; it is more a testing checklist than a test script. With this I am combining the scripted and the non-scripted approach, and I am sure a lot of readers will read this and think that they are doing the same.
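As an illustration, here is a minimal sketch of what one of these checklists could look like if captured as pytest stubs rather than stepped-out scripts. The feature, requirement number and dependency are all hypothetical, invented for this example; it is one possible shape, not a prescription:

```python
# A checklist-style "test story" as pytest stubs: pointers and test ideas,
# not step-by-step scripts. All names here are hypothetical.
import pytest

# Dependency note (hypothetical): an external component we cannot simulate yet.
THIRD_PARTY_SIM_AVAILABLE = False


def test_export_rejects_invalid_dates():
    """Test idea: what happens with dates outside the documented range?

    A pointer, not a script: try boundaries, leap days, mixed formats.
    """
    pytest.skip("pointer only - explore once the feature is delivered")


def test_export_output_matches_requirement_r12():
    """A fixed requirement: the output format is stated exactly in the spec,
    so this one is worth scripting or automating in full later."""
    pytest.skip("awaiting an end-to-end build")


@pytest.mark.skipif(not THIRD_PARTY_SIM_AVAILABLE,
                    reason="blocked: third-party component cannot be simulated yet")
def test_export_against_live_third_party():
    """Dependency noted: cannot be run until the external component arrives."""
    pytest.skip("blocked on external dependency")
```

Running pytest over a file like this gives a living checklist: every idea is visible, counted and skipped until the software turns up to test.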

The people who talk about exploratory testing have never, to my recollection, said that there is no need for scripted tests. Some of the requirements I have are fixed: they will not change, and the results should be as stated, so those requirements I can script, or automate if possible. It does not mean you are doing anything wrong, nor does it mean that you are not following an exploratory approach. Exploratory testing is just that: an approach. It is not a method, not a 'do this and nothing else'. It is a tool to use to enable you to test, and hopefully to engage you in testing rather than leaving you as a checking robot. If you still create detailed step-by-step scripts, there is nothing wrong with doing that; I still do when required.

Exploratory testing can be used without the software: you can look at available documents and explore them for test ideas and new, creative ways to test what the documents are stating; you can question, you can analyse, you can make suggestions and improvements, you can use your brain.

Wednesday 8 September 2010

We test the system and gather evidence

I have noticed that it has been a while since I wrote a blog post, due to family commitments and vacation time. I had an idea to blog about the importance of gathering evidence when testing, especially when using an exploratory testing approach. I decided to take an example of why it is important from an internal workshop that I run on exploratory testing.

So are you sitting comfortably?

Here is the story of Timmy the Tester and Elisa the Explorer

Timmy the Tester

Timmy the Tester has been given a new build to test and decides he is going to test it using the exploratory approach. Timmy writes a mission statement: to test that function x has been implemented.

He installs the release and starts to enter different values and press buttons around the new feature at random. He does this for about half an hour, and then all of a sudden the application crashes.

Timmy takes a screenshot and a system dump of the crash and goes to talk to the developer. The first question the developer asks is

“Have you tried to reproduce the problem?”

At this point Timmy says no and goes back to try to reproduce the problem.

Two days later Timmy has been unable to reproduce the problem and now thinks it could have been one of those strange things.

Three months later the application is live on the customer site. Within 30 minutes there are a large number of calls to support stating that the application is crashing. The problem gets passed back to Timmy, who notices that it appears to be the same crash he saw when carrying out ‘exploratory’ testing….

Elisa the Explorer

Elisa has been given a new build to test and decides she is going to test it using the exploratory approach. Elisa creates a mission statement stating that she is going to test the new function.

Elisa installs the new application and starts to enter different values around the new feature. As she does this, Elisa has a second computer on which she makes notes and takes screenshots at relevant points, so that each step she has carried out is clearly recorded. At certain points Elisa finds behaviour of the system which does not seem correct, so she starts another mission statement to look into that behaviour. Elisa then examines the strange behaviour in more detail, noting the steps she is carrying out as she goes. All of a sudden, when she presses a button, the application crashes.

Elisa makes some notes, takes a screenshot and a system dump of the crash.

Elisa then resets the application back to a clean system and repeats the last set of steps she noted down. The crash happens again.

Elisa then goes to see the developer and explains that she has managed to reproduce the problem more than once, and here are the steps.

Elisa sits with the developer while they go through the steps together and the developer sees the crash.

Two days later Elisa is given a fix for the crash. She now has an automated test for the crash, which she runs straight away. The test passes, and Elisa continues with the rest of her testing.
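The story does not say how Elisa automated her check, so the sketch below is one hedged possibility: a small regression test that replays the exact steps from her session notes and asserts that the application survives. The AppClient class and its methods are hypothetical stand-ins for whatever interface the real application exposes.

```python
# A hedged sketch of Elisa's regression test: replay the recorded steps
# and assert the application does not crash. AppClient and its methods
# are hypothetical stand-ins, not a real library.
class AppClient:
    """Placeholder object driving the application under test."""

    def reset_to_clean_state(self):
        pass  # e.g. reinstall or wipe config, as Elisa did by hand

    def enter_value(self, field, value):
        pass  # stand-in for typing a value into the UI

    def press(self, button):
        pass  # stand-in for pressing a button

    def is_running(self):
        return True  # would really poll the process or window


def test_recorded_crash_steps_no_longer_crash():
    """Replay the steps from the session notes that reproduced the crash."""
    app = AppClient()
    app.reset_to_clean_state()
    app.enter_value("quantity", "-1")  # values taken from the notes (illustrative)
    app.press("Recalculate")
    assert app.is_running(), "application crashed on the recorded steps"
```

The point is less the mechanics than the habit: because Elisa recorded her steps, turning them into a repeatable check was cheap.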

It may seem like common sense, but I have seen more cases of Timmy than of Elisa from people who have said they are using exploratory testing. It is extremely important to record everything, and to remember that exploratory testing does not remove any of the principles of testing:

“All tests are repeatable.”
“All problems are reproducible.”

There are many ways we can gather evidence of our testing sessions, and there is a large number of tools available to the exploratory tester. In much the same way that the first humans to explore the North Pole took tools with them to help and support their efforts, exploratory testers can take tools with them when exploring the system under test.
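To give a flavour of how lightweight such a tool can be, here is a minimal sketch of a session-notes helper of the kind Elisa might use, assuming Python with the Pillow library for screen capture; the directory layout and function name are my own invention, not a reference to any existing tool.

```python
# A minimal session-notes helper: timestamped notes plus optional
# screenshots, so every step of a session leaves a trail. Assumes the
# Pillow library (pip install Pillow); all names are hypothetical.
import datetime
import os

from PIL import ImageGrab  # screen capture (works on Windows and macOS)

SESSION_DIR = "session-notes"
os.makedirs(SESSION_DIR, exist_ok=True)


def note(text, screenshot=False):
    """Append a timestamped note to the session log, optionally with a screenshot."""
    stamp = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
    with open(os.path.join(SESSION_DIR, "log.txt"), "a") as log:
        log.write(f"{stamp}  {text}\n")
    if screenshot:
        ImageGrab.grab().save(os.path.join(SESSION_DIR, f"{stamp}.png"))


# Usage during a session like Elisa's:
# note("Mission 2: investigate odd totals on the summary page")
# note("Pressed 'Recalculate' with an empty basket", screenshot=True)
```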

Maybe I should look at some of these tools and write a blog post about them – or, even better, people who read this blog might be able to suggest some good tools that they have had experience of using.