Exploratory Testing vs Scripted Testing: Who Wins?

Real-world benefits of exploratory testing:

Traditionally, software testing has been a very rigid activity, but in recent years there’s been a shift away from script-based testing. Exploratory testing, which is more context-driven, has come to the fore. That’s because it gives testers more freedom to exploit their skills and knowledge, and it makes them responsible for optimizing the value of their own work. 

Exploratory Testing vs Scripted Testing

Not everyone is sold on the value of exploratory testing. The perceived lack of formality and the emphasis on personal responsibility can set alarm bells ringing. But that concern is largely based on a misinterpretation: exploratory testing isn't about throwing the rules out the window and testing at random. Done well, it is structured, systematic, and highly effective.

Skeptics want concrete proof that it does more than improve testers' morale. That's why we decided to conduct a study pitting context-driven, exploratory testing directly against a script-based testing approach. The results were very interesting, as you're about to find out.

Context-Driven (Exploratory) vs Scripted Testing Teams

Two teams, two approaches:

We started by dividing the testers into two teams of three, with comparable application knowledge across both teams. The same defect severity definitions (major, minor) were established for both teams, and both received the same application build. One team ("scripted") would apply a traditional script-based testing approach; the other ("exploratory") would adopt a context-driven testing approach. Testing was divided into two phases of three days each.

The script-based team identified five business work-flows to test and generated 15 test cases. The test cases were limited in scope, so testers didn’t have any freedom to explore outside the confines of the script.

The exploratory team created two visual mind maps: one identifying test coverage and test charters, the other covering product components/modules. The process produced 24 test charters in total. The charters were high-level and open to contextual interpretation, which extended the scope of each test session.
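To make "high-level charter" concrete, here is a minimal sketch of how such a charter might be captured; the charter name, fields, and values are hypothetical illustrations in the style of session-based test management, not artifacts from the study:

```python
# Hypothetical exploratory test charter; all names and values are
# illustrative, not taken from the study described in this article.
charter = {
    "title": "Explore order checkout with invalid payment data",
    "areas": ["checkout", "payment gateway"],
    "mission": ("Probe how the checkout flow handles declined cards, "
                "expired cards, and malformed card numbers"),
    # The study's sessions ran between 30 and 180 minutes.
    "timebox_minutes": 90,
}

# A charter deliberately states a mission, not steps: the tester decides
# in context which paths to take within the timebox.
print(f"{charter['title']} ({charter['timebox_minutes']} min)")
```

The point of the structure is what it leaves out: there are no scripted steps, only a mission and a timebox, which is what gives the tester room for contextual interpretation.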

Phase 1:

The scripted team completed 6 of its 15 test cases in the three days allotted. They reported 6 major defects in that time.

The exploratory team completed 13 test sessions, each lasting between 30 and 180 minutes. They reported 10 major defects and 5 minor defects.

[Figure: Phase 1 results, scripted vs exploratory testing]

Interestingly, the exploratory team also reported every defect that the scripted team had reported.

Phase 2:

The scripted team completed 9 test cases this time. They reported 10 major defects and 8 minor defects.

The exploratory team completed 18 sessions. They reported 14 major defects and 5 minor defects.

[Figure: Phase 2 results, scripted vs exploratory testing]

In phase 2, the scripted team reported 2 major and 1 minor defect that the exploratory team didn’t find, but the exploratory team reported 3 major and 1 minor defect that the scripted team didn’t report.

This comparison doesn't account for the relative complexity of the workflows each team happened to choose for its test cases and sessions, but we can still draw some interesting conclusions.
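To keep the two phases straight, the reported defect counts can be tallied with a short script; the figures are the ones quoted above, and only the totals are computed:

```python
# Defect counts as reported in the study, per team and phase.
results = {
    "scripted":    {"phase1": {"major": 6,  "minor": 0},
                    "phase2": {"major": 10, "minor": 8}},
    "exploratory": {"phase1": {"major": 10, "minor": 5},
                    "phase2": {"major": 14, "minor": 5}},
}

def totals(team):
    """Sum major and minor defect counts across both phases."""
    phases = results[team].values()
    return {
        "major": sum(p["major"] for p in phases),
        "minor": sum(p["minor"] for p in phases),
    }

for team in results:
    t = totals(team)
    print(f"{team}: {t['major']} major + {t['minor']} minor "
          f"= {t['major'] + t['minor']} defects")
# scripted: 16 major + 8 minor = 24 defects
# exploratory: 24 major + 10 minor = 34 defects
```

Across both phases, the exploratory team reported 34 defects to the scripted team's 24, though as noted, each team found some defects the other missed.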

What does it mean?

It would appear that an exploratory approach, with the responsibility and flexibility it engenders, results in a more effective form of testing. Developing and adapting test charters as the sessions progress, based on what makes sense in context, may make it possible to cover more ground. Script-based testing lacks this freedom, and that can prevent defect discovery.

Sticking rigidly to scripts creates well-worn paths, and it's only by deviating from those paths that we're going to uncover all the defects. As thought leaders in the testing community have often put it: "If you imagine a product as a field of landmines and each landmine is a defect, then it's pretty clear that treading the same path over and over is not the way to find them all."

In the end, neither approach was perfect: each team reported defects that the other did not, even though the exploratory team reported more overall.

Realistically, this may mean that the right approach, if the goal is to come as close as possible to "minimal" defects, is a mixture of the two. Still, many benefits speak in favour of the context-driven approach: it requires less preparation time and less documentation, identifies issues earlier, and challenges testers to use analytical skills and deductive reasoning. Testers gain a deeper, more thorough understanding of the product and truly act as advocates for the end user.


The end result suggests that exploratory testing does lead to more defects being reported before go-live. That means a better product delivered by the team and, ultimately, more satisfied and fulfilled testers: desirable outcomes any way you look at it.

About the Author

Mush Honda is QA Director at KMS Technology, a provider of IT services across the software development lifecycle with offices in Atlanta, GA and Ho Chi Minh City, Vietnam. He was previously a tester at Ernst & Young, Nexidia, Colibrium Partners and Connecture. KMS services include application management, testing, support, professional services and staff augmentation.

Do you agree? Feel free to post your comments and questions below.

