Exit Criteria in Testing:
“Well begun is half done” – This adage applies to software testing just as much as to anything else.
Software testers often kick off a project with great enthusiasm, readily creating test documents such as Test Strategies, Test Plans, and Test Cases.
As soon as we initiate the testing process, we plunge right into it! The adrenaline rush of uncovering fascinating bugs in the initial stages further adds to our enthusiasm. Rectifying these bugs contributes to our sense of achievement.
Nevertheless, as we uncover a myriad of bugs and wrap up the initial round, we tend to go easy during the second round. It’s only human to feel weary from repeatedly testing the same aspects.
Testers frequently find the later testing rounds mundane and their interest wanes in repeatedly testing the same software. By the time we get to the third round or so, a question begins to loom large: “When is it time to halt testing the software?”
I wager you have posed this question to yourself at least once. So, let me ask you this: “When, where, and how should we call it a day in testing?” 🙂
Both theory and the experience of many testers agree that there is no single condition or formula that decides when to stop testing. Several aspects need to be weighed before drawing a conclusion.
In this piece, I aim to share my views on how to gauge when testing activities should wind up. We will delve into real-world examples within a typical testing cycle.
What You Will Learn:
When is Testing Enough?
How do we ascertain when we have tested enough? Can testing ever be truly wrapped up?
To respond to these queries, we need to examine testing activities from the start to the end. Keep in mind that I will break down these activities in layman’s terms, not in intricate technical terminology.
Picture yourself embarking on testing a new project.
Preliminary Activities:
- The requirements are handed over to the testing unit.
- The testing unit commences planning and design.
- The initial test documents are compiled and scrutinized.
Testing Run #1)
- Upon receiving the developed product, the testing team begins conducting tests.
- In this phase of testing, they implement various scenarios to pinpoint bugs and to break the software (Given that the application is fresh, the bug rate is higher during this initial review).
- The issues are fixed by the developers and sent back to the testing team for retesting.
- The testing team retests the bugs and carries out regression testing.
- Once the majority of significant bugs are rectified and the software seems stable, the next version is released by the development team.
Testing Run #2)
- The testing team launches into the second round of testing, performing activities akin to the first round.
- During the second testing round, they stumble upon a few more bugs.
- The developers fix these bugs and hand them back to the testing team for retesting.
- The testing team retests the bugs and carries out regression testing.
This cycle has the potential to persist endlessly. Run 3, Run 4, and so on, until all bugs in the software are unearthed and the software becomes glitch-free.
If we were to depict these activities via a flow chart, it would resemble this:
From the flow chart above, it is clear that testing can proceed until all bugs in the software are uncovered.
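The test–fix–retest cycle described above can be sketched as a simple loop. This is purely an illustration: the bug counts per round below are hypothetical numbers chosen to show the typical pattern of many bugs early and fewer later, not figures from any real project.

```python
# Illustrative sketch of the test -> fix -> retest cycle.
# The per-round bug counts are hypothetical, for demonstration only.

def run_test_cycle(bugs_found_per_round):
    """Simulate successive testing rounds until a round finds no new bugs."""
    total_bugs = 0
    for round_no, new_bugs in enumerate(bugs_found_per_round, start=1):
        total_bugs += new_bugs
        print(f"Run #{round_no}: found {new_bugs} new bugs ({total_bugs} total)")
        if new_bugs == 0:
            # A clean round is encouraging, but it does NOT prove
            # the software is bug-free -- only that this round found nothing.
            break
    return total_bugs

# Typical pattern: many bugs in Run #1, fewer in each later round.
run_test_cycle([35, 12, 4, 1, 0])
```

Note that the loop only terminates when a round finds nothing new; in real projects, as the article argues, you can never be sure that point has truly been reached.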
But here’s the catch: Is it feasible to uncover every single bug in the software? Let’s endeavor to solve this million-dollar question :).
Halting when all bugs are uncovered: Is it feasible?
Most software is intricate and offers near-limitless scope for testing. Detecting every bug is not theoretically impossible, but the time required grows without bound.
Even after identifying numerous glitches in the software, no one can vouch for the software being now entirely bug-free. There would never arise a situation where we can assert with confidence that testing is wrapped up, all bugs have been uncovered, and the software is free of bugs.
Moreover, the objective of testing is not to detect each single bug in the software. The aim of software testing is to ensure that software functions as anticipated by discovering issues or discrepancies between its present behavior and the expected behavior.
Since we can never know how many undiscovered bugs remain in the software, it is not pragmatic to test until every single one is found. We would never know which bug was the last. The fact is, we cannot rely on “all bugs found” as the condition for ending our testing.
In practical terms, testing is a perpetual process, and testing cycles would continue until a decision is taken on when and where to halt. If “halting when all bugs are uncovered” is not the yardstick for stopping testing, then how should we decide when to halt?
Decision to halt testing: Exit criteria
Now, let’s attempt to comprehend – What are the key factors to mull over when winding up testing activities? I’m inclined to believe that the decision to halt testing is influenced predominantly by Time, Budget, and Scope of Testing.
The most widespread technique is to halt when either Time/Budget is expended or when all testing scenarios have been implemented. Nonetheless, this technique compromises the quality of testing and does not inspire sufficient confidence in the software. So, how do we strike a balance?
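One way to strike that balance is to make the exit criteria explicit, combining time, budget, scope, and quality into a single check. The sketch below is a hedged illustration: the 95% coverage threshold and the rule of requiring zero open high-severity bugs are assumptions chosen for the example, not fixed industry rules.

```python
# Hypothetical exit-criteria check combining time, budget, scope, and quality.
# All threshold values are illustrative assumptions, not standards.

def should_stop_testing(days_used, days_budgeted,
                        cost_used, cost_budgeted,
                        scenarios_run, scenarios_total,
                        open_high_severity_bugs):
    """Return True only when quality goals hold, optionally alongside
    exhausted time or budget."""
    time_exhausted = days_used >= days_budgeted
    budget_exhausted = cost_used >= cost_budgeted
    coverage_ok = scenarios_run / scenarios_total >= 0.95  # assumed threshold
    quality_ok = open_high_severity_bugs == 0
    # Stop when coverage and quality goals are both met, or when resources
    # run out AND the quality bar is still satisfied.
    return (coverage_ok and quality_ok) or \
           ((time_exhausted or budget_exhausted) and quality_ok)

# Example: time is fully used, but high-severity bugs remain open,
# so the criteria say testing should continue.
print(should_stop_testing(30, 30, 9000, 10000, 180, 200, 2))
```

The point of writing the criteria down this way is that exhausting time or budget alone never triggers a stop while serious bugs remain open, which is exactly the weakness of the “stop when time runs out” approach.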
Let’s scrutinize an illustration.
Test Scenario:
Let’s say you are testing a software module. You are allocated a definite budget for doing so. The project timeline is a month. The total count of test scenarios is 200. You are the only tester for this module.
Scenario #1)
Week 1: On the initial day, you spot a showstopper/Severity 1 bug that halts all testing for 3 days. For these 3 days, you aren’t able to execute any scenarios. After the Severity 1 bug is fixed on the fourth day, you resume testing.
By the end of the week, you have finished 20 scenarios and opened several crucial high priority bugs.
Week 2: The second week of testing begins and your focus is on bug detection. In addition to carrying out remaining test scenarios, you identify more Severity 1, Severity 2, and Severity 3 bugs. By the end of the week, you have covered 70 scenarios.
Week 3: At the start of the third week, all high-priority bugs have been rectified. In addition to carrying out pending scenarios, you now have to retest all the bugs unearthed during the testing phase. Making steady progress, you manage to cover 120 scenarios and stumble upon additional bugs.
Currently, all high-priority bugs have been identified and reported. Only Severity 3 bugs remain to be discovered.
Week 4: In the fourth week, you need to retest most of the opened bugs and finish the remaining 80 scenarios. By the end of the week, you have finished up to 180 scenarios, with all high and medium priority bugs rectified and retested.
Presenting this information in a tabular format:
| Weeks | Activities Undertaken in Testing | Outcome at the Week’s End |
|---|---|---|
| Week 1 | • Day 1 – Showstopper bug detected. • Testing halted due to Severity 1 bug detected on Day 1. • Blocker bug rectified on Day 4. • Test execution resumed till the end of Week 1. | • High-priority / significant bugs opened. • 20 scenarios completed. |
| Week 2 | • Focus on bug detection. • Execution of remaining test scenarios. • Retesting of fixed bugs. | • More Severity 1, Severity 2, and Severity 3 bugs opened. • A total of 70 scenarios completed. |
| Week 3 | • Retesting of all high-priority bugs. • Execution of remaining test scenarios. • Only Severity 3 bugs remain to be discovered. | • More Severity 1, Severity 2, and Severity 3 bugs opened. • A total of 120 scenarios completed. |
| Week 4 | • Retesting of all high- and medium-priority bugs. • Execution of remaining test scenarios. | • More Severity 3 bugs opened. • A total of 180 scenarios completed. |
Should you halt here?
Considering that you have wholly utilized the testing duration and reported and fixed most of the high-priority bugs, would you feel confident in the software if you were to stop at this point? Unfortunately, no. Here’s why:
- Not all scenarios have been executed.
- Some flows have not been tested at all.
- Every completed scenario has been executed only once.
- There are still bugs present in the software.
- Regression testing remains incomplete.
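Plugging Scenario #1’s numbers into a quick calculation makes the shortfall concrete; the figures below come directly from the article’s 180-of-200 count at the end of Week 4.

```python
# Scenario #1 totals, taken from the week-by-week table above.
scenarios_total = 200
scenarios_executed = 180  # completed by the end of Week 4

coverage = scenarios_executed / scenarios_total
print(f"Scenario coverage: {coverage:.0%}")
print(f"Scenarios never executed: {scenarios_total - scenarios_executed}")
```

So even with time fully spent, 10% of the planned scenarios were never run once, which is why stopping here leaves little confidence in the software.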
Scenario #2)
Week 1: On the first day, you spot a Severity 1 bug that brings all testing activities to a standstill for 3 days. After the bug is rectified on the fourth day, you resume testing.
By the end of the week, you have completed 20 scenarios, just as in Scenario #1.
Week 2: During the second week of testing, you uncover more Severity 1, Severity 2, and Severity 3 bugs, however, your focus is on executing more scenarios to make up for the backlog from the first week. By the week’s end, you have covered 120 scenarios.
Week 3: As the third week kicks off, all reported bugs have been rectified. In addition to executing pending scenarios, you have to retest all the bugs identified in the testing phase now. Continuing at a good pace, you end up covering a sum of 200 scenarios, along with additional bugs.
Now, you can report only Severity 2 and Severity 3 bugs.
Presenting this information in a tabular form:
| Weeks | Activities Undertaken in Testing | Outcome at the Week’s End |
|---|---|---|
| Week 1 | • Testing halted due to Severity 1 bug detected on Day 1. • Blocker bug rectified on Day 4. • Test execution resumed till the end of Week 1. | • 20 scenarios completed. |
| Week 2 | • Focus on executing more scenarios to clear the Week 1 backlog. • More Severity 1, Severity 2, and Severity 3 bugs opened. | • A total of 120 scenarios completed. |
| Week 3 | • Retesting of all reported bugs. • Execution of remaining test scenarios. | • All 200 scenarios completed. • Only Severity 2 and Severity 3 bugs remain to report. |