Throughout my software testing career spanning 12 years, I have had the opportunity to work with a variety of development methodologies. I have seen the shift from Waterfall to Agile and have noticed a significant change in the expectations, roles, and skill sets of test engineers.
An increasing number of quality assurance (QA) professionals are reinventing themselves from mere “Bug Hunters” into “Bug Preventers”. They are expanding their skills to include automation, TDD, BDD, and white-box testing alongside traditional black-box testing. They are more solution-driven now, actively collaborating with developers and key business stakeholders.
I strongly advocate for continual learning, and platforms such as softwaretestinghelp.com serve as an excellent medium to share my experiences and learn about new concepts and tools.
So, here I am again, eager to share my insights and experiences as an “Agile Tester” and keen to hear all that you think and feel.
“The Agile Tester – A Shift in Mindset” is a complex topic that cannot be effectively addressed in just one segment. Consequently, I will categorize my article into 3 main areas of knowledge:
- Syncing the Agile tester with the Agile manifesto
- The role of testers in TDD, BDD, and ATDD
- Introducing automation in Agile
Syncing the Agile Tester with the Agile Manifesto
Agile is synonymous with “flexibility” and “rapid movement”.
Agile testing is NOT a standalone testing technique; it signifies a shift in mindset towards delivering testable components.
As we delve deeper into the subject of Agile testing, it is important to understand the genesis and philosophy underpinning Agile.
The old way
Before Agile became the norm, the Waterfall methodology dominated the software industry. While I won’t elaborate on the Waterfall model here, it’s important to understand some practices usually followed by teams when implementing features.
Everything shared here is from my own experience and of course, your opinions may differ.
Here are some things worth noting…
- Developers and QAs functioned as separate teams, often acting as opponents.
- Both developers and QAs worked from the same reference document containing all the required specifications. The developers focused on designing and coding, while QAs wrote their test cases based on the same document. Everything from planning to execution was done independently.
- Test case reviews were solely conducted by QA leads. Sharing test cases with the developers was discouraged to avoid any possible bias that might influence their coding.
- Testing was viewed as the last step in an implementation cycle. Accordingly, QAs usually received the feature at the final stage and were pressured to conclude testing within a strict timeline.
- QAs’ work centered around detecting bugs and defects, and their performance was evaluated on the number of legitimate bugs/defects they managed to find.
- Communication was typically handled through emails, and the Software Testing Life Cycle (STLC) and Defect Life Cycle were diligently followed during execution.
- Automation was perceived as a final step, primarily concentrating on UI automation. Regression test suites were often the preferred choice for automation.
Of course, these practices had their own disadvantages:
- Communication between developers and QAs was usually restricted to discussions about bugs & defects.
- The role of QAs was limited to executing test cases on the final product.
- QAs had minimal opportunities to review the code or engage with developers or key business stakeholders.
- Due to the expansive timeframe and prolonged time to market, the focus often shifted away from delivering a top-quality product. The main goal eventually became simply completing the implementation and moving the code to User Acceptance Testing (UAT).
- QAs were primarily concerned with executing the test cases, completing the task, and moving on to a different project/module. Their attention was not so much on speeding up delivery and maintaining quality, but more on finalizing test case execution (and some amount of automation).
Understanding Agile
Agile’s origins can be traced back to the beginning of 2001 when 17 professionals gathered in Utah, USA, for a retreat filled with skiing, dining, relaxation and meaningful discussions. The outcome of this meeting was the Agile Manifesto.
As a quality assurance professional, it’s important to grasp the essence of the manifesto and adapt our thoughts to align with it:
Let’s realign our approach to software testing with that of the Agile manifesto. In Agile, the teams are multifaceted with everyone contributing to the development of the product or feature.
Hence, I prefer to address the entire team as the “Development Team”, which comprises developers, testers, and business analysts. Hereafter, I will use the terms “Developers” and “Testers” to refer to these respective roles.
#1) Prioritizing working software OVER extensive documentation
In Agile development, the key objective is to deliver potentially shippable software or increments within a short period, thus making time to market crucial. However, this doesn’t signify a compromise on quality. Due to the tighter time to market window, the testing strategy and planning should be more focused on the system’s overall quality.
Testing is an ongoing process, and testers need to establish certain parameters to approve the product’s move to the production stage. Testers must actively partake in developing the “Definition of Ready (DOR)” and “Definition of Done (DOD)”, ensuring they articulate the “Acceptance Criteria of the story.”
The test scenarios and test cases should center around the Definition of Done and Acceptance Criteria. Rather than writing extensive test cases containing infrequently used data, testers should concentrate on creating precise and relevant scenarios. I have been using the following template to jot down my test cases.
The aim is to include only the data in the test scenarios/cases that is necessary and serves a purpose.
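The author’s template itself is not reproduced here; purely as an illustration of the same idea, the sketch below shows a lean, acceptance-criteria-driven test case in Python with pytest. The discount feature, function name, and data are all assumptions, not part of the original article.

```python
import pytest
from decimal import Decimal

# Hypothetical acceptance criterion for an "apply discount code" story:
# a valid code reduces the cart total by the code's percentage; an
# expired code leaves the total unchanged.

def apply_discount(total: Decimal, percent: int, expired: bool) -> Decimal:
    """Stand-in for the feature under test (illustrative only)."""
    if expired or percent <= 0:
        return total
    return total * (Decimal(100 - percent) / Decimal(100))

@pytest.mark.parametrize(
    "total, percent, expired, expected",
    [
        (Decimal("100.00"), 10, False, Decimal("90.00")),   # valid code
        (Decimal("100.00"), 10, True,  Decimal("100.00")),  # expired code
    ],
)
def test_discount_meets_acceptance_criterion(total, percent, expired, expected):
    # Only the data needed to exercise the acceptance criterion is included.
    assert apply_discount(total, percent, expired) == expected
```

Each parametrized row maps directly to one part of the acceptance criterion, so the case stays small and its purpose is obvious during review.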
#2) Favoring customer collaboration OVER contract negotiation
Let’s establish direct communication with our customers concerning our testing strategy and maintain transparency by sharing test cases, data, and results.
Regularly hold interactive discussions with the users and share the test outcomes with them. Ask them if they find the test results satisfactory or if they want specific scenarios to be addressed. Let’s not confine ourselves to simply asking questions or seeking clarifications from the product owner or business to comprehend the functionality and business needs.
The better we understand the feature, the more accurate our test coverage will be.
#3) Being responsive to changes OVER rigidly sticking to a plan
Change is the only constant!
We cannot control change; rather, we must accept that features and requirements will inevitably change, and we must adapt and implement these changes accordingly.
Agile embraces frequent changes in requirements and as testers, we need to ensure our test plans and scenarios remain flexible enough to include new changes.
Conventionally, a test plan is created and followed throughout the project lifecycle. In an Agile environment, the plan needs to be adaptable and responsive to changing requirements. The concentration should be on meeting the Definition of Done and the Acceptance Criteria of the story.
There’s no need to craft a test plan for every story. Instead, we can compile a test plan at the epic level. Since epics are written and worked on in parallel, test plans can be created alongside them. A specific template is optional; the goal is to ensure comprehensive coverage of the quality aspects of the epic.
Use the Program Increment (PI) planning days to devise high-level test scenarios for the story, founded on the Definition of Done and the Acceptance Criteria.
#4) Valuing communication & teamwork OVER processes & tools
While testers are often process-driven (which is perfectly alright), we need to keep in mind that a speedy response or resolution of issues should not be compromised by adhering too strictly to a process.
With a co-located team, issues can be sorted out through direct communication. Daily stand-up meetings provide a great platform to address concerns. While logging a defect is important for tracking, testers should team up with developers to collaboratively address the defect and, if necessary, involve the product owners too. Testers should actively and proactively engage in Test Driven Development (TDD), working closely with developers to share scenarios and identify defects at the unit level.
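As a rough sketch of what that unit-level collaboration can look like, the hypothetical pytest example below shows a tester-supplied scenario table driving a test that is written before the production code exists; the password rule, function name, and scenarios are all assumptions for illustration.

```python
import pytest

# Assumed rule from the story: a strong password has at least 8 characters,
# at least one digit, and at least one letter. The tester contributes the
# scenarios; the developer implements password_is_strong() to make them pass.

def password_is_strong(password: str) -> bool:
    return (
        len(password) >= 8
        and any(ch.isdigit() for ch in password)
        and any(ch.isalpha() for ch in password)
    )

@pytest.mark.parametrize(
    "password, expected",
    [
        ("abc12345", True),    # meets all three rules
        ("short1", False),     # too short
        ("12345678", False),   # no letter
        ("abcdefgh", False),   # no digit
    ],
)
def test_password_strength_scenarios(password, expected):
    assert password_is_strong(password) == expected
```

Here the tester’s contribution is the scenario table and the developer’s is the implementation, which is written only after the tests express the expected behaviour.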
Wrapping up
“Testing in Agile” is not a unique testing technique; it’s a mental shift that can’t be made overnight. It needs knowledge, skill, and proper guidance.
In the next segment of this series, I will cover the role of testers in Test-driven Development (TDD), Behaviour Driven Development (BDD), and Acceptance Test Driven Development (ATDD) within Agile.
About the author: This piece was penned by STH author Shilpa, who has more than a decade of experience in the software testing industry, working in fields such as Internet advertising, Investment Banking, and Telecom.
Keep an eye out for more to come.