Challenges and complex problems often appear when you work on a significant product with a large-scale testing team.
The crux of the matter lies in maintaining quality standards, disseminating knowledge, and broadening the team's expertise.
Allow me to examine the issues more closely and offer some possible solutions.
Before we go any further, I should note that these strategies are drawn largely from my personal management experience. Rest assured, employing them will benefit everyone on the team, regardless of their level or the team's size.
Suggested reading => What are the steps to build a successful QA team?
Article Overview:
- 5 Main Challenges with Large QC Teams and Their Solutions
- Problem #1 with Larger Teams: Maintaining Consistently Good Quality
- Problem #2 with Larger Teams: Concentrated Knowledge and Expertise
- Problem #3 with Larger Teams: Making Work Exciting and Engaging
- Problem #4 with Larger Teams: Task Distribution/Assignment
- Problem #5 with Larger Teams: Recognition and Motivation
- Conclusion
5 Main Challenges with Large QC Teams and Their Solutions
Problem #1 with Larger Teams: Maintaining Consistently Good Quality
Problem: Many of us have struggled to ensure consistent quality as the team grows, compared with when it was more compact.
Reason: This discrepancy arises because everyone has different abilities, expertise in distinct domains, special talents, and their own approach to testing, not to mention varying levels of experience.
Such disparities profoundly affect the situation because instructing people on how to test or convincing them to adopt a specific testing method is challenging. Testing depends heavily on individual autonomy to act based on personal judgment.
Attempting to restrict team members' thought processes and steer them in a different direction is not always successful and should be reserved for critical situations only. To tackle this, mentors or team leads should invest dedicated effort into building a strong rapport with individual testers.
Returning to the original dilemma: when presented with a particular feature to test, not every team member will detect all the bugs. The quantity, consistency, and timing of bug discovery will differ.
Instances of possible outcomes in such a scenario include:
- Discovering faults only in recently released builds
- Overlooking a few realistic test cases or scenarios
- Focusing too much on edge cases while neglecting routine ones
- Insecurity amongst team members regarding their own performance
- Multiple test cycles to regain lost confidence
Problem #2 with Larger Teams: Concentrated Knowledge and Expertise
Problem: Another issue that leaders often confront is when only a few team members possess specific expertise or knowledge about a module or a type of testing.
For Instance: Consider a team with members John, Johny, and Janardan where John is adept at Security Testing, Johny has an in-depth understanding of the Workflow module, while Janardan is crucial for Performance Testing and the Payment module.
While having experts for individual modules or types of testing is beneficial for managers in the short term – as they can delegate the respective tasks to these experts, assuring quality and more efficient results – it becomes a challenging dependency in the long term. If an essential resource becomes unavailable, it can disrupt the release process. The unavailability could be due to unexpected leaves, illness, internal transfers, or even resignations.
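To make this dependency risk concrete, here is a minimal, hypothetical Python sketch that flags areas known by only one tester (a "bus factor" of one). The names and areas come from the article's example; the function itself is my own illustration, not a tool the author describes:

```python
# Hypothetical sketch: flag single points of failure in a QA team's
# expertise map. Names and areas are illustrative, from the article's example.

def knowledge_risks(expertise):
    """Return areas covered by only one tester (bus factor of 1)."""
    coverage = {}
    for tester, areas in expertise.items():
        for area in areas:
            coverage.setdefault(area, []).append(tester)
    # Keep only the areas where a single tester holds all the knowledge.
    return {area: testers[0]
            for area, testers in coverage.items()
            if len(testers) == 1}

expertise = {
    "John": ["Security Testing"],
    "Johny": ["Workflow module"],
    "Janardan": ["Performance Testing", "Payment module"],
}

for area, only_expert in knowledge_risks(expertise).items():
    print(f"Risk: only {only_expert} knows {area}")
```

In this example every area is a single point of failure, which is exactly the long-term dependency problem described above.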
What about exchanging knowledge?
Expanding on the previous situation, wouldn’t Johny gain from knowing which module Janardan is working on? Wouldn’t John’s knowledge about the Workflow module assist Janardan? Wouldn’t being knowledgeable about the entire product help individual team members understand the impact their tasks have on the others?
I’m confident that you would concur.
Solution: Having discussed the problems, now let me share with you how my team operates to stay ahead of these and many other such challenges.
Our approach is guided by two principles that might already be familiar:
- People over processes.
- Collaboration over documentation.
Our workflow follows these steps:
1) Gathering Requirements: A narrative, say XYZ, is ready for discussion with the Business Analyst.
2) Discussion on User Requirements: If the latest requirement is intricate, it is first addressed at the team lead level in a meeting that includes the Business Analyst, Development Lead, and Test Lead. The objective of this meeting is to clear any existing uncertainties or gaps in the narrative before the group note exchange (the requirement discussion with the team) takes place. This saves the team’s time.
3) Group Consultation: A meeting we fondly call “Grooming” or “Hole Punching” is led by the Business Analyst and involves the rest of the team, who go over all the existing requirements. This session provides about 90% transparency to the tester responsible for the narrative (hereafter the “Narrative Proprietor”).
Here, transparency means that the tester is fairly sure about all the test narratives and situations. A roughly 10% grey area remains for unknown scenarios.
4) Brainstorming Sessions*: After grooming, the development and testing groups reconvene for a technical discussion on the requirement. This discussion touches all aspects, such as the impact on existing features, the need for data patches, hidden scenarios, and effort estimation.
The important factor here is that more than one person from the testing group takes part in this to add to the Narrative Proprietor’s lists of scenarios. This also helps expand their knowledge base. Hence, multiple testing minds come together and provide the solution instead of just relying on one person.
As per our previous instance, even though Johny is the Narrative Proprietor, he is aided by Janardan and John’s insights and problem-solving abilities.
After consultation, the Narrative Proprietor adds all the discussed scenarios to the current list from the grooming session. Ideally, this meeting covers almost all aspects, except a few random errors that can be addressed only by efficient exploratory testing.
5) Creation of Test Scenarios: The Narrative Proprietor begins formulating test scenarios based on the requirement and scenarios gathered from the previous steps into test cases, while the coder initiates coding the new requirements. Upon the completion of the test script creation, the Narrative Proprietor shares them with the coder for review and reference.
6) Testing with Test Scenarios and Exploratory Testing: Upon completion by the coder, the story is handed to the Narrative Proprietor for testing against the written test scenarios, supplemented by exploratory testing. Once the desired quality level has been attained (based on test coverage, the number of outstanding issues, and the Narrative Proprietor’s confidence), the narrative is labelled “QC Done.”
7) Overlapping Session*: After QC is complete, the Narrative Proprietor gives a presentation on the new feature, explaining its functionality and the testing that has been conducted. Unlike the brainstorming session, this “Overlapping Session” is mandatory for all testers.
At this point, everyone is generally familiar with what is being released. They can also consider any impacts that it may have on their own narrative’s testing and vice versa. They can also address questions to the Narrative Proprietor.
8) Overlapping Testing Round*: We follow this with an important, though optional, step: the “Overlapping Testing Round”, conducted by somebody other than the Narrative Proprietor, usually within an hour or less.
The overlapping tester concentrates solely on extreme negative scenarios, under the assumption that cases that are more mainstream were already tested. This provides an added layer of assurance before the release.
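The eight steps above can be sketched as a simple ordered pipeline. This is purely a hypothetical illustration: the stage names paraphrase the article, and the `next_stage` helper is my own invention rather than part of the author's workflow:

```python
# Hypothetical sketch of the eight-step story workflow described above,
# modeled as an ordered list of (stage, participants) pairs. Stage names
# and roles paraphrase the article; nothing here is a real tool.

WORKFLOW = [
    ("Gathering Requirements", {"Business Analyst"}),
    ("Lead-Level Discussion", {"Business Analyst", "Dev Lead", "Test Lead"}),
    ("Grooming", {"Business Analyst", "Whole Team"}),
    ("Brainstorming Session", {"Narrative Proprietor", "Other Testers", "Developers"}),
    ("Test Scenario Creation", {"Narrative Proprietor"}),
    ("Testing + Exploratory", {"Narrative Proprietor"}),
    ("Overlapping Session", {"Narrative Proprietor", "All Testers"}),
    ("Overlapping Testing Round", {"Any Tester Except Proprietor"}),
]

def next_stage(current):
    """Return the stage that follows `current`, or None at the end."""
    names = [name for name, _ in WORKFLOW]
    i = names.index(current)
    return names[i + 1] if i + 1 < len(names) else None

print(next_stage("Testing + Exploratory"))  # -> Overlapping Session
```

The point of writing it down this way is that every narrative passes through the same gates, so knowledge sharing is built into the process rather than left to chance.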
Maybe your team has well-established practices that cover most, if not all, of these steps. In our case, every step matters, though the real distinguishing factors compared to other teams lie in the three steps I have marked with an asterisk (*), as they provide significant benefits.
Let me reiterate them, because these steps yield large returns on investment that are hard to quantify here:
- Brainstorming Session
- Overlapping Session
- Overlapping Testing Round
These steps simplify my work as a supervisor to ensure maximum possible quality and efficient knowledge transfer amongst team members.
However, these methods benefit my team even more than they benefit me. My team of 25+ members consistently gives excellent feedback, telling us that these methods have greatly helped them and that we should keep running them.
Problem #3 with Larger Teams: Making Work Exciting and Engaging
Problem: Does it seem like all I do is dish out work without considering the fun factor or personal growth? As a good leader, that would be unfair. If you keep feeding work to your team without thought, they will eventually reach a point of exhaustion, and boredom and frustration may set in.
Solution: Our work environment is extremely friendly, not just within our own team but throughout the entire company. When it comes down to my own team, our relationship is more like that of friends than a traditional reporting relationship.
Here are some of the best practices we follow:
Once every week, all the testers on my projects gather for a Q&A session that lasts a couple of hours.
This session includes:
- Anyone can ask any question, not just work-related ones, and others can offer their answers.
- Sharing ideas, suggesting process improvements, and talking about personal achievements are freely encouraged. Any relevant topic is welcome.
- In a cyclical fashion, each participant shares something new within the testing sphere that others may not know about. This greatly benefits our collective understanding.
Does this sound too much like work? Well, we try to make it lighter by taking photos, playing new games, and most importantly, playing MAFIA :). We regularly share these details on our Trello board (which is another story for another day) or on our Slack channel.
Problem #4 with Larger Teams: Task Distribution/Assignment
Problem: Task distribution among team members can also be challenging for larger teams. If you simply assign tasks based on your own preference, you may encounter the following situations:
- An individual keeps receiving tasks of the same complexity, stunting their personal growth.
- An individual keeps testing the same modules, leading to potential frustration over time.
- You normally assign certain modules to the experts to ensure the best results, which aggravates the knowledge-centralization issue discussed earlier.
- Your team may feel like robots, merely obeying orders, hence leading to a loss of value and self-worth.
Solution: For this scenario, I have introduced a process of auctioning. Yes, during the release planning, we sit as a team and make use of a Release Dashboard which contains all user narratives. Each team member gets the opportunity to bid for the work they would prefer to handle.
By doing this, we automatically address the above-mentioned problems, as the team decides narrative ownership rather than being solely directive-led. I rarely override bids, and only when it is necessary to ensure quality and meet release deadlines. And of course, I explain the reasons behind any such decision.
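A minimal sketch of how such an auction could be modeled, assuming a first-bid-wins rule and a fallback owner for unclaimed narratives. Both rules are my own assumptions for illustration, not the author's actual process:

```python
# Hypothetical sketch of the release "auction": testers bid on the
# narratives they want; the lead only steps in for unclaimed work.
# Names, the first-bid-wins rule, and the fallback are all illustrative.

def auction(narratives, bids, fallback_owner):
    """Assign each narrative to its first bidder, else to fallback_owner."""
    assignments = {}
    for narrative in narratives:
        bidders = [t for t, wanted in bids.items() if narrative in wanted]
        assignments[narrative] = bidders[0] if bidders else fallback_owner
    return assignments

narratives = ["Payment flow", "Workflow revamp", "Security audit"]
bids = {
    "Johny": ["Payment flow"],       # wants to learn a new module
    "Janardan": ["Workflow revamp"],
    "John": [],                      # no bid this release
}

print(auction(narratives, bids, fallback_owner="John"))
```

In a real team, a conflict between two bidders would be resolved by discussion rather than a hard-coded rule; the sketch only shows why self-selected ownership also breaks up module monopolies.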
Problem #5 with Larger Teams: Recognition and Motivation
Problem: Team members can be categorized as rockstars, performers, or underperformers.
The real issue comes when you have a lot of rockstars and performers but no underperformers. In such cases, performers tend to appear as underperformers as compared to the rockstars.
Solution: Initially, understand and accept the fact that not every individual can be a rockstar. Everyone has varying talents, learning speed, and work style. Being a leader, you need to appreciate this fact and respect it.
You can help performers become rockstars, but this can’t be accomplished single-handedly. You will need to spot the natural leaders within your team, irrespective of their designated roles, and involve them in this responsibility.
Being a manager or a leader, you can’t dedicate enough time to each individual in your team. You need to delegate certain roles to the right leaders once you’re sure they understand your vision.
I’ve employed this approach, and I am proud to say that every leader on my team consistently receives a rating of 4.5 or above in performance reviews.
You can’t personally reach every individual, but it’s important to keep up a continuous dialogue with the leaders: guiding them through, appreciating their endeavours, and sharing your vision. They will look after the rest while you oversee the progress.
Of course, the traditional practices of writing appreciation emails, delivering motivational speeches and publicly applauding the work of your team should still continue.
Final Thoughts
As the saying goes, “Collectively, we are smarter than any individual could be.” Therefore, utilise the strength of being a team and scale new heights.
Also worth a read => Leading a Happier and more Successful Test Team
About the article: This has been authored by Mahesh C, a team member of STH. Currently, Mahesh is employed as a Senior Quality Assurance Manager and has substantial experience in leading testing efforts for multiple complex products and components.
I hope that my experience will be of some help to you and your team. I am fairly confident that it will bring a positive change.
I would be thrilled to know your opinion. I encourage you to leave a comment or give any feedback.