3 Worst Defect Reporting Habits and How to Break Them

Defects are serious business and small mistakes can be expensive.

You know what to do when you find a defect. You report it, either in a Defect Tracking/Defect Management tool or in an Excel sheet. The underlying principles are the same for both methods.

Defect Management tools don’t guarantee better reporting. It is good practice that saves the day.

To appreciate the good, we must recognize what’s not.

Here goes:

#1) Laziness

Not taking the time to do the best that you can.

This is the defect tracking process followed in most teams:



As you can see, the test lead reviews the defects before they are sent out of the QA team.

This review includes confirming:

  • Validity – is it really a bug?
  • Completeness – title, steps, data, screenshot, etc.
  • Duplicates – has it already been reported?
  • Reproducibility – can it be reproduced consistently?

I know firsthand that it is impossible for a QA lead to be 100% thorough.

So, the attitude of “I will report the problem the way I want; the QA lead can recheck it and decide whether the defect is valid or complete” is the end of your credibility and your QA team’s.

Did you know that some clients have an SLA for the number of acceptable invalid defects? Once that number is exceeded, they start penalizing the contractor for every invalid defect reported.

Remedy: Do your due diligence and be responsible for your deliverable. Did a defect come back for lack of information, or because it is not a bug? It may not always be the development team’s fault. It is not that they don’t want to own the problems in the application; it could be a genuine QA team mess-up. Don’t let it happen.

#2) Rushing

Let’s do this with an example.

Below is a screenshot of the create-patient screen in OpenEMR, an open-source hospital management system.

This screen lets the user enter the patient’s date of birth through a calendar feature. What it does not do is restrict the entry to choosing from the calendar. In other words, you can choose the DOB as, say, “31-Mar-1983” from the calendar and later change it to “31-Feb-1983”.



Why 31st February? To apply error guessing and try negative data in the field, which is the whole point of testing, isn’t it?

Once done, I click “Create Patient”. Since the date is invalid, I expect the system to display an error and not create the patient. But that doesn’t happen. It creates the patient as below.

Note the Age and Date of birth fields in the below screen:


When testing, you might try this a few times and decide that:

  • It is a bug.
  • It is reproducible.
  • It is not a duplicate (check with your team to confirm).
  • You know the exact description of the problem.
  • You know the exact steps that make it happen.
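The underlying defect is likely a missing strict date validation on the server side. As a minimal sketch in Python (OpenEMR itself is written in PHP, so this is purely illustrative, not its actual code), strict parsing alone is enough to reject an impossible date like 31-Feb:

```python
from datetime import datetime

def is_valid_dob(dob_text: str) -> bool:
    """Strictly parse a DOB string; reject impossible dates like 31-Feb."""
    try:
        dob = datetime.strptime(dob_text, "%d-%b-%Y")
    except ValueError:
        # strptime raises ValueError for "31-Feb-1983": day out of range
        return False
    # A date of birth in the future is also invalid
    return dob <= datetime.now()
```

With this check in place, `is_valid_dob("31-Mar-1983")` passes while `is_valid_dob("31-Feb-1983")` is rejected, so the “Create Patient” action could refuse to proceed.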

Now that you have the raw material, you are good to go.

You start reporting it. Assigning defect severity is a compulsory step and your team might be using something similar to the following table for reference:

1 (Critical):
  • The bug is critical enough to crash the system, cause file corruption, or cause potential data loss.
  • It causes an abnormal return to the operating system (a crash, or a system failure message appears).
  • It causes the application to hang and requires rebooting the system.

2 (High):
  • It causes a loss of vital program functionality with no workaround.

3 (Medium):
  • The bug degrades the quality of the system; however, there is an intelligent workaround for achieving the desired functionality, for example through another screen.
  • The bug prevents other areas of the product from being tested; however, those areas can be tested independently.

4 (Low):
  • There is an insufficient or unclear error message, which has minimal impact on product use.

5 (Cosmetic):
  • There is an insufficient or unclear error message that has no impact on product use.

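Read top-down, a table like this is really a decision ladder: the first matching row wins. A hypothetical sketch in Python (the flag names are my own, not from any real defect tool):

```python
def assign_severity(*, crash_or_data_loss=False, vital_function_affected=False,
                    quality_degraded=False, unclear_error_message=False,
                    impacts_use=True):
    """Walk the severity table top-down and return the first matching level."""
    if crash_or_data_loss:
        return 1  # Critical: crash, corruption, or potential data loss
    if vital_function_affected:
        return 2  # High: vital functionality lost
    if quality_degraded:
        return 3  # Medium: degraded, but a workaround exists
    if unclear_error_message and impacts_use:
        return 4  # Low: unclear message with minimal impact on use
    if unclear_error_message:
        return 5  # Cosmetic: unclear message, no impact on use
    return 4  # default to Low when nothing stronger applies
```

The interesting part is choosing which flags are actually true for a given bug, which is exactly the judgment call discussed next.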
Since this defect is not crashing the system or blocking a vital functionality or preventing the other areas of the application from being tested, we might go with “Low”.

Looks about right?

WRONG. Based on this patient’s data, all the immunizations and other reminders now show as overdue, which may or may not be right. Also, a patient’s age determines whether they see a pediatrician or a general physician, etc.

It affects the dosages of medicines and many other treatment areas that we might not even know of.


So, I am going to go with “High”. I agree it is unlikely that the hospital staff will enter the DOB of a patient wrong. But let that be a factor that impacts the priority of when to fix the issue.

My job as a tester is to make sure that I communicate the seriousness of the problem as best as I can.

Remedy: Don’t be in a hurry to report. Be 100% sure that you understand the impact of the problem from many angles. It’s the best value-add we testers can provide. We are not just saying, “Something is not working.” We are also saying, “Here is what will happen if this continues to not work.” A ton of difference, isn’t it?

#3) Lack of Creativity

Testers have a wonderful opportunity to make suggestions for improving the software.

In your Defect Management tool too, you can submit a defect of type “Enhancement Suggestion.” This is where you can get creative.

Remedy: Think outside the box. If you think a certain feature is missing a “wow” factor and you know how to bring it in, put the idea forward. At worst, it could get rejected, and that’s OK. The important part is trying.

Also, use this superpower with caution. Try not to make comments such as “I hate the color of the banner, please change it.”

Here is a good example of an enhancement suggestion that I came across: replacing the “Email to dealer” option with “Chat with the dealer” on a car dealership site. It was predicted to convert more traffic into sales.

I wish I was that creative! But, maybe we can all work towards it.

Here’s a bonus: a checklist to help you break free of these bad habits:

1. Does my title convey the problem clearly and concisely?
For example: “Create patient is not working” is not a good title. “Create patient fails even when all the input fields contain correct values” is.

2. What is the rate of reproducibility?
In other words, does it always happen? Do I know the exact sequence of steps that will repeat the problem?

3. Is this problem platform, browser or user specific?

4. Are the steps complete, and do they get the reader to the issue?

5. Do I have a screenshot included?

6. Do I need to annotate my screenshot to highlight any particular areas?

7. Is the name of the image file attached descriptive?
Don’t use something like, “Untitled.jpg.” Give it a descriptive name.

8. Did I include the test data?
For example: For a defect in an Admin module that needs authorization credentials, include them. The development team may or may not have access to the QA environment. You don’t want a delay and follow-up on something as basic as that.

9. Can I give any other details to reinforce my defect?
(Example: a reference to the FRD or a conversation with the client, etc)

10. Do I understand how severe the problem is from different perspectives?

11. Do I know the root-cause of the problem? If yes, do I have evidence (maybe log files) and can I include it? Please note that you might not always know this or need to know this. But if you do, it doesn’t hurt to include it.

12. Is the defect report free of grammar, format, spelling and punctuation problems?

13. Do I know of a way to improve the product?

Are you thinking this is time-consuming? Well, once it becomes a habit, it won’t be anymore.

Rooting for better defect reporting routines!

About the author: This article is written by STH team member Swati.

Feel free to post your queries/comments below.
