In the previous tutorials of this test automation series, we covered a variety of topics, such as launching the automation process in your organization, choosing the right automation tool, different frameworks, and creating sustainable scripts.
To access every tutorial in this automation testing series, click here.
In this tutorial, we will focus on the execution plan for the scripts.
Learnings From This Article:
Execution Plan
Vital Suggestions for Successful Execution Plans:
An important part of the execution plan is preparing it before the scripts are written. There are several factors you should take into account before development begins.
As an example, let’s take the software under test to be a web application. In this situation, formulate scripts for testing the website’s login function.
Initially, creating 8 to 10 scripts to cover the primary aspects of the login function might seem straightforward. However, you might face multiple challenges when it comes to executing these test cases. Therefore, it is recommended to tackle these queries before starting script development.
These queries include:
Q #1. What is the execution environment?
This is the most important question to address. The execution environment is the environment in which the test cases will be run.
For instance, you might be testing a web application hosted in a local environment. When writing the test cases, keep in mind that the website will eventually be hosted on a live server on the internet. Knowing this in advance lets you structure the scripts so that values such as the website URL, usernames, passwords, and emails can be changed easily when the server switch happens. These variable values can be stored externally, outside the script, in an Excel sheet, a database, or a configuration settings file, so they can be modified without altering the script itself.
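As a minimal sketch of this idea, the snippet below reads environment-specific values from a configuration file with Python's standard `configparser`, so a server switch only requires editing the file, never the scripts. The file name `test_config.ini`, the section name, and the keys are all hypothetical examples, not part of any real tool.

```python
# Sketch: keep environment-specific values outside the script so a
# server switch means editing a config file, not the test code.
import configparser

# Inline sample standing in for a hypothetical test_config.ini file.
CONFIG_TEXT = """
[environment]
base_url = http://localhost:8080
username = test_user
password = secret
"""

def load_settings(path=None):
    """Load execution settings from a file, or from the inline sample."""
    parser = configparser.ConfigParser()
    if path:
        parser.read(path)               # real suite: parser.read("test_config.ini")
    else:
        parser.read_string(CONFIG_TEXT)
    return dict(parser["environment"])

settings = load_settings()
# The login script would use settings["base_url"] instead of a hardcoded URL.
print(settings["base_url"])
```

When the application moves to the live server, only `base_url` (and the credentials) in the file change; every script that reads the settings picks up the new values automatically.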
Also, you should decide where the tests will be performed. Will they be performed on a virtual machine (VM) or a physical machine? If you are using a VM, how many VMs are required and what RAM and processor specifications are needed for each VM? Answering these questions is crucial.
Another key consideration is whether it is feasible to run the test cases without installing the full tool on the VMs. Installing the tool on a VM means additional license costs. Some tools offer an execution engine, a lower-cost edition designed specifically for running test cases. Microsoft Coded UI, for example, offers the “MS Test Agent”, and TestComplete provides “TestExecute” for this purpose.
Utilizing a VM enables you to begin the execution and minimize the VM, making your physical machine available for other activities while the scripts are executing. This optimizes productivity during script execution time.
Another environmental consideration is determining the browsers that the scripts should execute on. If the scripts need to run on Chrome, Firefox, and IE, ensure that the techniques used to identify elements are compatible with all three browsers. For instance, if you are using CSS techniques, confirm that the CSS attributes are supported on all browsers for uninterrupted execution.
Q #2. Who will execute these test cases, and when?
When considering the question of “when,” it is essential to think beyond just executing the test cases on every build.
Currently, many companies are embracing continuous integration practices. Continuous integration involves the automatic integration and release of builds as soon as developers check in their code. In some cases, these builds are deployed and automatically tested using automated scripts.
The objective of this question is to determine whether the automated scripts will be incorporated into the continuous integration process. If they are, it is vital to ensure that the scripts can integrate seamlessly with CI servers such as Jenkins, MS Build, and Hudson.
Some organizations depend on overnight builds. In this scenario, the scripts need to be able to initiate and execute automatically without requiring human intervention.
Other businesses manually release builds, and the automation team or manual testers are responsible for running the scripts. The execution method depends on the application scale and the number of environments. If there are multiple environments, it is suggested to appoint a dedicated resource for script execution.
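Whether the scripts run on a CI server, overnight, or manually, unattended execution usually hinges on one convention: the runner's process exit code tells the build server whether the suite passed. The sketch below shows that idea with a hypothetical `run_suite` helper; the test function name is illustrative.

```python
# Sketch: an unattended runner for nightly builds or CI servers
# (Jenkins, MS Build, Hudson). By convention, exit code 0 means the
# suite passed and a nonzero code marks the build as failed.

def run_suite(test_funcs):
    """Run each test function; return 0 if all pass, 1 otherwise."""
    failures = 0
    for test in test_funcs:
        try:
            test()
        except AssertionError as err:
            failures += 1
            print(f"FAILED {test.__name__}: {err}")
    return 0 if failures == 0 else 1

def test_login_page_loads():
    assert True  # placeholder for a real check against the application

exit_code = run_suite([test_login_page_loads])
# In a real nightly job the script would end with: sys.exit(exit_code)
print(exit_code)
```

A CI server such as Jenkins needs nothing more than this exit code to mark the build green or red and to trigger notifications, which is what "initiate and execute without human intervention" amounts to in practice.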
Q #3. Should test cases stop at the first failure or continue their execution?
Some applications have a flow that heavily depends on prior actions.
For instance, consider testing a payroll module in an ERP application. The tests might create an employee in the database in the first test case, and print a salary check for that employee in the subsequent test case.
Now, consider the following scenario:
If the employee is not created due to an application bug, proceeding with the following test case wouldn’t make sense as there is no employee to receive a salary. In such cases, if the first test case fails, there is no reason to continue with the following test case.
Hence, it is critical to consider the dependencies between test cases. If the test cases depend on each other, they should be arranged accordingly. When a failure happens, the execution should be halted, and the bug should be reported.
However, some applications have a base state. A dashboard page from which all links can be accessed could serve as an example. In this case, the dashboard page becomes the base state for every test case. If any test case fails, the execution shouldn’t be stopped. Instead, the failure should be marked, and the test case should return to the base state before proceeding with the next test case. This method is also known as a recovery scenario. At the end of the execution, a report is produced to show the number of failed and passed test cases. Failed test cases can be debugged and reported as bugs. (The login page scenario discussed at the start of this article also features a base state).
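The two policies described above can be sketched side by side: "stop at first failure" for dependent chains, and a recovery scenario that logs the failure, returns to a base state, and continues. All function names below are illustrative, not from any real tool.

```python
# Sketch of the two execution policies: fail-fast for dependent test
# cases, versus recovering to a base state and continuing.

def run(tests, stop_on_failure, go_to_base_state=None):
    """Run tests in order, applying the chosen failure policy."""
    results = []
    for test in tests:
        try:
            test()
            results.append((test.__name__, "Passed"))
        except AssertionError:
            results.append((test.__name__, "Failed"))
            if stop_on_failure:
                break                  # dependent chain: no point continuing
            if go_to_base_state:
                go_to_base_state()     # recovery scenario: reset, then go on
    return results

def create_employee():
    raise AssertionError("employee was not created")  # simulated application bug

def print_salary_check():
    pass  # would be meaningless without an employee

# Dependent chain: execution halts after the first failure, so the
# salary-check test is never attempted.
chain_results = run([create_employee, print_salary_check], stop_on_failure=True)
print(chain_results)
```

Passing `stop_on_failure=False` together with a `go_to_base_state` callback (for example, one that navigates back to the dashboard page) turns the same runner into the recovery-scenario style, where every test case gets a chance to run.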
It is crucial to pose these questions and know the answers as automation engineers to guarantee successful script development and smooth execution. Though creating a simple automated login script may seem easy, running it without considering these points can be quite challenging.
Reporting
Suggestions For Successful Test Execution Reporting:
Having excellent scripts is crucial, but reporting is equally significant for effectively identifying bugs through automation.
If you are working on manual testing projects, you may also want to check out “How to Smartly Report Test Execution”.
Transparent and comprehensive reports allow us to draw conclusions once the script execution is over.
The reporting formats may vary depending on different tools, but the following are the most crucial aspects that should be incorporated in the report.
1) Batch Report:
If numerous test cases are included in a batch and the batch is executed, the report should incorporate:
- Total number of scripts.
- A list of all test cases in tabular form.
- Test results (Passed or Failed) for each test case.
- Each test case’s duration.
- Name of the machine or environment in which every test case was executed.
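To make the batch-report fields concrete, here is a small sketch that aggregates per-test records into the summary a batch report would show. The record fields and test names are illustrative; real tools generate this report automatically.

```python
# Sketch: building a batch summary (totals, pass/fail counts, duration)
# from per-test-case records like those listed above.
from collections import namedtuple

TestResult = namedtuple("TestResult", "name status duration_sec machine")

results = [
    TestResult("Login_ValidUser", "Passed", 12.4, "VM-01"),
    TestResult("Login_WrongPassword", "Passed", 9.8, "VM-01"),
    TestResult("Login_EmptyFields", "Failed", 15.1, "VM-02"),
]

def batch_summary(results):
    """Aggregate individual results into the batch-report header figures."""
    passed = sum(1 for r in results if r.status == "Passed")
    return {
        "total": len(results),
        "passed": passed,
        "failed": len(results) - passed,
        "total_duration_sec": round(sum(r.duration_sec for r in results), 1),
    }

summary = batch_summary(results)
print(summary)
```

The per-row table of the batch report is simply the `results` list itself: one row per test case with its status, duration, and execution machine.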
2) Detailed Report for a Single Test Case:
When observing the detailed report for a single test case, it should encompass the following details:
- Test case’s name.
- If connected to a test case repository, the ID of the test case.
- Duration of the test case (in minutes and seconds).
- Status of the test case (Passed or Failed).
- Screenshots (only upon failure or at every step).
- The exact line number in the script where the test case failed.
- Any other useful logs written in the script that should be shown in the report.
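Most of the per-test details above can be captured generically. The sketch below records status, duration, the in-script logs, and (via the Python traceback) the exact line where the test failed; the helper and test names are hypothetical.

```python
# Sketch: capturing the single-test-case report details, including the
# failing line number (from the traceback) and logs written during the run.
import sys
import time
import traceback

def run_with_details(test):
    """Run one test and collect the fields a detailed report needs."""
    logs = []
    record = {"name": test.__name__, "status": "Passed", "failed_line": None}
    start = time.time()
    try:
        test(logs.append)
    except AssertionError:
        record["status"] = "Failed"
        # The last traceback frame points at the failing line in the script.
        record["failed_line"] = traceback.extract_tb(sys.exc_info()[2])[-1].lineno
    record["duration_sec"] = round(time.time() - start, 2)
    record["logs"] = logs
    return record

def login_test(log):
    log("Opening the login page")
    log("Submitting credentials")
    assert 1 == 2, "user was not logged in"  # simulated failure

detail = run_with_details(login_test)
print(detail["status"], detail["logs"])
```

Screenshots would be taken inside the `except` branch as well, so they are captured only upon failure.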
Examples:
Example of a Batch Report (Selenium C# Web Driver and MSTest):
Example of a Detailed Report for a Passed Test Case:
Example of a Detailed Report for a Failed Test Case:
Additional points to bear in mind:
- Duration is a noteworthy factor. Test cases taking longer times to complete should be debugged and optimized for quicker performance. Shorter durations are preferred.
- Screenshots should be taken only upon failure to accelerate execution.
- The report should be exportable in formats that can be effortlessly shared, such as PDF or Excel files.
- Custom messages should be written on assertions and checkpoints to provide clear data about what went wrong in case of an assertion failure.
- It is a good habit to log a line of text before each action. This log will be displayed in the report, helping users to promptly identify the action at which the test case failed.
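The last two suggestions combine naturally: log a line before each action, and put a custom message on every assertion that includes the most recent step. A minimal sketch, with illustrative step names and a hardcoded page title standing in for a value read from the application:

```python
# Sketch: log a line before each action, and attach a custom message
# to the assertion so the report says exactly what went wrong and where.
report_log = []

def step(description):
    """Record a line of text before each action; the report displays these."""
    report_log.append(description)

step("Navigating to the login page")
step("Typing the username")
step("Clicking the Login button")

actual_title = "Dashboard"      # value the script would read from the page
expected_title = "Dashboard"
assert actual_title == expected_title, (
    f"Expected page title '{expected_title}' but got '{actual_title}' "
    f"after step: {report_log[-1]}"
)
print(report_log[-1])
```

If the assertion fails, the report shows both the expected and actual values and the last completed step, so no debugging session is needed just to locate the failure.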
Final Thoughts
Smooth execution and efficient reporting play a significant role in test automation.
In this tutorial, I’ve attempted to explain these aspects in a straightforward manner based on my personal experience. We encountered many challenges when we didn’t have an execution strategy in place. In our subsequent projects, we put proper execution planning in place, which made execution considerably smoother.
We are interested in hearing your views as well. Your comments can provide useful insights and help increase our knowledge.