I recently finished the first phase of a successful QA automation project built on a Docker environment. Below are some of the key steps we took as part of the project:
Setting expectations and choosing the framework:
My first step was to help educate the client on what automation could accomplish for their project, which was important to ensure they were aware of both the benefits and limitations of an automation initiative. Once we had a good understanding of the project scope and underlying development technologies (knowing we would need to further integrate the code with the automation framework in the future), I provided a recommendation for the QA languages and framework. We decided on Selenium WebDriver with Java, Maven to manage the libraries, TestNG as the test framework, and AWS-hosted Docker containers to run the test scripts on Linux with headless browsers.
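As a rough illustration of how Maven ties this stack together, the dependency section of a pom.xml for such a project might look like the following (the version numbers are examples, not necessarily the ones we used):

```xml
<!-- pom.xml fragment: Selenium WebDriver and TestNG managed by Maven -->
<dependencies>
  <dependency>
    <groupId>org.seleniumhq.selenium</groupId>
    <artifactId>selenium-java</artifactId>
    <version>3.141.59</version>
  </dependency>
  <dependency>
    <groupId>org.testng</groupId>
    <artifactId>testng</artifactId>
    <version>6.14.3</version>
    <scope>test</scope>
  </dependency>
</dependencies>
```

With the libraries declared once in the pom, every environment — a developer laptop or a Docker container — resolves the same versions, which is what makes the later container-based execution reproducible.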
Incremental results and benefits:
We proceeded with implementation on the primary system workflow, and once we completed the first stage of smoke testing, the client was able to see the initial results from our simultaneous multi-browser testing. At this point the client clearly understood the potential benefits of automation, seeing that they could increase both coverage and the velocity of software delivery.
Initially, we ran the Smoke Test for two primary scenarios:
- Continuous Integration – when a new user story is finished, an automated process executes the Smoke Test in a Docker container that contains the latest code changes from the development team.
- Partial Regression – when system enhancements are applied to production – ensuring key system functionality still works as expected.
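TestNG supports this kind of split directly through test groups: methods can be tagged (for example with a `smoke` group) and a suite file selects which group a given run executes. The sketch below is illustrative — the group, suite, and class names are hypothetical, not taken from the actual project:

```xml
<!-- smoke-suite.xml: runs only tests tagged with the "smoke" group -->
<!DOCTYPE suite SYSTEM "https://testng.org/testng-1.0.dtd">
<suite name="SmokeSuite">
  <test name="PrimaryWorkflowSmoke">
    <groups>
      <run>
        <include name="smoke"/>
      </run>
    </groups>
    <classes>
      <class name="com.example.qa.PrimaryWorkflowTests"/>
    </classes>
  </test>
</suite>
```

A second suite file including the `regression` group can then drive the partial-regression runs from the same codebase.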
Once the initial Smoke Test deployment was completed, our next step was to decide which processes should be automated based on business priorities: a combination of how critical the functionality was to the system and how long each process took to test manually. We also had to consider whether adequate test cases were already available for us to generate the necessary scripts for the affected modules.
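The prioritization above can be sketched as a simple scoring exercise: weight each candidate process by its criticality and its manual testing time, and defer anything that lacks written test cases. This is a minimal illustration with made-up process names and numbers, not the actual tool or data we used:

```java
import java.util.Comparator;
import java.util.List;

// Minimal sketch: rank automation candidates by criticality and manual-test time.
public class AutomationBacklog {

    // criticality: 1 (low) .. 5 (business critical); manualMinutes: time per manual run
    record Candidate(String process, int criticality, int manualMinutes, boolean hasTestCases) {}

    // Higher score = automate sooner; processes without existing test cases are deferred.
    static int score(Candidate c) {
        return c.hasTestCases() ? c.criticality() * c.manualMinutes() : 0;
    }

    static List<Candidate> prioritize(List<Candidate> candidates) {
        return candidates.stream()
                .sorted(Comparator.comparingInt(AutomationBacklog::score).reversed())
                .toList();
    }

    public static void main(String[] args) {
        List<Candidate> ranked = prioritize(List.of(
                new Candidate("checkout", 5, 45, true),
                new Candidate("reporting export", 2, 30, true),
                new Candidate("user onboarding", 4, 20, false)));
        // "checkout" ranks first (score 225); "user onboarding" is deferred (no test cases).
        ranked.forEach(c -> System.out.println(c.process() + " -> score " + score(c)));
    }
}
```

In practice the weighting was a judgment call made with the client rather than a formula, but the two inputs — criticality and manual effort — were exactly these.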
Once we identified the priorities, we created the test scripts and set up a Docker container to execute the regression test for each new release or bug fix. With automation, we can now run the regression test overnight, allowing us to have the results when we start work the next day.
Why did we use Docker?
The answer is simple: it allows us to run an environment in a container (similar to a lightweight VM) in AWS using the minimum resources required for automation. It is a Linux environment without a UI, so the tests run in headless mode. Since our focus is ensuring the product works as expected, Docker helps us be efficient with both time and cost. (More about this in my next blog entry.)
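To sketch the kind of setup this enables, the Selenium project publishes standalone browser images that expose a Selenium server on port 4444; the commands below are illustrative of the pattern rather than our exact production configuration (in particular, the `selenium.remote.url` property name is a hypothetical example):

```shell
# Start headless Chrome in a container; the Selenium server listens on port 4444.
docker run -d -p 4444:4444 --shm-size=2g selenium/standalone-chrome

# Point the test suite at the container via RemoteWebDriver and run it.
mvn test -Dselenium.remote.url=http://localhost:4444/wd/hub
```

Because the container is disposable, every run starts from a clean, identical browser environment, which removes a whole class of "works on my machine" flakiness.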
We also needed to present results to the client in an easy-to-understand manner, so we implemented a reporting tool called Vigobot that takes the TestNG results and presents them clearly, and gives management a history of executions showing how the system behaves over time.
Project Results and Benefits:
The project is stable and follows what I believe to be sound QA processes: we now get results quickly, we can test a specific change or module, and we can efficiently execute a full regression test. The client's key goals for automation have been achieved — test execution time has decreased, issues are addressed faster, and the client has more confidence that their product is working well.
In summary, we followed the important early steps of an automation project: setting expectations, setting priorities, delivering incremental results and benefits to management, and meeting key business goals.
My next post will describe the Docker test environment and techniques in more detail.
Jason Campos – Senior QA Engineer – GlobalNow IT