In my previous blog, I talked about the first phase of a successful QA automation project built on a Docker environment. Now, I would like to describe the advantages we realized by using Docker, and some of the techniques we used to implement it.
Typically, when we begin a new automation project, we spend significant time configuring our environment and fixing issues as they surface. With Docker, we just need to build a file that configures an environment that normally works “issue free”, avoiding the time otherwise spent on configuration and troubleshooting.
Docker is open source software that allows us to package applications in containers. Containers are similar to VMs, but with one very important difference: a container shares the system kernel with others, which means we can run multiple containers (each with its own user space) simultaneously on a single host machine.
1- Easy to use: Docker is very simple to use. We can quickly set up a container just by writing a Dockerfile with all the configuration and packages needed for our purpose.
2- Speed: Since Docker shares the host’s Linux kernel, it is very lightweight and quick to start; in just a few seconds we have a container running and ready for use, compared to a VM that takes a while to boot.
3- We can use Docker Hub: Docker Hub is like an app store where you can find public images that are already configured. For example, we can find images with Selenium Grid, Java 8 and ChromeDriver already set up, so we just need to download one and test whether it does what we need.
4- Saving time: As mentioned above, this is an important advantage. With Docker, we just need the Dockerfile and a single command that takes less than an hour to finish, saving the significant time normally required to configure an environment.
5- Modularity and scalability: As I mentioned previously, Docker allows several containers to run on the same machine. This is useful for developers, since they can have the database running in one container and the backend code in another. For us QA engineers, we can run our Selenium Grid hub and nodes in several containers, allowing us to execute a regression against different browsers. Additionally, if we need to add a new dependency or package it is very easy to do, since Docker is configured in a file that lists everything the image needs to build.
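As a sketch of that hub-and-nodes setup, a docker-compose file along the following lines can start a Selenium Grid hub with a Chrome node and a Firefox node. The service names and image tags here are illustrative assumptions, not our exact configuration:

```yaml
# Hypothetical docker-compose sketch of a Selenium Grid setup (Grid 3 era).
version: "3"
services:
  selenium-hub:
    image: selenium/hub:3.141.59
    ports:
      - "4444:4444"        # WebDriver endpoint / grid console
  chrome:
    image: selenium/node-chrome:3.141.59
    depends_on:
      - selenium-hub
    environment:
      - HUB_HOST=selenium-hub
      - HUB_PORT=4444
  firefox:
    image: selenium/node-firefox:3.141.59
    depends_on:
      - selenium-hub
    environment:
      - HUB_HOST=selenium-hub
      - HUB_PORT=4444
```

Adding a browser or scaling up the number of nodes is then just a matter of editing this file.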
Once we decided to begin using Docker, I had two options: download an image from Docker Hub with the applications and packages I needed, or create the image myself to make sure it had everything I need, which would also let me learn how to configure a container from scratch. I decided to build it myself, so I downloaded Kitematic and then created two files: a Dockerfile and a docker-compose.yml.
The Dockerfile is where we write the instructions to build an image. Some instructions could be:
RUN apt-get update
RUN apt-get install -y openjdk-8-jdk maven (Install some packages the image will have)
In this file we can perform other actions, like exposing a port, setting an environment variable, or running a .sh file that starts the processes we need, depending on your requirements.
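Putting those pieces together, a Dockerfile for this kind of image might look like the following. The base image, script name and port are illustrative assumptions, not our exact file:

```dockerfile
# Hypothetical sketch of a Dockerfile for a QA automation image.
FROM ubuntu:18.04

# Install the JDK and Maven needed to build and run the test suite
RUN apt-get update && \
    apt-get install -y openjdk-8-jdk maven

# Set an environment variable the tools can read
ENV JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64

# Expose the Selenium Grid hub port
EXPOSE 4444

# Copy in a script that starts the processes we need, and run it on start-up
COPY start-grid.sh /opt/start-grid.sh
CMD ["sh", "/opt/start-grid.sh"]
```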
The docker-compose.yml is where we configure networks, volumes and services for our container. For instance, we can set up a volume by adding a mapping under a service’s volumes key.
That mapping shares a folder located on my physical machine with the container, mounting the automation code inside the container under the opt folder. This allows us to run a test using any branch we want, just by changing which branch is checked out in the source folder.
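In the compose file, that volume mapping would look something like this. The service name, image name and host path are illustrative assumptions:

```yaml
# Hypothetical volume mapping in docker-compose.yml.
services:
  automation:
    image: qa-automation          # image built from our Dockerfile
    volumes:
      # Share the automation code on the host with /opt inside the container
      - ./automation-code:/opt/automation-code
```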
Once we had the Dockerfile and docker-compose.yml configured, we just ran a single command to build the image and bring everything up: the packages installed, the volume already set, and the processes started by the .sh file (for example, the Selenium hub and nodes).
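Assuming both files sit in the current directory, that build-and-run step is something like:

```shell
# Build the image defined by the Dockerfile and start the services
# defined in docker-compose.yml, detached in the background.
docker-compose up --build -d

# Check that the containers are up and running
docker-compose ps
```

These commands require a running Docker daemon on the machine.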
So now, thanks to Docker’s portability, this container can be set up on any machine in less than an hour, just by getting the Dockerfile and docker-compose.yml files and running the build command. We can run a regression on a local machine just by getting access to those files and the automation code, or we can run it in AWS. This last option uses continuous integration and is the one we have implemented: we configured a job that executes the regression periodically, with each result reflected in a report produced by an external tool called Vigobot.
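The body of such a periodic job can be sketched as the following sequence; the Maven invocation and the hub property name are illustrative assumptions, not our actual pipeline:

```shell
# Hypothetical CI job body: bring the grid up, run the regression, tear down.
docker-compose up --build -d                            # start hub, nodes and automation container
mvn test -Dselenium.hub=http://localhost:4444/wd/hub    # run the suite against the grid
docker-compose down                                     # clean up so the next run starts fresh
```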
In conclusion, the benefits we realized from using Docker include:
- Reduced time and cost for configuration and setup
- Easier establishment of continuous integration
- Flexibility – Ability to execute regression anytime/anywhere
Jason Campos – Senior QA Engineer – GlobalNow IT