Automating API Testing using Postman

Last September I wrote this blog entry describing how to implement API testing using a combination of Python and Behave.

In this new post, I would like to describe how I also use Postman to test APIs. Postman began as a Google Chrome app and is now a standalone application; we typically use it to quickly exercise APIs and inspect the returned results. Postman is a powerful HTTP client that lets us build complex HTTP requests and validate that the responses return the right information, all from a friendly GUI.

Postman supports all the standard HTTP methods for interacting with endpoints. I am not going to explain all of them; I just want to cover those that are most useful when testing an API:
● GET -> fetch information
● POST -> add new data
● PUT -> replace all the existing data
● PATCH -> update some existing data fields
● DELETE -> delete existing data
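A quick way to see the difference between PUT and PATCH is to model the resource as a plain object. This is just an illustration in JavaScript, not a real request; the sample post data is invented:

```javascript
// A sample resource as it exists on the server before the request:
const post = { userId: 1, id: 101, title: "old title", body: "old body" };

// PUT replaces the whole resource with exactly the payload you send:
const putPayload = { id: 101, title: "new title" };
const afterPut = { ...putPayload };              // userId and body are gone

// PATCH merges only the fields you send into the existing resource:
const patchPayload = { title: "new title" };
const afterPatch = { ...post, ...patchPayload }; // userId and body survive

console.log(afterPut.body);   // undefined
console.log(afterPatch.body); // "old body"
```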

Let me begin by explaining how to create a collection in Postman. Collections allow you to group individual requests together and help organize them into folders.

So, our first step is to create a new collection. We can name it “jsonplaceholder”, and it will be displayed in the left panel. The next step is to create our first request. For this example, we are using the jsonplaceholder endpoints, a free fake API that is open for anyone to experiment with.

To create a new request, we follow these steps:

  1. Right-click on the collection we created and select “Add Request”.
  2. Enter a name like “Consume jsonplaceholder”, then click “Save to jsonplaceholder”.
  3. Click on the request displayed under the collection we have created.
  4. Make sure the “GET” method is selected.
  5. In the “Enter request URL” field, enter the URL of the posts endpoint.
  6. Click the “Send” button.

At this point we have created our first request and received our first response from the endpoint: a list of posts with a status code of 200. We should receive a response similar to the one in this screenshot.

When we validate responses, we should make sure the JSON retrieved is the expected one, with the right format and status code, and check how long it took to receive an answer.
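In Postman these checks are written as JavaScript in the request’s “Tests” tab. The sketch below shows the same assertions in plain Node against a hard-coded sample response, so the data and status code are stand-ins; inside Postman you would assert against the live response instead (for example with `pm.response.to.have.status(200)` and `pm.response.responseTime`):

```javascript
// Stand-ins for what Postman exposes to a test script: the raw body and status code.
const responseBody = JSON.stringify([
  { userId: 1, id: 1, title: "sample post", body: "some text" },
]);
const responseCode = 200;

// The kinds of checks we run on every response:
const posts = JSON.parse(responseBody); // 1. the body parses as valid JSON
console.assert(responseCode === 200, "expected status 200");
console.assert(Array.isArray(posts) && posts.length > 0, "expected a list of posts");
console.assert("id" in posts[0] && "title" in posts[0], "posts need id and title");
```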

Now that we are able to get information from an endpoint, we can attempt to store some data through the API. For this we need to use the “POST” method. We create the request by following these steps:

  1. Right-click on the collection we created and select “Add Request”.
  2. Enter a name like “Add post”, then click “Save to jsonplaceholder”.
  3. Click on the request displayed under the collection we have created.
  4. Click on the method dropdown and select “POST”.
  5. In the “Enter request URL” field, enter the URL of the posts endpoint.
  6. In the “Body” section, enter the following JSON body:
    {
      "userId": 4545,
      "id": 101,
      "title": "This is an example",
      "body": "some information"
    }
  7. Click the “Send” button.

Once the request has been processed, a 201 (Created) status code is displayed in the response, and the endpoint returns a JSON body containing the id assigned to the post we have created in the system.
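The same validation idea applies to the POST response: assert the status code and pull the new id out of the body. Again, this is a plain-JavaScript sketch with a hard-coded sample response rather than a live call:

```javascript
// Stand-ins for the POST response from the endpoint:
const responseBody = JSON.stringify({ id: 101 }); // the endpoint answers with the new id
const responseCode = 201;

console.assert(responseCode === 201, "expected 201 Created");
const created = JSON.parse(responseBody);
console.assert(Number.isInteger(created.id), "response should contain the new post id");
```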

As you can see, testing APIs using Postman is not complicated. Postman also offers some interesting additional features. For example, it allows us to create environments.

What is an environment?

Basically, environments allow us to run requests and collections against different data sets: we can have one environment for production, another for testing and another for development. An important benefit of environments is that we can store variables to be reused across several tests. For example, imagine we need to authenticate against an endpoint before calling any other service. The authentication endpoint returns a token that must be sent with each request in order to get a response; that token can be stored in a variable and passed to each test.
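Conceptually, an environment is just a named set of key/value pairs, and Postman substitutes each {{name}} placeholder from the active environment before sending the request. The sketch below imitates that substitution in plain JavaScript; the sample URL and token are invented, and this is an illustration, not Postman’s actual implementation:

```javascript
// A pretend "Testing" environment with a base URL and an auth token:
const environment = { url: "https://api.example.test", token: "abc123" };

// Replace every {{name}} placeholder with the value stored in the environment:
function resolve(template, vars) {
  return template.replace(/\{\{(\w+)\}\}/g, (match, name) =>
    name in vars ? vars[name] : match
  );
}

console.log(resolve("{{url}}/posts?token={{token}}", environment));
// → "https://api.example.test/posts?token=abc123"
```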

How do we create this environment and its variables?

  1. In the top right corner of Postman, click the “configuration” icon.
  2. Click the “Add” button and enter an environment name like “Testing”.
  3. Under “VARIABLE”, enter “url”.
  4. Under “INITIAL VALUE”, enter the base URL of the API.
  5. Click the “Add” button.

At this point we have created a new environment as well as a variable called “url”. You are probably wondering what the purpose of this is. Let’s modify the requests we created before so they use the variable we have set: open both requests and replace the base URL with {{url}}.

As we can see, we are using the same variable in two different requests. Let’s click “Send” on both requests; we get the same responses as in the previous execution.

Postman also allows us to create variables from either Pre-request Scripts or Tests, according to what we need. Here is how to save a specific value from the JSON response into a variable that we will need in another request:

  1. Click on the “Tests” tab.
  2. Enter the following code, which parses the response and stores the new post’s id in an environment variable:

var post_id = JSON.parse(responseBody).id;
pm.environment.set("post_id", post_id);

  3. Click the “Send” button.

Here we have created a new variable. To see it, we can click the “configuration” icon next to the “Testing” environment; a modal will come up showing the new variable. Each time we call this endpoint, that variable will be overwritten with the new id returned in the response.

Postman has become one of the most useful tools for API testing, since we can create very powerful tests that can even assert on data. If you are considering getting started with API testing, it is worth exploring this tool to discover all the advantages it can bring to your project. In this blog we learned how to get up and running with testing HTTP API endpoints in Postman, how to save variables and how to create environments. By using Postman, you can make your team and your development workflow more productive by reducing the time spent on testing and on sharing API specifications and endpoints. This is especially important when working in an Agile team environment.

Getting started with QA Automation Can be Painful – How to Accelerate Deployment

Note: The automation tool referenced below was created by our skillful Automation Engineering team.

Yes, it can be. If this is the first time you are considering implementing test automation in your software project, the evaluation process and deployment of the solution can become a nightmare. There are many unknowns, and with the number of techniques and tool options available, it is easy to go down the wrong path. Or maybe your team does not have the necessary skills to build an automation framework from scratch; in the end, a mistake in this area could easily impact your expected ROI. Important considerations include: which language should you use? Should you go with an open-source tool or a proprietary one? What about reports? Parallel execution? Loggers? What if you need data-driven testing?

We developed a software QA automation accelerator tool that helps our clients successfully address the above challenges as part of our client engagements. It has proven successful in multiple web-based automation projects, and it has been implemented in both Java and C#, based on each client’s platform requirements. One of the strengths of our framework is that the stack of tools used is entirely open source, meaning no incremental charges for third-party licenses when implementing your automated test cases.

How do we accelerate the automation process?

It is simple: based on our experience on previous projects, we realized that every web automation solution follows a common pattern of functionality that must be present for the solution to be robust, reliable and useful. So we took those basic units and included their implementation in our framework. Let’s describe it.

Selenium has by far become the standard framework for web automation. It is open source, with huge support from the technology community and constant updates. With Selenium as the foundation, we use the Page Object pattern, an industry standard that creates an object for each page accessed by the test cases. This helps encapsulate the implementation and improves the quality of the test case classes, since they stay focused on the test “stimulus” rather than on the interaction with the Selenium driver.
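To make the pattern concrete, here is a minimal sketch. The accelerator itself is written in Java/C#; this version uses JavaScript with an invented login page and a stubbed driver so it runs without a browser:

```javascript
// Page object: owns the locators and the actions for one page.
// Test classes call loginAs(); they never touch the driver directly.
class LoginPage {
  constructor(driver) {
    this.driver = driver;
  }
  loginAs(user, password) {
    this.driver.findElement("#username").sendKeys(user);
    this.driver.findElement("#password").sendKeys(password);
    this.driver.findElement("#submit").click();
  }
}

// Stand-in driver that just records interactions, so the sketch is runnable:
const log = [];
const stubDriver = {
  findElement: (selector) => ({
    sendKeys: (text) => log.push(`type "${text}" into ${selector}`),
    click: () => log.push(`click ${selector}`),
  }),
};

new LoginPage(stubDriver).loginAs("alice", "secret");
console.log(log.length); // 3 recorded interactions
```

With a real Selenium driver swapped in for the stub, the test case reads the same: it only ever calls page-object actions.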

In the following image, you can see an example of a functional Page Object with the useful web elements from the page and the actions that can be executed with them.

Alongside Selenium, Selenium Grid is essential. Selenium Grid is a tool that allows you to run multiple browser instances on the same machine at the same time, enabling parallel execution and thus reducing the total execution time of the suite. In some cases we ran Selenium Grid in a virtual machine, but there is often a problem with this approach: it can consume significant resources from the host machine, and sometimes the virtual machine stalls or ends up in an invalid state, negatively impacting the execution. As a solution we often run Selenium Grid in Docker instances. Docker is a virtualization tool that creates containers; you can create as many containers as you want, as long as the host can support them. Each container hosts a single browser driver, which allows us to execute the test cases in parallel. The main benefit of Docker compared to virtual machines is that if something fails at some point, you can stop the failing container and create a new one within minutes. In the following image, you can see a Selenium Grid instance with two Chrome browsers and one Firefox browser.
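As an illustration, a grid like the one described (one hub plus Chrome and Firefox nodes) can be declared in a docker-compose file. This is a sketch based on the official selenium/* images; exact image tags and environment variable names depend on the Selenium version you use:

```yaml
# docker-compose.yml — a minimal Selenium Grid: one hub plus browser nodes
version: "3"
services:
  selenium-hub:
    image: selenium/hub
    ports:
      - "4444:4444"
  chrome:
    image: selenium/node-chrome
    depends_on:
      - selenium-hub
    environment:
      - SE_EVENT_BUS_HOST=selenium-hub
      - SE_EVENT_BUS_PUBLISH_PORT=4442
      - SE_EVENT_BUS_SUBSCRIBE_PORT=4443
  firefox:
    image: selenium/node-firefox
    depends_on:
      - selenium-hub
    environment:
      - SE_EVENT_BUS_HOST=selenium-hub
      - SE_EVENT_BUS_PUBLISH_PORT=4442
      - SE_EVENT_BUS_SUBSCRIBE_PORT=4443
```

Scaling a node service (for example, running two Chrome containers alongside one Firefox container) gives a grid like the one in the image, and a misbehaving node can simply be stopped and recreated.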

Normally, there are test cases that share the same steps but differ in their input data. The typical way to automate them is to create one test case per scenario and copy and paste the steps, but this is inefficient and difficult to maintain. Instead, we use a data-driven approach: we reuse the same steps for all the scenarios, and the input stimulus data comes from a .json file. This is much easier to maintain and gives a high level of flexibility when implementing new test cases: instead of creating new steps, you only add valid stimulus records to the input file and apply them accordingly. In the following image you can see a simple example of two test cases with data-driven testing.
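The shape of the approach can be sketched in a few lines of JavaScript. Here the records are inlined instead of loaded from a .json file (in practice something like `require("./login-data.json")`), and the shared steps are reduced to a stub check; all names and data are invented:

```javascript
// Input records that would normally live in a .json file:
const testData = [
  { user: "alice", password: "secret", shouldLogin: true },
  { user: "alice", password: "wrong",  shouldLogin: false },
];

// One implementation of the steps, reused for every record
// (stand-in for the real navigate/type/submit/verify sequence):
function attemptLogin({ user, password }) {
  return user === "alice" && password === "secret";
}

// Adding a new scenario means adding a record, not writing new steps:
const results = testData.map(
  (record) => attemptLogin(record) === record.shouldLogin
);
console.log(results); // [ true, true ]
```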

You can have the best automation framework, one that never fails and finds multiple bugs, but if you are not capable of effectively sharing the results, then the framework is of no use to the business. That is why we have included Extent Reports in the accelerator. It shows in a very user-friendly way how the execution job performed and how many test cases passed, were idle, or failed. For failed cases, a screenshot of the last screen seen at the moment of the error is attached. The reports include the time, user, operating system and details of the machine where they were executed. In the following images, you can see examples of the automation report.

A logger is also included (a best practice in automation). It is useful because it records every action where the driver interacted with the web page. This allows the user to understand what happened with the test case in real time and, in case of failures, easily see the last successful command that was executed, so debugging can begin from there. In the following image, you can see a small example of the details that the logger prints for one specific test case.
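One common way to get this behavior without touching the page objects is to wrap the driver so every call is logged before it runs. Here is a sketch in JavaScript using a Proxy and a stubbed driver; the accelerator’s real logger lives in its Java/C# stack, so this is only an illustration of the idea:

```javascript
// Wrap a driver so every method call is written to the log before executing:
function loggedDriver(driver, log) {
  return new Proxy(driver, {
    get(target, prop) {
      const value = target[prop];
      if (typeof value !== "function") return value;
      return (...args) => {
        log.push(`[driver] ${String(prop)}(${args.join(", ")})`);
        return value.apply(target, args);
      };
    },
  });
}

// Stubbed driver so the sketch runs without a browser:
const lines = [];
const driver = loggedDriver({ get: (url) => url, quit: () => "bye" }, lines);
driver.get("https://example.test/login");
driver.quit();
console.log(lines);
// [ "[driver] get(https://example.test/login)", "[driver] quit()" ]
```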

We are also working on an automation accelerator for mobile platforms, using Appium as the mobile framework; it will include all the features described above in order to reduce the time needed for automation.

Rest API testing using behave in Python

by: Jason Campos – Senior QA Engineer – GlobalNow IT

API testing is an important component of a company’s QA strategy, since clients often consume the API endpoints to get the information needed to run the business. In most cases, manual testing is initially performed on the most important endpoints to ensure customers retrieve the information they need, followed by automation of the test scripts for daily execution using continuous integration.

I recently implemented test automation for a client’s REST API, using Python + behave as the primary framework, and I would like to share my experience and the techniques used in this automation project.

Read more

A software QA professional’s perspective on building trust within distributed teams

By Alonso Badilla – GlobalNow IT QA Lead

As a software QA professional, I would like to share my experience in building trust within distributed product teams spread across locations around the world, which at times can be challenging due to factors such as cultural differences, personalities, skill levels and communication issues.

From my personal experience, a good approach is to get to know the people working with you, which helps build confidence in each other and creates a genuine team bond. This means initially taking some time to ask about a person’s life and interests, engaging in conversation at the level the person feels comfortable with. This engagement lays the foundation of trust, which allows team members to better organize work and to receive and assign tasks more enthusiastically. It also encourages members of the team to be forthcoming with problems that need resolution while sharing ideas in a “safe” environment.

To maintain this cohesion, it is important to communicate frequently and routinely. I’ve learned that just calling people directly and letting them hear your voice (rather than sending a text) can have a positive impact on productivity. Being forthright and timely is essential: never fear contacting anyone at any moment if you hit a roadblock or have something to share on any relevant topic.

Read more

Why and How we used Docker for our QA automation project

In my previous blog, I talked about the first phase of a successful QA automation project built on a Docker environment. Now I would like to describe the advantages we realized by using Docker, and some of the techniques we used to implement it.

Typically, when we begin a new automation project, we spend significant time configuring our environment and fixing issues as they surface. With Docker, we just need to build a file that configures an environment that normally works “issue free”, avoiding the time spent on configuration and problem solving.
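Such a file is typically a Dockerfile. The sketch below is an invented example for a Python + behave test suite (file names and versions are assumptions), just to show that the whole environment is declared once and can be rebuilt identically anywhere:

```dockerfile
# Dockerfile — the test environment, declared once and reproducible everywhere
FROM python:3.10-slim
WORKDIR /tests

# Install the test dependencies (e.g. behave, requests) pinned in one file:
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the test suite and run it by default:
COPY . .
CMD ["behave"]
```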

Docker is open source software that allows us to package applications in containers. Containers are like a VM; however, they have a very important difference: a container can share the system kernel with others, which means we can run multiple containers (each with its own user space) simultaneously on a single host machine.

Read more