The complexity of software testing has kept pace with the complexity of software development, and the cost of not finding issues early in the lifecycle has never been higher. Any consumer or business application is now expected to be available and to perform consistently across a variety of browsers as well as mobile and tablet platforms. Add in the entertainment devices supporting OTT, IPTV and gaming, and software quality assurance becomes a daunting task for the person who owns that challenge.
Solving the problem is a similar exercise to building software. You first need to understand or define your product roadmap to nail down which problems you are solving sooner rather than later, with an eye on the horizon to make sure you don't create a future roadblock. Then you need to look for tools that can solve the complex problems that are not in your business interest to build yourself. Finally, you need to put together an integrated architecture framework and a set of processes to support the different testing areas at the appropriate points in the SDLC, iterating until you achieve a complete software testing solution. As you get into this exercise, you quickly realize that no single vendor tool or suite checks all the boxes; you have to creatively fill in the gaps.
A test management component is the first step in the framework. It provides a repository of all your test cases, organizes them by the types of testing you will be doing at different points in the SDLC, and delivers vital reporting on test results.
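As an illustration rather than a prescription for any particular tool, the sketch below uses pytest markers to tag automated tests with a test type and a test-management case ID so that results can later be filtered and reported against the repository; the marker names and the TC-xxx identifiers are hypothetical conventions, not the schema of any specific product.

```python
# test_checkout.py -- a minimal sketch, assuming pytest as the automation runner.
# The marker names ("testcase", "smoke", "regression") and the TC-xxx IDs are
# hypothetical conventions for linking automated tests back to a test
# management repository; substitute your own tool's identifiers.
import pytest


@pytest.mark.testcase("TC-1042")      # case ID in the test management tool
@pytest.mark.smoke                    # include in the quick post-build check
def test_login_page_loads():
    assert True  # placeholder for the real assertion


@pytest.mark.testcase("TC-1107")
@pytest.mark.regression               # include in the full QA regression cycle
def test_checkout_applies_discount():
    assert True  # placeholder for the real assertion
```

Selecting the right slice for a given point in the SDLC then becomes a command-line filter, for example `pytest -m smoke` after a build or `pytest -m regression` in the formal QA cycle.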
An automation component should be an early step in the framework. Different tools have different strengths and weaknesses, so knowing your specific challenge will help you decide which vendor or freeware tool to leverage. Some important things to keep in mind are the ability to integrate the automation component with the test management component, to avoid duplicating test case management and reporting, and the ability to trigger the automation component from a software process management component. The latter is important for enabling a continuous integration environment for software development.
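To make the first of those integrations concrete, here is a minimal sketch that assumes a test management tool exposing a REST endpoint for recording results; the URL, token and payload fields are placeholders, not the API of any specific product.

```python
# report_results.py -- a minimal sketch of pushing automation results into a
# test management tool. The endpoint, token, and payload schema are
# hypothetical; real tools each define their own API.
import requests

TEST_MANAGEMENT_URL = "https://testmgmt.example.com/api/results"  # placeholder
API_TOKEN = "replace-me"                                          # placeholder credential


def report_result(case_id: str, status: str, build: str) -> None:
    """Record one automated test outcome against its test-management case ID."""
    payload = {"case_id": case_id, "status": status, "build": build}
    response = requests.post(
        TEST_MANAGEMENT_URL,
        json=payload,
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=10,
    )
    response.raise_for_status()


if __name__ == "__main__":
    # e.g. called by the automation suite after each test, or in bulk afterwards
    report_result(case_id="TC-1042", status="passed", build="1.8.3")
```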
A software process management tool is essential to building a continuous integration environment, which addresses software quality as new software is pushed into the central repository.
When new software is detected in the central repository, an updated build can be moved into the development environment and key regression tests can be applied to ensure the latest changes have not negatively impacted the core, with developers and QA personnel notified of any issues. This reduces cycle time and the impact of code errors in the SDLC.
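As one hedged illustration of that flow (most teams would express it in their CI server's native pipeline syntax and notification plugins instead), the script below runs the quick regression slice after a build and posts an alert if anything fails; the webhook URL and marker name are hypothetical.

```python
# ci_regression_gate.py -- a minimal sketch of the post-commit quality gate.
# Assumes a pytest suite with a "smoke" marker (as in the earlier sketch); the
# notification webhook URL is a placeholder, and a real pipeline would normally
# rely on the CI server's own notification mechanisms.
import subprocess

import requests

NOTIFY_WEBHOOK = "https://chat.example.com/hooks/qa-alerts"  # placeholder


def run_regression_slice() -> int:
    """Run the quick regression slice against the freshly deployed dev build."""
    result = subprocess.run(["pytest", "-m", "smoke", "--maxfail=5"])
    return result.returncode


def notify(message: str) -> None:
    """Tell developers and QA that the latest commit broke the core."""
    requests.post(NOTIFY_WEBHOOK, json={"text": message}, timeout=10)


if __name__ == "__main__":
    exit_code = run_regression_slice()
    if exit_code != 0:
        notify("Smoke regression failed on the latest build -- please investigate.")
    raise SystemExit(exit_code)
```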
As the SDLC moves into a formal QA cycle, a variety of testing is applied. A mix of automated and manual testing addresses functional and regression testing. In a multi-platform environment, the automation tool needs to be able to exercise many of the same tests on different platforms. Some tools come with virtual device farms supporting many of the popular devices and OS versions; others leave the device coverage problem to the buyer. In that case, building and maintaining a physical device farm is an expensive option, while virtual farms may not provide the most accurate results.
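As a hedged example of exercising the same test across platforms, the sketch below parametrizes a single check over several browsers against a remote Selenium Grid or cloud device farm; the hub URL and application URL are placeholders, and the browsers listed are only illustrative.

```python
# test_cross_browser.py -- a minimal sketch of running one functional check on
# several platforms via a remote Selenium Grid or cloud device farm.
# The hub URL is a placeholder; mobile devices (via Appium capabilities) would
# extend the same pattern.
import pytest
from selenium import webdriver

GRID_URL = "https://grid.example.com/wd/hub"  # placeholder hub / device farm URL

BROWSER_OPTIONS = {
    "chrome": webdriver.ChromeOptions,
    "firefox": webdriver.FirefoxOptions,
    "edge": webdriver.EdgeOptions,
}


@pytest.fixture(params=list(BROWSER_OPTIONS))
def driver(request):
    options = BROWSER_OPTIONS[request.param]()
    drv = webdriver.Remote(command_executor=GRID_URL, options=options)
    yield drv
    drv.quit()


def test_home_page_title(driver):
    driver.get("https://www.example.com")  # placeholder application URL
    assert "Example" in driver.title
```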
As the SDLC moves past functional and regression testing, additional layers of testing may be performed, including load testing and performance testing. Load testing tools simulate concurrent access to the system by users, as well as by integration APIs that consume system resources, and they collect performance data under load for the different parts of the application, providing necessary insight before deploying into a production environment. Performance testing addresses the overall health of the application, focusing on areas such as memory leaks, performance bottlenecks, fault simulation and code coverage analysis.
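As one hedged illustration of load testing, the sketch below uses Locust (one of many load testing tools, chosen here purely as an example) to simulate concurrent users hitting a browsing path and an integration API; the endpoint paths are placeholders.

```python
# locustfile.py -- a minimal load test sketch using Locust, chosen only as an
# illustration. Endpoint paths are placeholders. Run with, for example:
#   locust -f locustfile.py --host https://staging.example.com
from locust import HttpUser, task, between


class WebsiteUser(HttpUser):
    # each simulated user waits 1-3 seconds between requests
    wait_time = between(1, 3)

    @task(3)  # weighted: browsing is three times as common as the API call
    def browse_catalog(self):
        self.client.get("/catalog")

    @task(1)
    def call_integration_api(self):
        # exercise an integration API that consumes system resources under load
        self.client.get("/api/v1/orders", name="/api/v1/orders")
```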
Following the completion of user acceptance testing, if such a formal step exists in the process, the software is released into production. Typically, the operations and deployment team performs simple testing to ensure the system comes up properly and is available. The reality is that in most corporate environments, small to large, human and system resources are limited and test environments rarely mimic true production. Even with the significant effort applied in all the previous steps, nearly every release will surface new issues that only become evident in production. Planned production testing is therefore a smart post-release process, aimed at identifying issues before your user community finds them, or worse. Running a non-polluting automated regression suite against the main functions, combined with targeted manual testing across devices in the hours or days following a release, is simply a smart insurance policy against a significant business impact.
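As a hedged sketch of the "non-polluting" idea, the checks below issue read-only requests against production, verifying availability and a core flow without creating any test data; the URLs and expected responses are placeholders to adapt to your own application.

```python
# test_production_smoke.py -- a minimal sketch of non-polluting post-release
# checks: read-only requests that confirm the release is up and healthy without
# writing test data into production. URLs and expected content are placeholders.
import requests

BASE_URL = "https://www.example.com"  # placeholder production URL


def test_site_is_up():
    response = requests.get(BASE_URL, timeout=10)
    assert response.status_code == 200


def test_health_endpoint_reports_ok():
    # assumes a read-only health endpoint; adjust to whatever your app exposes
    response = requests.get(f"{BASE_URL}/health", timeout=10)
    assert response.status_code == 200
    assert response.json().get("status") == "ok"


def test_catalog_returns_products():
    # a read-only functional check on a core business flow
    response = requests.get(f"{BASE_URL}/api/v1/catalog", timeout=10)
    assert response.status_code == 200
    assert len(response.json()) > 0
```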
No one size ever fits all, and budgets dictate compromise. Thinking through your requirements and putting together a sensible testing solution framework that addresses your biggest pain points will undoubtedly improve the quality of the software you produce.
Jim Leichtenschlag is the GlobalNow Practice Director.