QA Automation – Requires Not Only Tools but Also Innovation – A Selenium Example

From my experience, successful software QA automation requires not just tool expertise but also team innovation and creativity. For one of our recent engagements, our QA engineers established an automated smoke test using Selenium. The smoke test was incorporated into the continuous integration environment and run after each system build to ensure the latest software changes did not break any of the main system functionalities.

Using Selenium as the base tool required the creation of a custom framework to launch the automations and enable the use of a dataset that supports the testing "paths" dynamically, rather than hardcoding each of the data locations within the automations themselves. Engineering a testing path/location repository as part of this framework made it possible to drive a single automation with a variety of data conditions and locations. There are three main benefits to this approach:

- Maintenance – The number of automations to create and maintain is reduced significantly. Therefore, as the application changes and the testing automation needs to be updated, there is less automation code to maintain, which reduces ongoing costs.
- Velocity – New data paths can be added quickly and painlessly without the need to open up the automation code/logic itself.
- Quality – Spending less time on keeping existing testing conditions working means more time can be spent on assuring new code works as expected.

By solving the problem in a way that focuses on the data conditions rather than the automation, the ROI on automation testing improves tremendously. The agility to make adjustments maintains the velocity of the software development process and more time can... read more
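A minimal sketch of the data-driven idea described above (the repository columns and names here are hypothetical illustrations, not the actual GlobalNow framework): the testing "paths" live in a data repository that a single automation iterates over, so adding a path means adding a data row rather than new code. The real Selenium call is stubbed so the sketch runs without a browser.

```python
import csv
import io

# Hypothetical path repository; in practice this could be a CSV file,
# spreadsheet, or database table. Columns are illustrative only.
PATHS_CSV = """test_id,start_url,menu_item,expected_title
smoke_login,/login,Dashboard,Home Dashboard
smoke_reports,/login,Reports,Monthly Report
"""

def load_paths(csv_text):
    """Parse the repository into a list of dicts, one per testing path."""
    return list(csv.DictReader(io.StringIO(csv_text)))

def run_path(path, driver=None):
    """Drive ONE generic automation with the data from `path`.
    `driver` would be a selenium.webdriver instance; when it is None we
    only return the planned steps so the sketch stays self-contained."""
    steps = [
        ("open", path["start_url"]),
        ("click", path["menu_item"]),
        ("assert_title", path["expected_title"]),
    ]
    if driver is not None:
        driver.get(path["start_url"])  # real Selenium interaction goes here
    return steps

if __name__ == "__main__":
    for path in load_paths(PATHS_CSV):
        print(path["test_id"], run_path(path))
```

The payoff is exactly the maintenance benefit above: one generic automation, many data rows.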

Automation – Impacting LATAM Solution Providers and the Broader Workforce

I recently shared a Nearshore Americas article which touched on Costa Rica’s advanced “readiness” to deal with the onslaught of workforce automation. According to the article, “Automation, sometimes referred to as ‘no-shoring,’ is still in its infancy, but when combined with business process as a service (BPaaS), has all the potential to be a powerful force for disruptive change.” The same article also spoke to the progress Colombia has made on the AT Kearney Global Services Location Index (a measure of countries by business environment and availability of skilled talent), jumping 23 spots to be ranked 20th, making the largest advance of any country in the Index, with significant impact from its growing technology workforce. It forecasts that Colombia is among the brightest hotspots amid a global services environment where the threat of machines taking jobs from humans seems to be at an all-time high. This got me thinking about the bigger picture of automation readiness for the global workforce. How should societies specifically deal with this ongoing, disruptive and imminent shift of human labor to automation? In the U.S. and elsewhere, we initially saw this in manufacturing and we are now seeing more of it in the services arena. Plus, ironically, automation itself often requires a new type of skill to manage and administer. We see this in our own QA automation practice, where QA engineers now need more technical IT background, and even stronger interpersonal skills to allow them to effectively collaborate across functions and geographic borders. So what is the secret for preparing a workforce to handle the automation push? How can it be embraced in a fashion that... read more

“The Culture of the New Workforce” – William Gordon’s Presentation at Our DevOps Session

We were fortunate to have William Gordon, Regional CIO of NetApp, speak at our DevOps community session last week here in Dallas. I’ve seen William speak several times previously, and as always, his presentation was engaging and informative, and his topic, “The Culture of the New Workforce,” was very relevant to the DevOps movement. William provided great insight into the current job demands faced by CIOs and how they relate to the expectations of the millennial workforce. I tried to capture some of his key thoughts, highlighted below:

The job tenure of CIOs appears to be shorter than ever, with significant pressure to meet multiple objectives – some that often appear to be conflicting. For many CIOs, data and system security is the overriding concern, as we can all understand and appreciate. But CEOs are demanding (and applying strong pressure) that CIOs enable rapid product delivery to meet market demands. In general, the millennial workforce is often more comfortable and capable with “balancing” security requirements against agility (at least that’s the perception of many CEOs). This reality/perception can sometimes be a disadvantage to more senior CIOs, as they can be viewed as out of touch with the skills and methods necessary for rapid delivery.

So how is the above relevant to DevOps? As William pointed out, there are at least two tangible tie-ins:

- DevOps inherently can improve the speed and accuracy of delivery, meeting the needs of millennials and the business
- If implemented properly, DevOps can help save the jobs of CIOs who are under fire for rapid delivery

Most of us understand that DevOps, without the... read more

Why Colombia?

We were fortunate enough to attend the ProColombia “Bring IT On” event this past week in Toronto; we listened to great speakers, and it was a wonderful opportunity to meet other tech companies from Colombia, Canada and the United States. After opening a GlobalNow affiliate in Colombia this year (GlobalNow Colombia S.A.S), we often receive this question from our associates and clients: “Why Colombia?”

From my perspective there are two primary reasons: 1. Economics and 2. Culture. Regarding economics, below are some of the facts that make Colombia appealing for doing business in some fashion (the source for this information is ProColombia and notes from the event):

- The Colombian government initiated an Information and Technology plan in 2010, with a strong focus on education and maximizing the use of the internet/technology.
- The Colombian government also offers significant tax incentives and other subsidies for business assistance – with free trade agreements in place for major markets including the U.S. and Canada.
- A key goal of the program was/is to reduce poverty through more employment, creating a highly competitive and thriving society.

The program has seen great success, as can be seen from some of these results:

- GDP growth over the last ten years averages 4.5 percent – with Colombia now having the third largest GDP in LATAM behind Brazil and Mexico
- Large penetration of broadband internet (via fiber) across the country – plus 10 submarine cables supporting international traffic
- Recognized to have the lowest operating cost in LATAM among the major markets
- A large educated population with excellent bilingual technical talent
- The middle class has doubled, and poverty has been cut in half... read more

GlobalNow DevOps HH – Paul Grizzaffi presents “Sail or Fail” with Automation

By: Lee Carter – Director of Business Development – GlobalNow IT

Last night, we hosted a DevOps Happy Hour at a restaurant in the Plano area just north of Dallas. It was the first in a series of invite-only Happy Hours that we’ll be hosting in order to shine a light on new developments in the DevOps arena. In spite of an arctic blast that caused temperatures in North Texas to plummet into the upper 80s, attendees braved the elements to join us as Paul Grizzaffi, QA Automation Manager and Program Architect at Alpharetta, GA-based MedAssets, presented “Sail or Fail: Navigating Test Automation Pitfalls.” Paul is a real rockstar when it comes to automation tools and frameworks, and we couldn’t think of a better speaker for our inaugural event. Over the course of 60 minutes, Paul addressed key topics such as:

- Is my organization READY for automation?
- What will automation help my organization accomplish?
- What is Framework Layering? How can it help?
- Will automation fix my broken processes?
- Automation is the magic bullet that will solve all of my problems, right? (Spoiler alert: It isn’t.)

Paul has given this presentation at the DFW Association for Software Engineering Excellence and STPCon, and we were lucky to have him present it once again for us. The space and the atmosphere weren’t really conducive to shooting video, but here’s a link to Paul giving the presentation to another group. It has evolved since this video was filmed, but the core content remains. Thanks to everyone who joined us for a couple of hours of drinks, food, and education. We hoped you... read more

DevOps Readiness – Is my organization ready to implement DevOps?

By: Jim Leichtenschlag – Practice Leader – GlobalNow IT

There’s little doubt that if you are part of an organization that delivers software, you are aware of the DevOps movement, which is all about increasing the velocity and frequency of software delivery to production with the highest level of quality. If you think that following Agile methodology means you are doing DevOps because you are building software fast, think again. Building software fast is one thing; delivering it to production is where DevOps comes in, and it’s quite another problem to solve. The question any IT leader should be asking is: Is my organization capable and ready to embrace DevOps?

Just as Agile Development impacted the processes, tools and talent necessary to manage defining and delivering software product requirements through the build cycle, DevOps offers a similar set of challenges to the second half of the software delivery process. DevOps is meant to streamline the processes from software construction to test to deployment. Like Agile Development, for DevOps to be most effective, the processes need to be improved earlier in the cycle, which for DevOps really starts with Continuous Integration. The best part of DevOps is that it essentially applies all the principles of software development to the end-to-end delivery process. The delivery process should be looked at as an automation programming challenge. Like any other programming challenge, this means:

- Define the problem
- Break it down into achievable steps
- Identify the useful process metrics
- Identify the available tools you can leverage
- Fill in the gaps with programming
- Minimize the human dependence
- Enable configuration and exception handling
- Iterate through it... read more

Careers in Software QA – Observations from Costa Rican University QA workshop

By: Mauricio Navarro – GlobalNow Senior QA Engineer

I recently had the opportunity to present a QA session to students and faculty at the Universidad Latina de Costa Rica. My primary topic was “practicing software QA in the real world,” with the idea of explaining the basic approach of QA and what it really means to business outcomes. Many of the students are computer engineering majors, and some now see QA as a possible career path or end-game position. So my intent was to provide some “real world” experiences based on what we do at GlobalNow as part of our DevOps practice.

I covered all of the normal topics, such as Quality Control vs. Quality Assurance; types of QA (regression, stress, load, white box, black box); basic best practices; standard methodologies (Agile); ISO 9126 / ISO/IEC 25010:2011; what we often verify (functionality, reliability, usability, efficiency, maintainability and portability); and manual vs. automated testing.

After describing the above and the basic purpose of QA, I explained how the practice of QA has shifted from being an isolated compliance function to being an integral part of the deployment team. This was especially interesting to the audience, as they see that QA has an integral role to play as part of the delivery mechanism for new products and services. Traditionally, QA has been organized in its own silo with the primary purpose of identifying problems after code delivery and halting delivery to control risk. Although it is still important that testing efforts ensure overall quality and reduce risk, today’s DevOps-oriented QA engineers are truly a part of the delivery... read more

Continuous Integration with Jenkins

By: Rodolfo Cordero – GlobalNow DevOps Engineer

With some agile methodologies it is common to see weekly or even more frequent deployments, with each new release having amazing features or functionality important to the business. Of course it’s a priority to test the new functionality, but it is also common to neglect adequate testing of existing functionality and less important new features. Unfortunately, this can be damaging, since finding issues related to broken/bad code at the end of a sprint (or worse, in production) is very costly. We attempt to avoid the above through “DevOps” type processes such as Continuous Integration. The concept of Continuous Integration is to test automatically, early in the development life cycle, to reduce errors and rework and improve overall system quality.

One of the first steps in deploying continuous integration is to create test cases. At a minimum, you should identify a subset of cases for the smoke tests and implement some means of version control for them (e.g. Git or Subversion). Of course it’s beneficial to already have a complete test plan (regression test, sanity test and unit test) implemented, but CI should focus on the cases designated for the smoke test.

Which brings us to Jenkins: Jenkins is an open source continuous integration tool written in Java, and is a powerful tool used by many companies and projects. Some may think it does not have the best graphical interface, but Jenkins is incredibly capable and highly scalable, and if more functionality is needed, plugins are available to help. Once you have your test cases prepared, Jenkins can help you. Jenkins... read more
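One common way to wire smoke tests into Jenkins (a hedged sketch; the check names below are placeholders, not real tests) is to expose the suite as a script whose exit code Jenkins understands: a job runs something like `python smoke_test.py` as a build step after each build, and any nonzero exit marks the build failed.

```python
import sys

def smoke_checks():
    """Return (name, passed) pairs. In a real suite each check would hit
    the freshly built application; these two results are placeholders."""
    return [
        ("homepage_loads", True),
        ("login_works", True),
    ]

def main():
    failures = 0
    for name, ok in smoke_checks():
        print(f"{name}: {'PASS' if ok else 'FAIL'}")
        failures += 0 if ok else 1
    # Nonzero exit code -> Jenkins marks the build as failed.
    return 1 if failures else 0

if __name__ == "__main__":
    sys.exit(main())
```

Because the contract is just "exit code plus console output," the same script works from a freestyle job, a pipeline stage, or a developer's shell.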

Why Is Manual Testing Still Alive?

By: Mauricio Navarro – GlobalNow Senior QA Engineer

Manual testing is alive and well. Even though our industry is going strong into the land of automated testing, the truth of the matter is that manual software testing is still necessary and very much in use by many QA engineers. Why is this?

1. Budget

Every project is unique and so is its budget; unfortunately, one of the first areas that suffers from budget cuts is QA. Management is not always willing to pay for supporting tools or resources to execute a complete QA plan, least of all to create automated tests. Often, a project manager will find a QA specialist or business user and specifically ask them to test without creating formal test plans, since the objective is to make the QA portion of delivery as fast and cheap as possible. This is often a sign of a project being run on a very low budget that emphasizes development time while hoping for a project free of issues. Clearly this is not healthy, and normally the results will be negative.

2. Speed

How many times have we heard the phrase “This must be done ASAP”? I bet the answer is too many times. This is another clear sign that QA needs to be done too fast – which then requires manual testing. In some cases the team leader will even ask the QA specialist to start testing without a final release of the code. In this case automated testing is not a good option, since it will most likely be doomed to fail because many objects... read more

Implementation of Regression Automation – Considerations

By: Alex Chaves – GlobalNow QA Team Lead

This is a follow-on to my previous discussion on Regression Testing – Types and Considerations.

A critical first step for any automation initiative is to create test cases, ideally using a test management system. Those test cases should be organized by system function in a fashion that allows users to test specific areas of the application as needed. There are many test management tools, both open source and commercial, that can be selected based on your requirements and budget.

Picking a suitable test automation tool is the next very important consideration. Along with available budget, the technology used by the targeted systems and the endpoint devices of the supporting interfaces are the most important considerations when selecting the automation tool. There are a variety of commercial and open source tools that can be leveraged. Keep in mind, however, that open source does not always have the cheapest total cost of ownership. As the application changes, the tests themselves change, and therefore the automation must keep pace. The frequency and volume of application changes is a key component of the total cost of ownership equation and should heavily influence your choice of tool. For example, as system functionality changes, Selenium (an open source tool) requires test engineers to update each element in multiple locations when every test case instance is hard coded. Commercial tools such as Ranorex record the case objects/scripts and assign a unique ID in a repository for each instance, which in turn allows the test engineer to update the repository once so all the impacted test cases can use the same object. Another major... read more
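The single-repository idea is not exclusive to commercial tools; in Selenium it is usually approximated with the page object pattern, where a page's locators live in one class that every test goes through. A minimal sketch (the class and locator values are hypothetical, and `driver` stands in for a `selenium.webdriver` instance):

```python
# Page object sketch: tests call LoginPage, and LoginPage is the ONLY place
# that knows the page's locators. When the UI changes, one edit here
# updates every test. Locator values below are hypothetical.
class LoginPage:
    # Locator "repository" for this page: (strategy, value) tuples,
    # matching Selenium's driver.find_element(by, value) call.
    USERNAME = ("id", "username")
    PASSWORD = ("id", "password")
    SUBMIT = ("css selector", "button[type=submit]")

    def __init__(self, driver):
        self.driver = driver  # a selenium.webdriver instance in real use

    def login(self, user, password):
        self.driver.find_element(*self.USERNAME).send_keys(user)
        self.driver.find_element(*self.PASSWORD).send_keys(password)
        self.driver.find_element(*self.SUBMIT).click()
```

This does not make Selenium equivalent to a recorded object repository, but it removes the "update each element in multiple locations" cost the paragraph above describes.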

