Low-cost automation challenge

A New Zealand-based customer in the healthcare domain embarked on a journey of migrating their Delphi-based products to Microsoft technologies. The products use specialized GUI controls that are not recognized by the popular automation tools. The company was keen to start automation right from the early stages of the migration, but the budget for developing it was tight.

We conducted a proof of concept (POC) to identify a tool that would support automation for both Delphi and VB.NET. We discovered that most of the popular tools were indeed not compatible with the product. The POC concluded that TestComplete did support both Delphi and VB.NET with a few constraints; it was very cost-effective, though not particularly user-friendly. We convinced management of our recommendation. The project started with us identifying the test cases that could be automated. Seven modules were automated and demonstrated.

We developed a reusable keyword-driven framework for the client. Both individual test case execution and batch runs were possible simply by selecting the test cases. STAG provided a detailed demo of the framework to the in-house QA team.
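To make the keyword-driven idea concrete, here is a minimal sketch of the core dispatcher in Java. This is illustrative only; the keyword names, control names, and step-table layout are our assumptions, not the client's actual framework, which was built on TestComplete.

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    // Minimal keyword-driven dispatcher (illustrative sketch only).
    // Each test step is a (keyword, target, data) triple; the runner
    // looks the keyword up in a registry and invokes the bound action.
    public class KeywordRunner {

        // An executable action bound to a keyword.
        interface Action {
            void run(String target, String data) throws Exception;
        }

        private final Map<String, Action> registry = new HashMap<>();

        public KeywordRunner() {
            // In a real framework these actions would drive the GUI tool's
            // API; here they only log, since the tool layer is out of scope.
            registry.put("OpenApp",  (t, d) -> System.out.println("Opening " + t));
            registry.put("SetValue", (t, d) -> System.out.println("Setting " + t + " = " + d));
            registry.put("Click",    (t, d) -> System.out.println("Clicking " + t));
            registry.put("Verify",   (t, d) -> System.out.println("Verifying " + t + " shows " + d));
        }

        // Execute one test case: an ordered table of keyword rows.
        public boolean runTestCase(String name, List<String[]> steps) {
            System.out.println("== " + name + " ==");
            for (String[] step : steps) {
                Action action = registry.get(step[0]);
                if (action == null) {
                    System.out.println("Unknown keyword: " + step[0]);
                    return false;
                }
                try {
                    action.run(step[1], step[2]);
                } catch (Exception e) {
                    System.out.println("Step failed: " + e.getMessage());
                    return false;
                }
            }
            return true;
        }

        public static void main(String[] args) {
            KeywordRunner runner = new KeywordRunner();
            // A tester authors this table; no scripting knowledge is needed.
            List<String[]> steps = new ArrayList<>();
            steps.add(new String[] {"OpenApp",  "PatientRecords", ""});
            steps.add(new String[] {"SetValue", "txtPatientId",   "P-1042"});
            steps.add(new String[] {"Click",    "btnSearch",      ""});
            steps.add(new String[] {"Verify",   "lblStatus",      "1 record found"});
            System.out.println(runner.runTestCase("SearchPatient", steps) ? "PASSED" : "FAILED");
        }
    }

Because test cases are just tables of keyword rows, selecting which tables to feed the dispatcher is all it takes to compose either an individual run or a batch run, which is what made the framework reusable across modules.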

However, some of the test cases chosen for automation were not complete. We validated the test cases, made the necessary changes, and then began scripting. The automation work was divided between STAG and the customer’s team, and as we automated the test cases, we guided and trained the customer’s team to do the same.

The result: by automating 326 test scenarios, the testing time was cut down from 80 hours to 12 hours! We saved the customer significant money on the tool itself, and more by enabling them to release the product to market ahead of schedule!

We demystified the automation puzzle. Relentless validation, tamed!

A large global provider of BI solutions has a product suite that runs on five platforms and supports thirteen languages, with each platform suite requiring multiple machines to deliver the BI solution. The entire multi-platform suite is released on a single CD multiple times a year.

The problem that stumped them was “how to automate the final-install validation of a multi-platform distributed product”. They had automated the testing of the individual components using SilkTest, but they were challenged by “how to unify this and run it off a central console on various platforms at the same time”.

Since each platform combination took about a day, validating a final installation build required approximately two months, and by the time they were done with one release, the next was already waiting! This was a relentless exercise that consumed significant QA bandwidth and time, and left the team no room for more interesting or important work.

Senior management wanted single-push-button automation: identify which platform combination to schedule next, automatically allocate machines from the server farm, install and configure them automatically, fire the appropriate Silk scripts, and monitor progress, all to significantly reduce the time, cost, and QA bandwidth involved in this effort. After deep analysis, the in-house QA team decided this was a fairly complex automation puzzle that required a specialist. This is when we were brought in.

After an intense deep-dive lasting about four weeks, we came up with a custom master-slave test infrastructure that allowed a central console to schedule jobs onto the slaves using a custom-developed control and monitoring protocol. The solution was built using Java Swing, Perl, Expect, and adapters to handle the Silk scripts. Some parts of the solution ran on Windows, others on UNIX. This custom infrastructure allowed scheduling of parallel test runs, automatic allocation of machines from the server farm, installation of the appropriate components on the appropriate machines, their configuration, and finally monitoring of validation progress through a web console.
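To make the shape of this architecture concrete, here is a minimal Java sketch of the master's scheduling loop, with in-process threads standing in for the slave machines. The Job fields, class names, and simulated install/run phases are illustrative assumptions, not the client's actual protocol.

    import java.util.List;
    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;

    // Sketch of the master side of a master-slave test scheduler.
    // Jobs describe a platform/language combination; each "slave" pulls
    // jobs from a shared queue and simulates the install/configure/run
    // phases that the real slaves performed on farm machines.
    public class MasterScheduler {

        record Job(String platform, String language) {}

        // Sentinel telling a slave there is no more work.
        private static final Job STOP = new Job("", "");

        private final BlockingQueue<Job> queue = new LinkedBlockingQueue<>();

        // In the real system a slave proxied a machine in the server farm.
        private Thread slave(String machine) {
            return new Thread(() -> {
                try {
                    while (true) {
                        Job job = queue.take();
                        if (job == STOP) return;
                        System.out.printf("%s: installing %s/%s%n",
                                machine, job.platform(), job.language());
                        Thread.sleep(100); // stand-in for install + Silk run
                        System.out.printf("%s: %s/%s PASSED%n",
                                machine, job.platform(), job.language());
                    }
                } catch (InterruptedException ignored) {
                }
            });
        }

        public void run(List<Job> jobs, List<String> machines) throws InterruptedException {
            queue.addAll(jobs);
            for (int i = 0; i < machines.size(); i++) {
                queue.add(STOP); // one sentinel per slave
            }
            List<Thread> slaves = machines.stream().map(this::slave).toList();
            slaves.forEach(Thread::start);
            for (Thread t : slaves) {
                t.join(); // the master waits until every combination is done
            }
        }

        public static void main(String[] args) throws InterruptedException {
            new MasterScheduler().run(
                    List.of(new Job("Windows", "en"), new Job("Solaris", "ja"),
                            new Job("AIX", "de"), new Job("HP-UX", "fr")),
                    List.of("farm-01", "farm-02"));
        }
    }

In the real deployment, the hand-off to the slaves happened over the custom control and monitoring protocol rather than a shared in-memory queue, the run phase fired the Silk scripts through adapters, and progress was surfaced on the web console rather than standard output.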

This test infrastructure enabled a significant reduction in the multi-platform configuration validation: the effort came down from eight weeks to three weeks. We enjoyed this work simply because it was boutique work fraught with quite a few challenges. We believe this was possible because we analyzed the challenging problem wearing a development hat, not the functional test automation hat.