Busting the gray hair myth

A leading global e-commerce provider was looking at automating over 75 components of its global network services platform to strengthen its security and reliability. The task was complex, and the company felt it needed a skilled team with a lot of gray hair.

The company discussed the challenge with STAG and we jumped at it. To win their confidence, we even offered to do a pilot to showcase our capability. We also took complete ownership of the deliverables. Had we just shot ourselves in the foot?

Our customer is a leading worldwide provider of business-to-business EDI and supply chain integration, synchronization and collaboration solutions. Its Indian Development Center was entrusted with making changes in the product solutions, followed by smooth migration from the QA environment to the pre-production environment and subsequently to production. Any change in a product component called for full validation of the entire product suite, since a change could have multiple impacts across locations and across the different components used by different users around the globe. The in-house QA team handled manual functional testing quite effectively, but the challenge at hand was to cut down the test cycle time and thereby enable faster migration to pre-production and then to production. This called for superior script-writing skills, apart from performing regression testing of the entire product suite. STAG was entrusted with this project, with the following expectations set:

  • Automate server-side tests (scripts) to perform verification and regression of both the data flow and the admin flow
  • Automate certain pre-determined functionalities using WinRunner scripts

To handle the large number of test cases covered by each component, we formulated a smart automation strategy. We ensured the automation architecture was flexible and reusable, which helped cover an optimal set of test cases in a single script. We created around 52 verification scripts and around 28 regression scripts for the WinRunner toolset, and further developed over 330 server-side Perl scripts. The customer formally certified each script, and only then was it released to the QA team for use. We also developed a custom tool, a “test harness” in Java, as a UI front end to execute the automation scripts; test scenarios were invoked from an XML file, which served as the placeholder for each Perl script’s name and the parameters required to execute it.
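
As an illustration of that design, here is a minimal sketch of such a harness. The file name, XML layout and class names are our assumptions for the example, not the actual implementation:

    import java.io.File;
    import java.util.ArrayList;
    import java.util.List;
    import javax.xml.parsers.DocumentBuilderFactory;
    import org.w3c.dom.Element;
    import org.w3c.dom.NodeList;

    /** Illustrative harness: reads scenario definitions from an XML file
     *  and runs each named Perl script with its parameters. */
    public class TestHarness {
        public static void main(String[] args) throws Exception {
            // Each <scenario script="..." params="..."/> names one Perl script.
            Element root = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder()
                    .parse(new File("scenarios.xml"))   // assumed file name
                    .getDocumentElement();
            NodeList scenarios = root.getElementsByTagName("scenario");
            for (int i = 0; i < scenarios.getLength(); i++) {
                Element s = (Element) scenarios.item(i);
                List<String> cmd = new ArrayList<>();
                cmd.add("perl");
                cmd.add(s.getAttribute("script"));
                for (String p : s.getAttribute("params").split("\\s+")) {
                    if (!p.isEmpty()) cmd.add(p);
                }
                // Run the script and report pass/fail from its exit code.
                int exit = new ProcessBuilder(cmd).inheritIO().start().waitFor();
                System.out.println(s.getAttribute("script")
                        + (exit == 0 ? " PASSED" : " FAILED (exit " + exit + ")"));
            }
        }
    }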

The best part: no one on the delivery team had gray hair.

Trusting a complete stranger

A leading provider of mobile device management (MDM) solutions faced a problem: their experienced QA team was unable to focus on new product roadmap activities, as it was preoccupied with constant maintenance testing. Their new product revenue goals were suffering. They wanted the experienced team to focus only on new feature testing and to build a new continuous engineering team. Their concern: this could take a while. The management then took the bold step of working with a partner who could build and operate the Continuous Engineering (CE) practice. Wouldn’t that be committing hara-kiri? How could you entrust your most valued customers to a stranger? What about the credibility built over many years? How did we fare?

The customer has around THIRTY customers for their product. Every product deployment is customized, resulting in multiple code branches; maintenance therefore required multiple code branches to be validated within stringent timelines. The existing QA team was consumed by delivering the various fixes, which put serious pressure on validating the next product version and stalled new product releases. That is when they approached us to build a continuous engineering test team very rapidly and take ownership of the new CE function.

This was an interesting challenge, as we had to ramp up the product knowledge rapidly, put together an efficient CE process and ensure high-quality releases. Once we defined and agreed upon the overall competency required in the team, a full-time on-site CE team of test professionals with clear roles and responsibilities was set up. Senior staff at STAG provided strategic direction and support during the build-up stage. The engagement was approached in a staged manner – Build-up, Stabilize and Optimize. HBT turned out to be the savior: its ability to bring clarity helped us ramp up at an accelerated pace. In no time, we were releasing patches. In all, we were responsible for 27 change requests, 10 hot fixes and 31 general releases.

Our client’s customers were indeed in safe hands.

Help! Protect my investment.

The customer develops software for clinical trial process management for numerous healthcare majors. We are talking about best-of-breed, mission-critical systems that support customers throughout drug development. The solution was built using Oracle Forms. The QA team decided to take the automation path, and the customer invested in multiple licenses of an expensive tool on the recommendation of a reputed consultant. They hired specialist automation engineers. All set to go, then came the shocker: the tool did not support their application!

Flabbergasted, they were advised to approach STAG to protect their investment. Did we?

The customer was looking for an automated solution to improve their regression cycle time and hence enable faster, consistent delivery of the product to market. To achieve this objective, the customer wanted a cost-effective automation solution, which included suggesting the right tool and putting in place the right automation framework. The QA team engaged a reputed test organization as a consultant. That company suggested a very popular tool known to work on all technologies and got the customer to invest in it. The customer hired specialist automation engineers with experience on the tool. Then came the shocker: the tool did not support their application!

They were advised to contact STAG to solve this issue. How do I protect my investment? How do I achieve the target goal of automation? After detailed analysis and some frustrating debugging, we found the problem. An initial feasibility study was done to demonstrate that the fix worked. Our team then assessed the existing automation artifacts to identify the candidates for automation that would give the best ROI on the automation investment.

We developed a hybrid framework architecture to address the application’s complexity and size. The development principles of reusability, maintainability, usability and scalability were built into the architecture to enable quicker, more effective generation of automated scripts, which raised the productivity of script development.

Key reusable components were built for UI test object navigation, automatic object repository conversion, and loading data from the UI into the Oracle database. Reusable business validation components were developed as well. Automated scripts were built using these libraries, applying development best practices such as coding conventions and good documentation. The resulting scripts were very easy to maintain. The sketch below illustrates the layering.
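
A minimal sketch of that layering, with hypothetical component names (the real components were tool-specific): a script composes reusable navigation and validation components instead of re-implementing them.

    /** Illustrative layering of the hybrid framework: scripts compose
     *  reusable components instead of duplicating navigation/validation logic. */
    interface ValidationComponent {
        boolean validate(String testDataRow);   // one reusable business check
    }

    // Hypothetical reusable pieces; the real ones drove the Oracle Forms UI.
    class RecordSearchNavigation {
        void openSearchScreen() { /* navigate the UI to the search form */ }
    }

    class RecordValidation implements ValidationComponent {
        public boolean validate(String testDataRow) {
            // Compare UI values against the corresponding database row here.
            return true;
        }
    }

    public class RegressionScript {
        public static void main(String[] args) {
            RecordSearchNavigation nav = new RecordSearchNavigation();
            ValidationComponent check = new RecordValidation();
            nav.openSearchScreen();                  // reusable navigation
            boolean ok = check.validate("row-001");  // reusable validation
            System.out.println(ok ? "PASSED" : "FAILED");
        }
    }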

We were delighted with the results: 68 major modules automated, 1700+ functional threads automated, and a 70% reduction in effort via automated population of 80 GB of data for migration validation.

Guy Fawkes – Beautiful fireworks, not a blast!

A large UK-based government health organization posed an interesting challenge: to assess whether their large health-related eLearning portal would indeed support 20,000 concurrent users (they have 800K registered users) and deliver good performance. There was indeed a cost constraint, and hence we decided to use the open source tool JMeter.

The open source toolset has its own idiosyncrasies: a maximum heap size of 1 GB, support for only a few thousand users per machine, and a nasty habit of generating a large log! To simulate a load of initially 20,000 and later 37,000 concurrent users, we had to use close to 40 load generators and synchronize them.

We identified usage patterns and then created the load profile scientifically using the STEM core concept of “Operational Profiling”. We generated the scripts, identified the data requirements, populated the data and set up synchronized load generators. During this process, we also discovered interesting client-side scripting, which we flattened into our scripts. Now we were ready to rock and roll.
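
The arithmetic behind an operational profile is simple; here is a sketch with made-up transaction names and usage fractions (the real mix came from the portal’s usage data), splitting the target load across transaction types and then across the load generators:

    /** Illustrative operational-profile arithmetic: split a target concurrent
     *  load across transaction types and load generators. The percentages are
     *  invented examples, not the portal's real usage data. */
    public class LoadProfile {
        public static void main(String[] args) {
            int totalUsers = 20000;
            int generators = 40;                        // load-generator machines
            String[] transactions = {"login", "browseCourse", "playLesson", "takeAssessment"};
            double[] mix = {0.10, 0.30, 0.45, 0.15};    // observed usage fractions

            for (int i = 0; i < transactions.length; i++) {
                int users = (int) Math.round(totalUsers * mix[i]);
                int perGenerator = users / generators;  // threads each machine runs
                System.out.printf("%-15s %6d users (~%d per generator)%n",
                        transactions[i], users, perGenerator);
            }
        }
    }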

When we turned on the load generators, sparks flew and the system spewed out enormous logs – 3 to 6 million lines, approximately 400-600 MB! We wrote a special utility to rapidly search for the needle in the haystack! We found database deadlocks, fat content and heavy client-side logic. Also, the system monitors were off the charts and the bandwidth choked!
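
In concept, the utility was a streaming scanner. A minimal sketch, with an assumed signature list (the real patterns came from the defects we were chasing), that reads the log once rather than loading hundreds of megabytes into memory:

    import java.io.BufferedReader;
    import java.io.FileReader;

    /** Illustrative log scanner: streams a multi-million-line log once and
     *  prints lines matching known trouble signatures, with line numbers.
     *  Usage: java LogScan <logfile> */
    public class LogScan {
        public static void main(String[] args) throws Exception {
            String[] signatures = {"deadlock", "timeout", "OutOfMemory"}; // assumed examples
            try (BufferedReader in = new BufferedReader(new FileReader(args[0]))) {
                String line;
                long n = 0;
                while ((line = in.readLine()) != null) {
                    n++;
                    for (String sig : signatures) {
                        if (line.contains(sig)) {
                            System.out.println(n + ": " + line);
                            break;
                        }
                    }
                }
            }
        }
    }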

Working closely with the development team, we helped them identify the bottlenecks. This resulted in query, content and client-side logic optimization. Now the system monitors were under control, and the deployed bandwidth was good enough to support the 20,000 concurrent user load with good performance. To support higher loads in the future, the system was checked with nearly twice this load, and the additional resources needed to support it were identified.

The FIVE weeks we spent on this were great! (Hmmm, tough times over at last!)

Healthy baby at birth!

A large India Development Center (IDC) of a major consumer electronics & peripherals company delivers 3-4 releases of its product every year. They had “birthing” problems: early-stage defects were bogging them down. The root cause was identified as ineffective development testing. The team was mature, had good practices and was focused on “unit testing”. The big question that nobody wanted to ask was: what in the name of God is a “unit”? As a result, everyone in both the early and late stages was doing similar tests, with poor results.

Applying STEM, we clearly identified what development was expected to deliver in the code and listed the types of defects that should not seep out of development. Having set up clear cleanliness criteria, we got around the “notion of a unit” and set up a goal-focused development test practice. The test cases increased many-fold (without increasing effort or time), and fault traceability made them purposeful.

Code coverage jumped from 65% to 90%, with the remaining 10% identified as exception-handling code that was assessed by hand. Now all early-stage code was ‘completely assessed’. The RESULT: defect escapes to the QA team dropped by 30-40%, the specialist QA team could focus on its job, and releases were made on time.

From premature babies needing incubators, we had transformed the organization to deliver bonny babies!

Quality injection – Scientific validation of requirements

Validating early-stage, pre-code artifacts like the requirements document is challenging. This is typically done by rigorous inspection and requires deep domain knowledge. One of our Japanese customers threw us a challenge: “How can you use HBT/STEM to scientifically validate requirements without knowing the domain deeply?”

The core aspect of HBT is to hypothesize potential defect types and then prove that they do not exist. These are identified by keeping in mind the end users and the technology used to construct the system. So how do you apply this to validate a pre-code artifact?

We commenced by identifying the various stakeholders of the requirements document and then identified key cleanliness attributes; if these were met, it would imply that the requirements were indeed clean. We were excited by this. We then moved on to identify the potential defect types that would impede these cleanliness attributes/criteria.

Lo and behold, the problem was cracked: we identified the various defect types and the corresponding evaluation scenarios for validating the requirements/architecture document. We came up with THIRTY+ defect types, requiring about 10+ types of tests conducted over TEN quality levels, with a total of SIXTY-FIVE major requirement evaluation scenarios to validate a requirement.

What we came up with is not yet another inspection process dependent on domain knowledge, but a simple, scientific approach consisting of a set of requirement evaluation scenarios that can be applied with little domain skill, ensuring that the requirements/architecture can indeed be validated rapidly and effectively. These scenarios ensure that the requirements document is useful to the various stakeholders across the software life cycle and does indeed satisfy the intended application/product attributes.

It was more than just validation. It was ensuring a nation’s pride.

A large petroleum major was rolling out a specialized fleet tracking solution to ensure zero pilferage during transport. The solution consisted of a plethora of technologies (GPS, GSM, web, mapping), and our role was to ensure that the final solution was indeed risk-free for deployment.

With the launch date just a few weeks away, we got cracking on applying HBT to extract the cleanliness criteria from the business and technical specifications outlined in the tender document. The cleanliness criteria consisted of multiple aspects: deployment environment correctness, cleanliness of the software, clean working of the hardware/software interfaces and, finally, the ability to support a large load and volume with real-time performance.

We identified the potential types of defects, spanning the entire spectrum of hardware and software. The first step was to understand the system development process: our senior consultants visited the vendor’s facility to assess the people and processes used to develop the system. This provided a clear picture of what to expect and of the work that lay ahead of us.

With that understanding of the development system, we developed a scientific strategy and the evaluation scenarios. A variety of tests were identified: individual feature validation, simulation of various business use cases, understanding of load limitations, and performance evaluation of the system.

Now we were ready to validate the final system in the data center. The first cut of the solution was used to develop a set of automated scripts for large-scale load/stress/performance testing. The system was populated with a large volume of data representing a real-life deployment. Vehicles were fitted with the vehicle-mounted unit. We were ready to roll.

The vehicles were set in motion across various terrains and at various speeds, and the mapping of the fleet on the map of India was validated. We simulated a large number of vehicles, with data arriving from the simulators at a high rate, to ensure that performance was indeed real-time.
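
To give a flavor of the simulation, here is a minimal sketch. The message format, vehicle count and reporting rate are illustrative assumptions (the real unit protocol belonged to the vendor), and a real run would send the reports to the tracking server rather than print them:

    import java.util.Random;

    /** Illustrative fleet simulator: emits GPS-style position reports for many
     *  virtual vehicles at a fixed rate. The format and numbers are made up. */
    public class FleetSimulator {
        public static void main(String[] args) throws InterruptedException {
            int vehicles = 1000;
            Random rnd = new Random(42);
            double[] lat = new double[vehicles], lon = new double[vehicles];
            for (int v = 0; v < vehicles; v++) {         // scatter around central India
                lat[v] = 21.0 + rnd.nextDouble() * 5;
                lon[v] = 75.0 + rnd.nextDouble() * 5;
            }
            for (int tick = 0; tick < 60; tick++) {      // one minute of reports
                for (int v = 0; v < vehicles; v++) {
                    lat[v] += (rnd.nextDouble() - 0.5) * 0.001;  // small movement
                    lon[v] += (rnd.nextDouble() - 0.5) * 0.001;
                    System.out.printf("VEH%04d,%.5f,%.5f,%d%n",
                            v, lat[v], lon[v], System.currentTimeMillis());
                }
                Thread.sleep(1000);                      // one report/vehicle/second
            }
        }
    }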

In addition, the deployment environment was validated, the configurations checked and the legality of the software verified. We also verified that the solution integrated with the customer’s SAP database.

Bugs popped up and were fixed. We recommended changes to the system capacity and pushed the vendor to close all critical, high and medium-priority defects before providing qualitative feedback on the solution and the potential risks. Once satisfied that our customer’s investment was safe, we gave the go-ahead to roll out the solution.

Low-cost automation challenge

A New Zealand-based customer in the health care domain embarked on a journey of migrating their Delphi-based products to Microsoft technologies. The products use specialized GUI controls that are not recognized by the typical popular tools. The company was keen to embark on automation right from the early stage of the migration. And the budget to develop the automation was tight.

We conducted a proof of concept (POC) to identify a tool that would support automation for both Delphi and VB.NET. We discovered that most popular tools were indeed not compatible with the product. The POC concluded that TestComplete did support both Delphi and VB.NET, with a few constraints: it was very cost-effective, though not user-friendly. We convinced the management of our choice. The project started off with us identifying the test cases that could be automated. Seven modules were automated and demonstrated.

We developed a reusable keyword-driven framework for the client. Both individual test case execution and batch runs were possible simply by choosing the test cases. STAG provided a detailed demo of the framework to the in-house QA team.
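
The framework itself was built in TestComplete, but the keyword-driven idea is tool-neutral. A minimal sketch, with made-up keywords and test steps: a test case is just a table of keyword/argument rows dispatched to reusable actions, so batch runs simply iterate over several such tables.

    import java.util.List;
    import java.util.Map;
    import java.util.function.Consumer;

    /** Illustrative keyword-driven core: a test case is a table of
     *  keyword + argument rows, dispatched to reusable action methods. */
    public class KeywordRunner {
        // Keyword -> action mapping; a real framework binds these to UI drivers.
        private static final Map<String, Consumer<String>> ACTIONS = Map.of(
                "OpenApp",    arg -> System.out.println("launching " + arg),
                "Click",      arg -> System.out.println("clicking "  + arg),
                "TypeText",   arg -> System.out.println("typing "    + arg),
                "VerifyText", arg -> System.out.println("verifying " + arg));

        public static void run(List<String[]> testCase) {
            for (String[] row : testCase) {             // row = {keyword, argument}
                ACTIONS.get(row[0]).accept(row[1]);
            }
        }

        public static void main(String[] args) {
            // One test case; a batch run iterates over several such tables.
            run(List.of(
                    new String[]{"OpenApp",    "PatientManager.exe"},
                    new String[]{"TypeText",   "patient-id=1001"},
                    new String[]{"Click",      "SearchButton"},
                    new String[]{"VerifyText", "Record found"}));
        }
    }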

However, some of the test cases chosen for automation were incomplete. We validated the test cases, made the necessary changes and then initiated the scripting. The automation work was divided between the STAG and customer teams; as we automated the test cases, we guided and trained the customer’s team to automate as well.

The result: by automating 326 test scenarios, the testing time was cut from 80 hours to 12 hours! We saved the customer significant money on the tool, but saved even more by enabling them to release the product to market ahead of schedule!

Validation Suite – An innovative way to leverage test assets & reduce the cost of validation

The test lifecycle produces a rich set of assets: strategy, test scenarios/cases, defects, scripts and test data. Other than using these to validate the current release of the product, how are they significantly leveraged in the future? How do these assets enable faster ramp-up, de-skilling and optimized testing? Patterns have been used as a method to extract experience and enable an ordinary person to perform like an experienced one. Sadly, patterns are not commonplace in test engineering.

Validation Suite is a product from STAG that works like patterns. Based on the HBT (Hypothesis Based Testing) methodology, it identifies potential defect types for common features/construction components, along with a common validation strategy consisting of quality levels, test types and, finally, a list of common test scenarios/cases/data. The intent is to ensure that we do not start all over again, but instead leverage the experience typically encoded as tacit knowledge in individuals.

STAG has applied this to create specific validation suites in the areas of eLearning, ERP, Bluetooth and mobile applications. These can reduce the cost of validation by 30% and reduce ramp-up time significantly. The key business benefits are (1) fast ramp-up, (2) a ready-made test strategy, (3) higher product quality, (4) faster time-to-market and (5) lower validation cost.

A validation suite for your product can be created by mining your test assets and applying the HBT methodology, yielding a structured, rich asset base that provides significant leverage later.

Delivering peace of mind – Assessing release worthiness

The product helps detect different types of telecom fraud, be it in wireless or wireline networks. It also helps detect fraud in roaming, pre-paid and post-paid environments, and is tailor-made for GSM/CDMA/fixed/GPRS networks. The product team comprised a strong development team ably supported by an in-house QA team. The product was developed using J2EE technologies and had undergone multiple versions of build – it was currently at version 6.0 – with a wide installation base in the Asian/US markets. The company had an ambitious plan to expand the product’s reach and move into a new market: Europe. The product went through multiple feature upgrades and modifications to meet the needs of the new market. Though the product was tested diligently by the in-house QA team, the management was skeptical about its release worthiness. They preferred an independent third-party product assessment to enhance their delivery confidence before the formal product launch.

STAG focused singularly on ensuring that defect escapes were minimized. Hence a three-pronged approach was adopted to determine the breadth and depth of testing required:

  • Identify what poses high business risk. What has been de-risked already? What risk remains to be assessed?
  • How well has the “net” been cast to uncover defects in the lifecycle? Are the methods to uncover defects expansive and complete?
  • Are the test cases (i.e. those inputs that have the ‘power’ to detect anomalies) good? Do the existing test cases, and therefore the tests conducted, have the power to uncover high-risk business issues?

Fixing the high-impact defects improved the stability of the product, which otherwise could have led to a USD 250K support cost in the initial months. The release worthiness certificate lowered the business risk for the customer, and the newly gained delivery confidence of the customer’s management powered a successful, on-time product launch.