Intelligent Automation – What does it take?

As we mature, automation needs to become smarter and more intelligent to enable us to make superior decisions faster. It is no longer about being a servile appendage that assists in doing things faster. It is about a leap from mindless repeated testing to continuous health assessment, providing valuable information about the state of the system and enabling us to do less, yet deliver to meet continuously increasing customer expectations.

Intelligence means scripts that are easy to build & maintain, scripts that are purposeful, enabling me to clearly identify the problem, scripts that are resilient so that the maximum number of scripts executes in a run, and finally scripts that are intelligent enough to analyze outcomes and suggest actions rather than merely report raw results.

Ultimately intelligence is about “Enable me to do more with less” – Smarter. Cheaper. Better. Building scripts quickly, adapting them to newer versions quickly and cheaply, and being smarter by analyzing outcomes rather than dumping large volumes of data into a report. It is no longer about fighting and solving technical problems, but about graduating to a higher level of delivering “Do more, but do NO more”.

Let us step back for a minute and examine the objective of automated testing. Initially the test scenarios and the corresponding scripts are ‘defect seeking’ in nature, focused on uncovering defects. As the system matures with time, the ‘potent’ test cases become ‘immune’ and the objective shifts to health checking rather than finding defects.

Intelligence requires that we are able to diagnose the outcomes and display the health of the system clearly, to instill confidence rather than merely present data.

Now what is the role of test scenarios in intelligent automation? It would be desirable to ensure that scenarios are not very volatile. Multi-faceted test scenarios that can uncover various defect types make the script inherently complex, slowing down the ability to build rapidly and adapt quickly. It is essential that the scenarios be analyzed for fitness and broken down into multiple levels (in Hypothesis Based Testing, there are NINE quality levels) so that the scripts are purposeful and small.

Now what would it take to create scripts intelligently? Other than handcrafting, we have various choices: (1) Build them by assembling components (2) Do not build at all, generate them (3) If they must be handcrafted, ensure the framework is smart enough that building them requires less intelligence, i.e. a factory model of building.
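
To make the ‘assemble, don’t handcraft’ idea concrete, here is a minimal sketch in Python; the component names (login, create_order, verify_order) and the assembly helper are illustrative assumptions, not a prescribed implementation.

    # A test action packaged as a reusable component (names are hypothetical).
    class Component:
        def __init__(self, name, action):
            self.name = name
            self.action = action              # callable that performs one step

        def run(self, context):
            return self.action(context)

    # Factory-style assembly: a script is just an ordered wiring of components.
    def assemble(*components):
        def script(context):
            for component in components:
                if not component.run(context):
                    print("failed at:", component.name)
                    return False
            return True
        return script

    # Illustrative components; real ones would drive the application under test.
    login        = Component("login",        lambda ctx: ctx.setdefault("user", "demo") == "demo")
    create_order = Component("create_order", lambda ctx: ctx.update(order="O-1") or True)
    verify_order = Component("verify_order", lambda ctx: ctx.get("order") == "O-1")

    # 'Building' a new script reduces to choosing and ordering components.
    place_order = assemble(login, create_order, verify_order)
    print(place_order({}))                    # True when every step passes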

To enable smartness, the framework structure needs to be flexible. Only then can the script be adapted quickly by “reconfiguring the connections” or by “re-assembling/re-wiring components”. And to top it all, we can embed the intelligence of testability inside the system, rather than have external intelligent scripts that assess the system. This would result in a system that can assess itself, the highest form of intelligence!
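
As a sketch of the ‘system that assesses itself’ idea, imagine the product exposing a built-in health hook that scripts merely invoke; the service and its checks below are hypothetical.

    # Testability embedded inside the system rather than in external scripts.
    class OrderService:                        # hypothetical system under test
        def __init__(self):
            self.db_connected = True
            self.queue_depth = 7

        def self_assess(self):
            # The system knows its own invariants and reports its own health.
            checks = {
                "database reachable": self.db_connected,
                "queue not backed up": self.queue_depth < 100,
            }
            return all(checks.values()), checks

    # An external script no longer probes internals; it simply asks for the verdict.
    healthy, details = OrderService().self_assess()
    print("HEALTHY" if healthy else "DEGRADED", details)
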
Moving from creation to execution, what does it take to do intelligent execution? (1) Run as many scripts as possible without stopping, intelligently ‘jumping over obstacles’ so as to maximize the number of scripts executed; the obstacles posed may be defects, environment or setup issues (2) Rapid setup/teardown to create the necessary environment, adapting as needed to minimize issues that prevent a script from running (3) Finally, rapid adjustment to adapt to a new test environment.
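
A small sketch of this ‘execution design’, assuming hypothetical scripts and a throwaway environment: each script gets its own setup/teardown, and an obstacle blocks only that script, not the whole run.

    # Keep going past obstacles so that the maximum number of scripts executes.
    def run_suite(scripts, setup, teardown):
        results = {}
        for name, script in scripts.items():
            try:
                env = setup()                  # rapid, per-script environment creation
                results[name] = "PASS" if script(env) else "FAIL"
            except Exception as obstacle:      # defect, environment or setup issue
                results[name] = "BLOCKED: " + str(obstacle)
            finally:
                teardown()                     # leave no residue for the next script
        return results

    def search(env):
        raise RuntimeError("test data missing")   # a simulated obstacle

    scripts = {"login": lambda env: True, "search": search, "report": lambda env: True}
    print(run_suite(scripts, setup=dict, teardown=lambda: None))
    # e.g. {'login': 'PASS', 'search': 'BLOCKED: test data missing', 'report': 'PASS'}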

Lastly, let us appreciate the role of intelligence in test outcomes. Automated testing typically generates detailed reports that require deeper analysis to extract information. Intelligent reporting is about minimizing this, about presenting crisp descriptive analysis rather than voluminous outcomes.
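
A sketch of what ‘crisp’ could mean, assuming outcomes shaped like those above; the verdict thresholds are arbitrary illustrations.

    # Condense raw outcomes into a short health verdict instead of dumping them all.
    def summarize(results):
        total   = len(results)
        passed  = sum(1 for r in results.values() if r == "PASS")
        blocked = [name for name, r in results.items() if r.startswith("BLOCKED")]
        if passed == total:
            verdict = "HEALTHY"
        elif passed >= 0.9 * total:
            verdict = "AT RISK"
        else:
            verdict = "UNHEALTHY"
        return "{}: {}/{} passed; blocked: {}".format(verdict, passed, total, blocked or "none")

    print(summarize({"login": "PASS", "search": "BLOCKED: test data missing", "report": "PASS"}))
    # -> UNHEALTHY: 2/3 passed; blocked: ['search']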


So what does it take to make intelligent automation? The mindmap alongside summarizes this: fitness of test cases, ‘levelized’ test cases that are purposeful, good ‘execution design’ to maximize runs, rapid script creation, rapid adaptation and crisply analyzed outcomes.

 


Yin and Yang in Testing: The magic is in the middle

As we mature we see more opposites. Is the objective of testing to find more issues or to prevent them by being more sensitive? Is behavioural information more suitable than structural information to design test cases? Is automated testing superior at finding issues?

There are very many things in the discipline of testing that seem contrary, like the Yin and Yang, creating a constant tension as to ‘what-to-choose’.

This explores the various opposites and outlines the view that “The magic is in the middle”: that it is not a tussle, but a perfect state with the opposites balancing each other.

Let us look at the act of understanding… Should we know the entire system completely, in detail? Will this bias us? Should we attempt to understand the whole at one go, or should we understand only as much as required and defer the rest to later? What should we understand – how the system should-work/is-working, or how it is architected/built? And what is the granularity to which we need to understand – precise or approximate? And do we read the spec and understand, or play/experiment and figure it out?

As we can discern from here, it is ‘neither this nor that’, but something in between. What the middle is, is based on experience. And experience is in knowing the governing variables that enable decision making. Some of these are:
1. State of the product/application (brand new vs. existing)
2. Area of complexity (internal, i.e. structural, OR external, i.e. behavioural)
3. Degree of availability of information.
And the application of general principles like onion peeling, experimenting in addition to trusting, and approximating and iterating.

Let us now explore the opposites in test design:
‣ Should we have more test cases or have focussed & specialised test cases?
‣ Which information is more suitable to design test cases – external behavioural information or internal structural information?
‣ Should we take the viewpoint of end user or deep technical viewpoint to design?
‣ And what should the distribution of positive & negative test cases look like?

And once again, it is neither of the extremes, it is the middle! And what are the governing variables here? Some of these are: the quality level for which we are designing (early stage building blocks, or end user use cases), the number of inputs to combine to generate test cases, and the type of defect we are looking to uncover.

Moving on to the aspect of test execution – is automated testing better than using intelligent humans? Should we do more cycles of execution or fewer? And finally, should we test more often, i.e. scrub more?
The middle is based on some of these governing variables – whether the objective is stability/cadence (health check) or defect yield, and the frequency of feature addition.

Lastly, examining the process of validation: how heavy or light should the process be, how disciplined versus creative should we be, how much thinking (a-priori) versus observation & learning should we adopt, and how much compliance & control versus freedom? A process is most often mistakenly understood as an inhibitor of creativity, when the real intent is to ‘industrialise’ the common things and ensure clarity in interactions, so that we can channel our energies from the mundane stuff to the higher order and deliver performance. Doing this requires us to be mindful, be in the present and ‘enjoy the doing’. Some of the variables that govern this are: complexity, error proneness, degree of availability of information, and the fun factor.

Now the big question – is the objective of testing to detect issues, or to enable us to have a heightened sensitivity so that we can prevent issues? You figure this out.

Summarising, there exist opposites like the yin and yang. The trick is to find the middle that is harmonious, requiring the right mix of the two polar ends. Not via a compromise, but by a wise choice of the middle, discovered using a set of governing variables gained with experience. The harmony that we experience in the middle, when we balance the two ends, is the “magic”.

On a philosophical note, zero is said to be infinity! This means when you are empty, unattached, you are filled with bliss! And they are not really opposites as we perceive; the magic is in the middle – ‘the bliss’. As a long distance endurance cyclist, I discovered the magic, the bliss, when I focused on the front wheel rather than the mile marker during long rides lasting multiple hours. Not to be worried about the distance to the destination, or to judge the performance by the distance covered, but to enjoy the cycling. The magic is to be in the present, to be mindful. Not the past, nor the future.

The magic is in the middle.
Recognise it, exploit it harmoniously to deliver high performance. Cheers.


Low cost automation challenge

A New Zealand-based customer in the health care domain embarked on a journey of migrating their Delphi-based products to Microsoft technologies. The products use specialized GUI controls that are not recognized by most popular tools. The company was keen to embark on automation right from the early stage of migration. And the budget to develop automation was tight.

We conducted a Proof-Of-Concept (POC) to identify a tool that would support automation for both Delphi and VB.Net. We discovered that most popular tools were indeed not compatible with the product. The POC concluded that Test Complete did support both Delphi and VB.Net, with a few constraints. It was very cost effective, though not very user friendly. We convinced the management of our choice. The project started off with us identifying test cases which could be automated. Seven modules were automated and demonstrated.

We developed a reusable Keyword-Driven Framework for the client. Both individual test case execution and batch runs were possible simply by choosing the test cases. STAG provided a detailed demo of the framework to the in-house QA team.
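
The actual framework was built on Test Complete; purely as an illustration of the keyword-driven idea, here is a minimal Python sketch with hypothetical keywords and test cases.

    # A test case is a table of (keyword, arguments) rows; the framework interprets it.
    KEYWORDS = {
        "open_patient_record": lambda args: print("opening record", *args),
        "enter_vitals":        lambda args: print("entering vitals", *args),
        "verify_saved":        lambda args: print("verifying save", *args),
    }

    def run_test_case(steps):
        for keyword, args in steps:
            KEYWORDS[keyword](args)

    def run_batch(test_cases, selected):
        # Batch run: execute only the test cases the tester has chosen.
        for name in selected:
            print("---", name, "---")
            run_test_case(test_cases[name])

    test_cases = {
        "TC01_add_vitals": [("open_patient_record", ["P-100"]),
                            ("enter_vitals",        ["bp=120/80"]),
                            ("verify_saved",        [])],
    }
    run_batch(test_cases, selected=["TC01_add_vitals"])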

However, some of the test cases chosen for automation were not complete. We validated the test cases, made the necessary changes and then initiated the scripting. The automation work was divided between STAG and the customer’s team. As we automated the test cases, we guided and trained the customer’s team to automate.

The result – by automating 326 test scenarios, the testing time was cut down from 80 hours to 12 hours! We saved the customer significant money on the tool, and even more by enabling them to release the product to market ahead of schedule!


We demystified the automation puzzle. Relentless validation tamed!

A large global provider of BI solutions has a product suite that runs on five platforms and supports thirteen languages, with each platform suite requiring multiple machines to deliver the BI solution. The entire multi-platform suite is released on a single CD multiple times a year.

The problem that stumped them was “how to automate the final-install validation of multi-platform distributed product”. They had automated the testing of the individual components using SilkTest, but they were challenged with “how to unify this and run off a central console on various platforms at the same time”.

Considering each platform combination took about a day, this required approximately two months of final installation build validation, and by the time they were done with this release, the next release was waiting! This was a relentless exercise, consuming significant QA bandwidth and time, and did not allow the team to do more interesting or important things.

The senior management wanted single-push-button automation – identify which platform combination to schedule next, allocate machines automatically from the server farm, install and configure automatically, fire the appropriate Silk scripts and monitor progress – to significantly reduce time and cost by lowering the QA bandwidth involved in this effort. After deep analysis, the in-house QA team decided this was a fairly complex automation puzzle that required a specialist! This is when we were brought in.

After an intense deep-dive lasting about four weeks, we came up with a custom master-slave test infrastructure architecture that allowed a central console to schedule various jobs onto the slaves, utilizing a custom-developed control & monitoring protocol. The solution was built using Java-Swing, Perl, Expect and adapters to handle Silk scripts. Some parts of the solution were on the Windows platform while others were on UNIX. This custom infrastructure allowed for scheduling parallel test runs, automatic allocation of machines from a server farm, installing appropriate components on the appropriate machines, configuring them and finally monitoring the progress of validation through a web console.
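
The real implementation was in Java-Swing, Perl and Expect; the sketch below, in Python with hypothetical machine names and messages, only illustrates the master’s scheduling loop and the shape of the control & monitoring protocol.

    # Master side: allocate machines from the farm and drive slave agents via simple commands.
    def send(machine, message):
        # Stand-in for the wire protocol (e.g. a socket message to the slave agent).
        print(machine, "<-", message)

    def schedule(platform_combinations, server_farm):
        free, status = list(server_farm), {}
        for combo in platform_combinations:
            if not free:
                status[combo] = {"state": "QUEUED"}      # wait until a slave frees up
                continue
            machine = free.pop(0)                        # automatic allocation
            send(machine, {"cmd": "install",  "suite": combo})
            send(machine, {"cmd": "run_silk", "suite": combo})
            status[combo] = {"machine": machine, "state": "RUNNING"}
        return status                                    # polled by the web console

    print(schedule(["win2003_en", "solaris_ja"], server_farm=["lab-01", "lab-02"]))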

This test infrastructure enabled a significant reduction in multi-platform configuration validation time: the effort came down from eight weeks to three weeks. We enjoyed this work simply because it was indeed boutique work fraught with quite a few challenges. We believe this was possible because we analyzed the challenging problem wearing a development hat and not a functional test automation hat.
