Analysing change impact

When an application moves into the maintenance phase of its life cycle, even small changes seem to demand a large testing effort before release. The challenge lies in optimizing that test effort. The typical approach is to automate the ‘regression’ test cases, but automation requires a non-trivial investment of money, effort and time. Is there a smarter way that enables one to scientifically choose the minimal set of test cases to regress and therefore optimize the regression effort?

Given the large set of test cases that an application accumulates, it is not practical to execute all of them whenever the application changes. It therefore becomes imperative to select the minimal subset of test cases to execute, based on the changes made. Selection relies on a deep understanding of the application's functionality and architecture, and of the changes themselves. In practice, it is common to choose a larger subset than necessary out of fear of unwanted side effects.
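As a minimal sketch of this kind of change-based selection, assuming a hand-maintained traceability map from application components to the test cases that exercise them (all component and test-case names here are hypothetical):

```python
# A minimal sketch: map changed components directly to the test cases
# that exercise them. All names are illustrative.

traceability = {
    "login":    ["TC_login_01", "TC_login_02", "TC_session_05"],
    "payments": ["TC_pay_01", "TC_pay_02", "TC_refund_03"],
    "reports":  ["TC_report_01"],
}

def select_tests(changed_components):
    """Return the union of test cases mapped to the changed components."""
    selected = set()
    for component in changed_components:
        selected.update(traceability.get(component, []))
    return sorted(selected)

print(select_tests(["payments"]))
# ['TC_pay_01', 'TC_pay_02', 'TC_refund_03']
```

A map like this captures direct impact only; it says nothing about components that merely depend on what changed, which is where the larger, fear-driven subsets tend to come from.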

Classifying and re-architecting the test cases based on Potential Defect Types (PDT) by applying HyBIST (Hypothesis-Based Immersive Session Testing) allows a clear focus on the fault-detection ability of the test suite.
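For illustration, each test case could carry the PDTs it is designed to detect, making the fault-detection ability of the suite explicit; the PDT labels below are invented for this sketch and are not HyBIST's own taxonomy:

```python
# Illustrative only: tag test cases with the Potential Defect Types (PDTs)
# they are designed to uncover. The PDT labels are assumptions for this sketch.

pdt_coverage = {
    "TC_pay_01":    {"incorrect_calculation", "boundary_handling"},
    "TC_pay_02":    {"data_validation"},
    "TC_refund_03": {"incorrect_calculation", "state_transition"},
}

def detects(test_case, pdt):
    """True if the test case targets the given potential defect type."""
    return pdt in pdt_coverage.get(test_case, set())

print(detects("TC_refund_03", "state_transition"))  # True
```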

Setting up a good baseline of the elements of the system under test, their interactions and the associated test cases enables one to understand the domino effect of changes. Connecting these two, i.e. the fault-detection ability of the test cases and the system interaction baseline, enables one to logically analyse the ‘domino effect of change’ in terms of fault propagation and therefore choose the optimal subset of test cases to regress. The resulting selection is adequate yet optimal.
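A sketch of how the two baselines might be connected, assuming a component dependency graph for impact propagation and PDT-tagged test cases as above; all component, test-case and PDT names are illustrative:

```python
# A sketch combining the two baselines: a component dependency graph for
# 'domino effect' propagation and PDT-tagged test cases for fault-detection
# focus. All names are illustrative assumptions.
from collections import deque

# edge A -> B means "A depends on B"
depends_on = {
    "reports":  {"payments"},
    "payments": {"login"},
    "login":    set(),
}

# test cases per component, each tagged with the PDTs it targets
test_cases = {
    "payments": {"TC_pay_01": {"incorrect_calculation"},
                 "TC_pay_02": {"data_validation"}},
    "reports":  {"TC_report_01": {"incorrect_calculation"}},
    "login":    {"TC_login_01": {"state_transition"}},
}

def impacted(changed):
    """Propagate impact: anything that depends, directly or transitively,
    on a changed component is potentially affected."""
    seen, queue = set(changed), deque(changed)
    while queue:
        current = queue.popleft()
        for component, deps in depends_on.items():
            if current in deps and component not in seen:
                seen.add(component)
                queue.append(component)
    return seen

def regression_subset(changed, relevant_pdts):
    """Select only test cases that exercise an impacted component AND
    target a defect type the change could plausibly introduce."""
    selected = set()
    for component in impacted(changed):
        for tc, pdts in test_cases.get(component, {}).items():
            if pdts & relevant_pdts:
                selected.add(tc)
    return sorted(selected)

# A change to 'payments' that could cause calculation faults pulls in the
# payments tests plus the downstream 'reports' test, but not 'login'.
print(regression_subset({"payments"}, {"incorrect_calculation"}))
# ['TC_pay_01', 'TC_report_01']
```

The point of the sketch is the intersection: the dependency graph bounds where a fault can propagate, and the PDT tags bound which test cases can actually catch it, so the selected subset stays small without sacrificing adequacy.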
