“It took a few hours to incorporate the change but took a few days to test, upsetting the customer. Why? Well, we found out that our QA team was doing too much regression. Wish we could be smarter” – Engineering Manager of a mid-sized IT company.
Have you ever felt this way? Have you wished you could do less regression and release faster?
In today's world of rapid development, software is constantly updated with new features, incremental additions and bug fixes. While new and incremental features are the focus for revenue generation and market expansion, bug fixes are necessary to ensure that customers stay.
On this path towards revenue growth, the nagging question is: “Did I break any existing features that were working well?” Answering it necessitates a regression test.
Note that as the product grows, so does the regression suite, increasing cost and slowing down releases.
Regress means ‘go backwards’, and in this context it means ‘re-examine prior quality risks to ensure that they are still under control’. This implies retesting the product from both the functionality and the attribute perspective, to ensure that features and attributes like performance, security etc. are not compromised.
So, how can one regress smartly?
* Figure out how much not to regress by doing smarter impact analysis, using a scientific approach to understand how faults propagate from a change.
* Figure out how much not to regress by analysing defect yields over time, to understand which parts of the system have hardened.
* Automation is an obvious choice, but ensure that the scenarios are “fit enough for automation” so that you don’t end up spending most of your effort keeping the scripts in sync with every change.
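To make the first point concrete, here is a minimal sketch of impact-based test selection. It assumes a hypothetical reverse dependency graph (which module depends on which) and a hypothetical mapping of test suites to the modules they exercise; in a real project these would come from import analysis, build metadata or coverage data. The names and data are illustrative, not from any actual product.

```python
from collections import deque

# Hypothetical reverse dependency graph: module -> modules that depend on it.
REVERSE_DEPS = {
    "payments": ["checkout", "invoicing"],
    "checkout": ["web_ui"],
    "invoicing": [],
    "web_ui": [],
    "reports": [],
}

# Hypothetical mapping from test suites to the modules they exercise.
TEST_COVERAGE = {
    "test_checkout": {"checkout", "payments"},
    "test_invoicing": {"invoicing", "payments"},
    "test_reports": {"reports"},
    "test_web_ui": {"web_ui"},
}

def impacted_modules(changed):
    """Walk the reverse dependency graph to find every module a change can reach."""
    seen = set(changed)
    queue = deque(changed)
    while queue:
        mod = queue.popleft()
        for dependent in REVERSE_DEPS.get(mod, []):
            if dependent not in seen:
                seen.add(dependent)
                queue.append(dependent)
    return seen

def select_tests(changed):
    """Pick only the suites that touch an impacted module."""
    impacted = impacted_modules(changed)
    return sorted(name for name, mods in TEST_COVERAGE.items() if mods & impacted)

# A change to "payments" ripples into checkout, invoicing and the web UI,
# but leaves the reports suite out of the regression run.
print(select_tests(["payments"]))
```

The point of the sketch is the shape of the analysis: the domino effect is exactly the reverse reachability from the changed modules, and anything outside that set is a candidate for not regressing.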
Change, as we all know, is inevitable, and it does cause a domino effect. The smartness lies in validating only what has the potential for that domino effect, thereby doing less, and in exploiting automation to do it faster.
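The defect-yield idea from the second bullet can be sketched just as simply. Assuming hypothetical per-cycle defect counts for each product area, an area whose recent cycles yield almost no defects has hardened and can be regressed less often. The areas, counts and threshold here are invented for illustration; any real policy would tune them to the team's data.

```python
# Hypothetical defect counts per test cycle, oldest first, for each product area.
DEFECT_HISTORY = {
    "login": [5, 2, 0, 0, 0],
    "search": [8, 6, 5, 4, 4],
    "billing": [3, 1, 1, 0, 1],
}

def is_hardened(yields, recent=3, threshold=1):
    """An area is 'hardened' if its last `recent` cycles yielded at most
    `threshold` defects in total."""
    return sum(yields[-recent:]) <= threshold

def deprioritise(history):
    """List the areas whose defect yield has decayed enough to regress less often."""
    return sorted(area for area, yields in history.items() if is_hardened(yields))

# Only "login" has gone quiet; "search" and "billing" still yield defects
# and stay in the full regression scope.
print(deprioritise(DEFECT_HISTORY))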