In the current world of rapid development, software is constantly updated with new features, incremental additions, and bug fixes. While new and incremental features drive revenue generation and market expansion, bug fixes are necessary to ensure that customers stay.
On this path of progression towards revenue enhancement, the challenge is: “Did I break any existing features that were working well?” Answering that question may necessitate a regression test.
Note that as the product grows, so does the regression effort, increasing cost and slowing down releases.
Regress means ‘to go backwards’; in this context it means ‘revisit prior quality risks to ensure that they are still under control’. The product is retested from both the functionality and the attribute aspects, to ensure that features and attributes like performance, security, etc. are not compromised.
But, how do we tackle this?
Given the necessity of ensuring that functionality and attributes are not compromised, we have to retest the functional and non-functional aspects constantly, resulting in repetitive testing.
To do this well, we typically adopt:
1. Massive regression test automation to re-test thoroughly.
2. Deep product knowledge to assess the potential impact of changes and do focused regression.
So, what is the challenge?
1. Well, automation is great, but it requires continual investment to build and maintain.
2. In-depth product knowledge is also limited to a few people, and they are always in high demand!
Hmmm, how can we do better?
Instead of focusing only on doing more, faster, could we do less, in a smarter way? Let us ask some questions to figure this out:
1. Are you doing too much regression?
Could we do a smarter impact analysis? Could there be a logical approach to analysing change impacts without relying solely on deep product knowledge? Yes, one of HBT’s techniques, “Fault propagation analysis”, could be useful here. The technique, in a nutshell, asks: “Given that an entity has been modified and is linked to other entities, what types of defects can indeed propagate and affect the linked entities?”
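To make the idea concrete, here is a minimal sketch of such an analysis, assuming a hypothetical dependency model in which each link between entities is annotated with the defect types that can cross it. The entity names, defect types, and graph shape are all illustrative inventions; HBT does not prescribe code for this technique.

```python
from collections import deque

# Hypothetical dependency graph: for each entity, the entities that depend
# on it, annotated with the defect types that can propagate across that link.
DEPENDENCIES = {
    "PricingService": {"CartTotals": {"data", "timing"},
                       "InvoiceGen": {"data"}},
    "CartTotals":     {"CheckoutUI": {"data"}},
    "InvoiceGen":     {},
    "CheckoutUI":     {},
}

def impacted_entities(modified, graph):
    """Breadth-first walk from the modified entity, collecting every linked
    entity together with the defect types that can actually reach it."""
    impact = {}  # entity -> set of defect types that can reach it
    queue = deque([(modified, None)])  # None = source, all defect types possible
    while queue:
        entity, incoming = queue.popleft()
        for dependent, link_types in graph.get(entity, {}).items():
            # A defect type keeps propagating only if this link allows it too.
            reachable = link_types if incoming is None else link_types & incoming
            new_types = reachable - impact.get(dependent, set())
            if new_types:
                impact.setdefault(dependent, set()).update(new_types)
                queue.append((dependent, set(impact[dependent])))
    return impact

# Which entities need focused regression if PricingService was modified?
print(impacted_entities("PricingService", DEPENDENCIES))
```

The point of the sketch is that the regression scope for each dependent entity falls out of the graph, rather than out of someone’s memory of the product: only ‘data’ defects can reach CheckoutUI here, so its timing tests need not be rerun.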
2. Is your defect yield from regression good enough?
Software hardens with time, i.e. becomes fit. This implies that the same test cases executed later yield fewer defects, i.e. the test case yield drops. So the lingering question is: “should we be executing these at all?” Just as living beings develop resistance to certain diseases over time, the software can also be thought of as becoming ‘resistant to test cases’. In HBT, we call this ‘Test case immunity’ and use it to logically ascertain which test cases may be dropped, and therefore do less.
3. Are your test scenarios fit enough for automation?
If the software is volatile, automation is even more volatile! Changes to the software necessitate that the automation stay in sync. To enable rapid modification, frameworks are used. That is great, but did you know that the structure, i.e. the architecture, of the test cases also matters? It is not just about frameworks and great code; it is about how well the test cases are organised. In HBT this is done using a technique called “Levelisation analysis”, which ascertains whether the test cases are organised into well-formed levels, enabling rapid automation with rapid modifiability.
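One way to picture levelisation is as layering test scenarios by their dependencies: each scenario’s level is one more than the highest level it builds on, and a cycle means the levels are not well-formed. The scenario names and dependency map below are invented for illustration; this is a sketch of the layering idea, not HBT’s prescribed procedure.

```python
# Hypothetical dependency map: each test scenario lists the lower-level
# scenarios it builds on.
DEPENDS_ON = {
    "smoke_login":   [],
    "add_to_cart":   ["smoke_login"],
    "apply_coupon":  ["add_to_cart"],
    "checkout_flow": ["add_to_cart", "apply_coupon"],
}

def levelise(depends_on):
    """Assign each scenario a level = 1 + the max level of its dependencies.
    Well-formed levels mean a change at level N only forces re-automation
    at level N and above, never below."""
    levels = {}

    def level(name, seen=()):
        if name in seen:
            raise ValueError(f"cycle through {name!r}: levels are not well-formed")
        if name not in levels:
            deps = depends_on[name]
            levels[name] = 0 if not deps else 1 + max(
                level(d, seen + (name,)) for d in deps)
        return levels[name]

    for name in depends_on:
        level(name)
    return levels

print(levelise(DEPENDS_ON))
```

Here a change to `add_to_cart` can only ripple upwards to `apply_coupon` and `checkout_flow`, so automation maintenance stays localised; a cyclic dependency would raise an error, signalling poorly organised test cases.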
In closing: SMART REGRESSION
In summary, the three questions were all about “How can we do less to achieve more?” Do less regression. Do less automation maintenance. And thereby perform smart regression to progress further.
Smart regression complements doing things faster via automation by enabling one to do less.