T Ashok @ash_thiru on Twitter
In this article, T Ashok shares his experience deploying HBT (Hypothesis Based Testing) in an Agile environment. He recommends that the focus of testing be on “sensitise & prevent” defects rather than “design & execute” test cases.
As a consultant deploying HBT (Hypothesis Based Testing) in an Agile environment, I made an interesting discovery: the focus of testing should be on “sensitise & prevent” defects rather than “design & execute” test cases.
In the Agile environment, the focus is on decomposing the problem into small user stories and delivering them. This means that we are decomposing complexity and thereby demystifying it. By deduction, therefore, the code should be more correct than wrong. Also, because the element of delivery, a user story, is small, it should be easy to test and therefore easy to convert the test into a script. Testing is thus interwoven naturally with coding. Does that mean the code is cleaner?
In my interactions with the team, I discovered that despite the code being delivered faster, quality challenges exist. Customer-reported defects still keep the team busy. Could these challenges stem from an extreme focus on the ‘small’ and not on the larger picture?
Let me illustrate…
Situation #1

The user story in question is a logging system that creates detailed logs to enable better support. My focus was on testing it. The objective of the story is to add entries to the log. As you might surmise, the functionality is not very complex, and therefore functional test cases are indeed easy to generate. So the functional test cases were kinda simple… Hmmm, this does not seem right.
I proceeded to question beyond the typical behaviour of the user story: why are we implementing this, who is going to benefit from it, what might they expect from it, and so on. The answers I got via probing were interesting. The intent stated as the prime reason for this user story was to ‘enable better supportability by giving detailed information in the logs’. Yeah, seems the typical reason.
On questioning how it might look in real life, I discovered that this log file could be pretty long (a few thousand lines) and not exactly machine-analysable. Ouch: this means that the poor support engineer would be glued to the monitor in a kind of “edit-search mode”, looking for potentially interesting information. Hmmm… seems an onerous task that will consume non-trivial effort and time to diagnose the problem.
On laying out this potential situation to the user story team, they understood that it required serious rework, as the “usability” of the story was deeply flawed: the supportability was not getting any better. That is when a light bulb started to glow in me. I had found an interesting bug, not in the code (as it was yet to be coded) but in the design itself.
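The usability flaw above suggests an obvious remedy: emit log entries in a structured, machine-parsable form, so that support staff can filter thousands of lines mechanically instead of scanning them by eye. The article does not show the real log format, so the JSON-lines sketch below, including its field names (`component`, `level`, `asset_id`), is purely an illustrative assumption:

```python
import json
import io

def log_event(stream, component, level, message, **context):
    """Append one machine-parsable JSON log entry to the stream.

    All field names here (component, level, and the extra context
    keys) are illustrative assumptions, not the article's format.
    """
    entry = {"component": component, "level": level,
             "message": message, **context}
    stream.write(json.dumps(entry) + "\n")

# With structured entries, a support engineer can filter
# mechanically rather than reading every line:
buf = io.StringIO()
log_event(buf, "deploy", "ERROR", "asset upload failed", asset_id=42)
log_event(buf, "deploy", "INFO", "retrying upload", asset_id=42)

entries = [json.loads(line) for line in buf.getvalue().splitlines()]
errors = [e for e in entries if e["level"] == "ERROR"]
```

The point is not the particular format but that the "support engineer glued to the monitor" scenario becomes a one-line query.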
Situation #2

The user story in this case was “checkpointing”, a set of APIs that allows a developer to implement the “application-level transactions” needed in the system. It enables one to take a checkpoint, i.e. a snapshot of the system, before updating the system with new assets. Post deployment of the assets, in case of any issue, the system can be rolled back to the prior checkpoint.
Similar to Situation #1, the functional behaviour did not seem complex, and therefore the functional test cases were simple. Again my nose wrinkled in suspicion, as the test cases designed were too simple. I embarked on a detailed probe and discovered “a critical situation” where the prior checkpoint would be deleted before the current one was completed, resulting in an unrecoverable system. The light bulb in me glowed brightly: a serious flaw uncovered, once again not in the code but in the “would-be” code. Digging into the design, the assumptions made (note that we were looking for bugs related to the environment), and questioning led us to this potentially flawed situation.
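The flaw found here is an ordering problem: deleting the old checkpoint before the new one is complete leaves a window with no valid rollback point. A minimal in-memory sketch of the safe ordering follows; the class and method names (`CheckpointStore`, `begin`, `commit`) are hypothetical, since the article does not show the actual checkpointing API:

```python
class CheckpointStore:
    """Hypothetical in-memory sketch of safe checkpoint ordering.

    The prior checkpoint is discarded only AFTER the new one is
    committed, so a rollback target exists at every moment.
    """

    def __init__(self):
        self.checkpoints = []   # completed snapshots, oldest first
        self.in_progress = None

    def begin(self, snapshot):
        # Start a new checkpoint WITHOUT touching the prior one.
        self.in_progress = snapshot

    def commit(self):
        # Only once the new checkpoint is complete is it safe to
        # drop the old one; deleting earlier creates the window
        # the probe uncovered, i.e. an unrecoverable system.
        if self.in_progress is None:
            raise RuntimeError("no checkpoint in progress")
        self.checkpoints.append(self.in_progress)
        self.in_progress = None
        del self.checkpoints[:-1]   # keep only the latest snapshot

    def rollback_target(self):
        # Even mid-checkpoint, the prior snapshot remains available.
        return self.checkpoints[-1] if self.checkpoints else None

store = CheckpointStore()
store.begin({"version": 1})
store.commit()
store.begin({"version": 2})   # if we crash here, version 1 is intact
mid_update_target = store.rollback_target()
store.commit()
```

A design that deleted the prior checkpoint inside `begin` would pass the simple functional tests yet fail exactly in the crash-mid-update scenario the probing uncovered.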
A user story is like a “sutra”, an aphorism that must be delved into in detail to be understood in its entirety. And this is needed if you want to test well. Questioning is a key activity for digging into the details, as the typical documentation of a user story is condensed. Most often the functional complexity of a user story is low; the challenge is understanding the behaviour of its interactions with other stories, with the environment, and its non-functional aspects.
What I discovered is that the act of breaking the “big” into the “small” (user stories) makes one forget who the end user is and what they value. Hence it is necessary to think from the end user’s perspective: what they do, how the user story fits into the end-user flow, and how the non-functional attributes of the larger flow matter to the user story.
“Sutras” are powerful, as they communicate deep stuff in a few words. To understand the deep stuff, intense questioning is key. In the Agile context, therefore, testing is no longer an act of evaluation after coding; it is intense questioning to “sensitise & prevent” defects rather than “design & execute” test cases.
Write less. Communicate more.
Think deeply and may the light flow into you.
Have a great day.
Published in Tea Time with Testers Aug 2012