
LEAN: It is not about doing more, it is about doing less

So, what should really be happening in the Agile context?
Lean thinking is what inspired the Agile movement. Lean is about not producing waste in the first place: doing things ‘clean’ from the start, so that waste is ideally never created. Waste in the software context is bugs, and in the early stages these are ‘unit bugs’. Since our focus in Agile is to find these earlier and to ensure that they never reappear whenever we modify the code, we resort to a high degree of automation. Therefore we build a large body of automated test cases at the lower levels so that we can execute them continually. This is great, but should we not focus on adopting a practice that in essence prevents these issues and lessens the need for a large number of unit tests to uncover them?

It is not about doing more, it is about doing less
When we find issues in the product/app, especially those that could have been caught earlier, we respond with more rigorous dev test and an extreme focus on automation. Yes, that seems logical. But wait a minute: for a developer already busy writing code, is this the right approach? Given that dev test is largely about issues at L1 thru L4, could we not focus on getting these right in the first place, or statically assess them via a smart checklist? (See the sketch below.)
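For illustration, such a checklist can live alongside the code and be applied statically during review rather than as yet more executed tests. Here is a minimal Python sketch; the checklist items below are purely hypothetical examples, not the actual SmartDevChecklist items from the ebook.

```python
# A purely illustrative sketch of a static dev checklist.
# The items below are hypothetical examples, not the actual
# SmartDevChecklist from the ebook.

DEV_CHECKLIST = [
    "Are all inputs validated (nulls, ranges, formats)?",
    "Is every error path handled, not just the happy path?",
    "Are resources (files, sockets, locks) released on all exit paths?",
    "Have boundary values (0, 1, max, empty) been considered?",
]

def review(change_description: str) -> None:
    """Walk the developer through each question for a given change."""
    print(f"Reviewing: {change_description}")
    for i, question in enumerate(DEV_CHECKLIST, start=1):
        print(f"  {i}. {question}")

# Hypothetical usage: run the checklist against a change before committing.
review("Add retry logic to the payment client")
```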

Great quality early-stage code is not about doing more testing; it really is about doing less testing, by enabling a sharper focus on ‘what-can-go-wrong’ and ‘have-you-considered-this’.

The ebook outlines in detail how to do DevTest purposefully in the “LEANest” way, clearly stating what issues a dev has to go after, and presents the SmartDevChecklist to do this in the most LEAN way.


Agile Sutra – “Sensitize & Prevent”, not “Design & Execute”

T Ashok @ash_thiru on Twitter

Summary

In the Agile environment, the focus is on decomposing complexity and thereby demystifying it. In the process, the code should be more correct than wrong. Also, because the element of delivery, a user story, is small, it should be easy to test and therefore easy to convert the test into a script. Testing is thus interwoven naturally with coding. Does this mean that the code is cleaner?

However, despite the code being delivered faster, quality challenges persist. Customer-reported defects still keep the team busy. Are these challenges due to an extreme focus on the ‘small’ and not on the larger picture?

In this article, T Ashok shares his experience of deploying HBT (Hypothesis Based Testing) in an Agile environment. He recommends that the focus of testing should be on “sensitise & prevent” defects rather than “design & execute” test cases.


As a consultant deploying HBT (Hypothesis Based Testing) in an Agile environment, I made an interesting discovery: that the focus of testing should be on “sensitise & prevent” defects rather than “design & execute” test cases.

In the Agile environment, the focus is on decomposing the problem into small user stories and delivering them. This means that we are decomposing complexity and thereby demystifying it. By deduction, therefore, the code should be more correct than wrong. Also, because the element of delivery, a user story, is small, it should be easy to test and therefore easy to convert the test into a script. Testing is thus interwoven naturally with coding. Does this mean that the code is cleaner?

In my interactions with the team, I discovered that despite the code being delivered faster, quality challenges persist. Customer-reported defects still keep the team busy. Are these challenges due to an extreme focus on the ‘small’ and not on the larger picture?

Let me illustrate…
Situation #1

The user story in question was a logging system that creates detailed logs to enable better supportability. My focus was on testing it. The objective of the story was to add entries to the log. As you might surmise, the functionality is not very complex, and therefore functional test cases are indeed easy to generate. The functional test cases were kinda simple… Hmmm, this did not seem right.

I proceeded to question beyond the typical behaviour of the user story: why are we implementing this, who is going to benefit from it, what might they expect from it, and so on. The answers I got via probing were interesting. The prime reason stated for this user story was to ‘enable better supportability by giving detailed information in the logs’. Yeah, seems like the typical reason.

On questioning how it might look in real life, I discovered that this log file could be pretty long (a few thousand lines) and not exactly machine-analysable. Ouch: this means that the poor support guy would be glued to the monitor in a kinda “edit-search mode”, looking for potentially interesting information. Hmmm… seems an onerous task that would consume non-trivial effort/time to diagnose the problem.

On laying out this potential situation to the user story team, they understood that it required serious rework, as the “usability” of the story was deeply flawed: supportability was not getting any better. That is when a light bulb started to glow in me. I had found an interesting bug, not in the code (as it was yet to be coded) but in the design itself.
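To make the flaw concrete: a log meant for supportability should be machine-analysable, not merely long. Here is a minimal sketch, assuming Python’s standard logging module; the field names and logger name are illustrative only, not from the actual user story.

```python
import json
import logging

# A minimal sketch of machine-analysable logging, assuming Python's
# standard logging module. Field names here are illustrative only.
class JsonFormatter(logging.Formatter):
    def format(self, record: logging.LogRecord) -> str:
        # Emit one JSON object per line so support tooling can filter
        # and aggregate, instead of eyeballing thousands of lines.
        return json.dumps({
            "time": self.formatTime(record),
            "level": record.levelname,
            "component": record.name,
            "message": record.getMessage(),
        })

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
log = logging.getLogger("asset.deploy")  # hypothetical component name
log.addHandler(handler)
log.setLevel(logging.INFO)

log.info("asset update started")
log.error("asset checksum mismatch")
```

With one JSON object per line, a support engineer can filter and aggregate with standard tooling rather than sit in “edit-search mode”.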

Situation #2

The user story in this case was “checkpointing”: a set of APIs that allows a developer to implement the “application level transactions” needed in the system. This enables one to take a checkpoint, i.e. a snapshot of the system, before updating the system with new assets. Post deployment of the assets, in case of any issue, the system can be rolled back to the prior checkpoint.

Similar to Situation #1, the functional behaviour did not seem complex and therefore the functional test cases were simple. Again my nose wrinkled in suspicion, as the test cases designed were too simple. I embarked on a detailed probe and discovered “a critical situation” where the prior checkpoint would be deleted before the current one completed, resulting in an unrecoverable system. The light bulb in me glowed brightly: a serious flaw uncovered, once again not in the code but in the “would-be” code. This surfaced when we dug into the design of the code and the assumptions made (note that we were looking for bugs related to the environment); the questioning led to this potentially flawed situation.
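To see why the ordering matters, here is a hedged sketch; the checkpoint API below is hypothetical, not the actual one from the user story. The flawed sequence deletes the prior checkpoint before the new one is complete, so a failure in between leaves nothing to roll back to.

```python
# A hypothetical sketch of the checkpoint-ordering flaw; the API
# names are illustrative, not the actual ones from the user story.

class CheckpointStore:
    def __init__(self):
        self._checkpoints = []  # oldest first

    def create(self, snapshot):
        self._checkpoints.append(snapshot)

    def delete_oldest(self):
        if self._checkpoints:
            self._checkpoints.pop(0)

    def latest(self):
        return self._checkpoints[-1] if self._checkpoints else None

def update_assets_flawed(store, take_snapshot, deploy):
    # FLAWED: the prior checkpoint is gone before the new one exists.
    # If take_snapshot() or deploy() fails here, there is no checkpoint
    # left to roll back to -- the system is unrecoverable.
    store.delete_oldest()
    store.create(take_snapshot())
    deploy()

def update_assets_safe(store, take_snapshot, deploy):
    # SAFE: complete the new checkpoint first, deploy, and only then
    # retire the prior checkpoint.
    store.create(take_snapshot())
    deploy()
    store.delete_oldest()
```

The safe ordering never leaves the system without at least one complete checkpoint to fall back on.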

The discovery…
A user story is like a “sutra”, an aphorism that needs to be delved into in detail to be understood in its entirety. And this is needed if you want to test well. Questioning is a key activity for digging into the details, as the typical documentation of a user story is condensed. Most often the functional complexity of a user story is low; the challenge is understanding the behaviour of its interactions with other stories, the environment, and the non-functional aspects.

What I discovered is that the act of breaking the “big” into the “small” (user stories) makes one forget who the end user is and what they value. Hence it is necessary to think from the end user’s perspective: what they do, how the user story fits into the end user’s flow, and how the non-functional attributes of the larger flow matter to the user story.

“Sutras” are powerful, as they communicate deep stuff in a few words. To understand the deep stuff, intense questioning is key. Therefore, in the Agile context, testing is no longer an act of evaluation post coding; it is about intense questioning to “sensitise & prevent defects” rather than “design & execute test cases”.

Write less. Communicate more.
Think deeply and may the light flow into you.

Have a great day.

Published in Tea Time with Testers Aug 2012