
LEAN: It is not about doing more, it is about doing less

So, what should be really happening in Agile context?
Lean thinking is what inspired the Agile movement. Lean is about not producing waste in the first place; it is about doing things ‘clean’ from the start, so that waste is ideally never created. In the software context, waste is bugs, and in the early stages these are ‘unit bugs’. Since our focus in Agile is to find these earlier, and to ensure that they never reappear whenever we modify code, we resort to a high degree of automation. We therefore build a large body of automated test cases at the lower levels so that we can execute them continually. This is great, but should we not also adopt a practice that, in essence, prevents these issues and lessens the need for a large number of unit tests to uncover them?

It is not about doing more, it is about doing less
When we find issues in the product/app, especially those that could have been caught earlier, we respond with more rigorous dev testing and an extreme focus on automation. Yes, that seems logical. But wait a minute: for a developer already busy writing code, is this the right approach? Given that dev testing is largely about issues at levels L1 through L4, could we not focus on getting those right up front, or statically assess them via a smart checklist?
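To make the checklist idea concrete, here is a minimal sketch of what a checklist-driven static assessment could look like. The categories, questions and the `review` helper below are illustrative inventions, not the actual SmartDevChecklist from the ebook.

```python
# A minimal sketch of a "smart checklist" for static dev assessment.
# Categories and questions are illustrative examples only.

CHECKLIST = {
    "inputs": [
        "Are all inputs validated for type, range and format?",
        "Is empty/None input rejected with a clear error?",
    ],
    "interface": [
        "Are out-of-order or repeated calls handled?",
    ],
    "internals": [
        "Are resources (files, connections) released on every exit path?",
        "Are exceptions handled rather than swallowed?",
    ],
}

def review(unit_name, answers):
    """Return the checklist questions still open for a unit,
    given a dict of question -> yes/no answers."""
    open_items = []
    for category, questions in CHECKLIST.items():
        for q in questions:
            if not answers.get(q, False):  # unanswered or answered "no"
                open_items.append((category, q))
    return open_items

# Example: a unit where only input validation has been thought through.
pending = review("parse_order", {
    "Are all inputs validated for type, range and format?": True,
})
for category, question in pending:
    print(f"[{category}] {question}")
```

The point of such a sketch is that the developer answers a handful of pointed questions before (or instead of) writing a pile of unit tests, surfacing the ‘what-can-go-wrong’ gaps statically.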

Great quality early-stage code is not about doing more testing; it is really about doing less testing, enabled by a sharper focus on ‘what can go wrong’ and ‘have you considered this’.

The ebook outlines in detail how to do DevTest in the “LEANest” way, clearly identifying which issues a dev has to go after, and presents SmartDevChecklist as the means to do this.


Poor quality code is due to compromised “Unit Testing”

The problem at large
The quality of early-stage code is a concern for many of the engineering managers I talk to. In my numerous consulting assignments, I have noticed that many issues found by QA folks are ones that do not require the expertise of a specialist tester; this compromises the effectiveness of ‘system test’ and results in avoidable customer issues.

Great quality code is not the result of intense system testing; it is the result of well-structured filtration of issues from the early stages. A compromised ‘unit test’ puts unnecessary strain on the QA folks, who feel compelled to go after these issues at the expense of system test.

Developers, on the other hand, do not deliberately write bad code; accidents simply happen. Accidents seem to happen when unit testing is pushed by brute force without being made simple and practical, and developers already short of time are unable to adhere to a heavyweight process. Another fallacy seems to be the over-dependence on automated unit tests as the saviour, without paying attention to the test cases themselves. The incorrect notion of unit testing as purely white-box, with a skew towards code coverage, also results in ineffective, introverted tests. Lastly, the sheer emphasis on dynamic testing as the only method of uncovering defects is overwhelming, when easier static methods could have been employed.

Business/economic impact
The impact of issues leaking from the early stages is not merely irritating; it is serious. Customer-reported issues that are really early-stage simple issues, like poor validation of inputs, result in a significant drop in customer confidence. When QA folks focus on these issues, their real job of system validation suffers, resulting in field issues related to end-to-end flows and, sometimes, compromised attributes.

The misdirected focus of a specialist QA also leaves insufficient time for the things that make system test more effective and efficient: automating end-to-end flows, focusing on non-functional requirements, revising the test strategy/approach, and sharpening it with the knowledge gained every cycle.

Yes, this age-old problem is boring. Don’t force your developers to do more unit testing to solve it. Ask them to do it smartly, by doing less. If you are keen to know how, check out the e-book listed below.


Is regression hindering your progression?

“It took a few hours to incorporate the change but took a few days to test, upsetting the customer. Why? Well, we found out that our QA team was doing too much regression. Wish we could be smarter” – Engineering Manager of a mid-sized IT company.

Have you ever felt this way? Have you wished you could do less regression and release faster?

In the current world of rapid development, software is constantly updated with new features, incremental additions and bug fixes. While features (new & incremental) are the focus for revenue generation and market expansion, bug fixes are necessary to ensure that customers stay.

While on the path of progression towards revenue enhancement, the challenge is “Did I break any existing features that are working well”? That may necessitate a regression test.

Note that as the product grows, so does regression, increasing cost and slowing down releases.

Regress means ‘go backwards’, and in this context it means ‘check that prior quality risks are still under control’. This implies that the product is retested on both functionality and attributes, to ensure that neither the functionality nor attributes like performance, security etc. are compromised.

So, how can one regress smartly?
* Figure out how much not to regress by doing a smarter impact analysis using a scientific approach to understand fault propagation due to change.
* Figure out how much not to regress by analysing defect yields over time to understand what parts of the system have been hardened
* Automation is an obvious choice; ensure that the scenarios are “fit enough for automation” so that you don’t end up spending excessive effort keeping the scripts in sync with every change.
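The first bullet, smarter impact analysis, can be sketched as a walk over a dependency graph that selects only the suites covering impacted modules. The module names, suite mapping and helper functions below are hypothetical, purely to illustrate the shape of the approach.

```python
from collections import deque

# Hypothetical module dependency graph: an entry A -> [B, ...] means
# "B depends on A", so a change in A can propagate to B.
depends_on_me = {
    "auth":     ["checkout", "profile"],
    "checkout": ["reports"],
    "profile":  [],
    "reports":  [],
    "search":   [],
}

# Hypothetical mapping of regression suites to the modules they cover.
suites = {
    "auth_suite":     {"auth"},
    "checkout_suite": {"checkout", "reports"},
    "search_suite":   {"search"},
}

def impacted_modules(changed):
    """Breadth-first walk from the changed modules to find all
    modules a change could ripple into."""
    seen, queue = set(changed), deque(changed)
    while queue:
        m = queue.popleft()
        for dependent in depends_on_me.get(m, []):
            if dependent not in seen:
                seen.add(dependent)
                queue.append(dependent)
    return seen

def suites_to_run(changed):
    """Select only the suites that touch an impacted module."""
    impact = impacted_modules(changed)
    return sorted(name for name, covered in suites.items() if covered & impact)

print(suites_to_run(["auth"]))  # search_suite is safely skipped
```

With a change in `auth`, only the auth and checkout suites are selected; the search suite is skipped, which is exactly the “figure out how much *not* to regress” idea.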

Change, as we all know, is inevitable, and it does cause a domino effect. The smartness lies in validating only those parts that have the potential for a domino effect, thereby doing less, and in exploiting automation to do it faster.

Here is the link to TWO aids that can enable your QA team to regress smartly. Oh, ask your QA team to read this article before they use the tool.

tools for smart regression

Ideas to regress smartly


The context
In the current world of rapid development, software is constantly updated with new features, incremental additions and bug fixes. While features (new & incremental) are the focus for revenue generation and market expansion, bug fixes are necessary to ensure that customers stay.

While on the path of progression towards revenue enhancement, the challenge is “Did I break any existing features that are working well”? That may necessitate a regression test.

Note that as the product grows, so does regression, increasing cost and slowing down releases.

Regression
Regress means ‘go backwards’; in this context it means ‘check that prior quality risks are still under control’. The product is retested on both functionality and attributes, to ensure that neither the functionality nor attributes like performance, security etc. are compromised.

But, how do we tackle this?
Given the necessity of ensuring that the functionality and attributes are not compromised, we have to retest the functional/non-functional aspects constantly resulting in repetitive testing.

To do this well, we typically adopt:
1. Massive regression test automation to re-test thoroughly.
2. Deep product knowledge to assess the potential impact of changes and do focused regression.

So, what is the challenge?
1. Well, automation is great, but it requires continual investment to build and maintain.
2. In-depth product knowledge is limited to a few people, and they are always in high demand!

Hmmm, how can we do better?
Instead of focusing only on how to do more, faster, could we do less, in a smarter way? Let us ask some questions to figure this out:

1. Are you doing too much regression?
Could we do a smarter impact analysis? Could there be a logical approach to analysing change impacts without relying solely on deep product knowledge? Yes, one of HBT’s techniques, “Fault propagation analysis”, can be useful here. The technique, in a nutshell, asks: “Given that an entity has been modified and is linked to other entities, what types of defects can actually propagate and affect the linked entities?”
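One way this question could be mechanised is to record, for each link between entities, which defect types can actually cross it, and then trace only those. The entities, defect types and `propagation` helper below are an illustrative sketch under that assumption (and assume an acyclic link graph), not HBT’s actual implementation.

```python
# Each link records which defect types can cross it.
# Names and defect types are illustrative.
links = [
    # (modified entity, linked entity, defect types that can propagate)
    ("parser", "validator", {"bad-format", "null-field"}),
    ("validator", "persister", {"null-field"}),
    ("parser", "logger", set()),  # logger only reads; nothing propagates
]

def propagation(changed_entity, injected_types):
    """Which linked entities can be hit, and by which defect types?
    Assumes the link graph is acyclic."""
    hits = {}
    frontier = {changed_entity: set(injected_types)}
    while frontier:
        next_frontier = {}
        for src, types in frontier.items():
            for a, b, crossable in links:
                if a == src:
                    reaching = types & crossable
                    if reaching:
                        hits.setdefault(b, set()).update(reaching)
                        next_frontier.setdefault(b, set()).update(reaching)
        frontier = next_frontier
    return hits

print(propagation("parser", {"bad-format", "null-field"}))
```

A change in the parser is worth regressing in the validator and persister, but not in the logger: the analysis tells you what *not* to regress, without needing the one person who knows the whole product.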

2. Is your defect yield from regression good enough?
Software hardens with time, i.e. becomes fit. This implies that the same test cases yield fewer defects later, i.e. the test case yield drops. So the lingering question is “should we be executing these at all?”. Just as living beings develop resistance to certain diseases over time, software can be thought of as becoming ‘resistant to test cases’. In HBT, we call this ‘test case immunity’ and use it to logically ascertain which test cases may be dropped, and therefore do less.
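The yield analysis behind this can be sketched very simply: track defects found per test case per cycle, and flag as ‘immune’ any test case that has yielded nothing for several consecutive cycles. The test case names, history and threshold below are illustrative assumptions.

```python
# Sketch of "test case immunity": a test case that has stopped
# yielding defects for several consecutive cycles becomes a
# candidate to drop. History and thresholds are illustrative.

defect_history = {
    # test case -> defects found in each of the last 6 regression cycles
    "login_bad_password": [2, 1, 0, 0, 0, 0],
    "checkout_discount":  [3, 2, 2, 1, 1, 0],
    "report_export":      [0, 0, 0, 0, 0, 0],
}

def immune(history, quiet_cycles=4):
    """A test case is 'immune' if the last N cycles yielded nothing."""
    return len(history) >= quiet_cycles and all(d == 0 for d in history[-quiet_cycles:])

droppable = sorted(tc for tc, h in defect_history.items() if immune(h))
print(droppable)
```

In practice one would likely temper this with risk: a zero-yield test covering a critical flow may still earn its keep, so the output is a candidate list for review rather than an automatic deletion.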

3. Are your test scenarios fit enough for automation?
If the software is volatile, automation is even more volatile! Changes to software necessitate keeping the automation in sync, so frameworks are used to enable rapid modification. That is great, but did you know that the structure, i.e. the architecture, of the test cases also matters? It is not just about frameworks and great code; it is about how well the test cases are organised. In HBT this is assessed using a technique called “Levelisation analysis”, which ascertains whether the test cases are organised into well-formed levels, enabling rapid automation with rapid modifiability.
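As a rough illustration of what ‘well-formed levels’ could mean, one can assign each test case a level from its dependencies and reject cyclic structures, since a cycle means a script change can ripple arbitrarily. The test case names and the `levelise` helper are hypothetical, not HBT’s actual analysis.

```python
# Sketch of "levelisation": assign each test case a level based on
# what it builds on; a cycle means the levels are not well-formed.
# Test case names are illustrative.

deps = {
    # test case -> test cases (or fixtures) it builds on
    "login": [],
    "create_cart": ["login"],
    "add_item": ["create_cart"],
    "checkout": ["add_item", "login"],
}

def levelise(deps):
    levels, visiting = {}, set()

    def level_of(tc):
        if tc in levels:
            return levels[tc]
        if tc in visiting:
            raise ValueError(f"cycle involving {tc}: levels are not well-formed")
        visiting.add(tc)
        # a test case sits one level above the deepest thing it depends on
        levels[tc] = 1 + max((level_of(d) for d in deps[tc]), default=0)
        visiting.discard(tc)
        return levels[tc]

    for tc in deps:
        level_of(tc)
    return levels

print(levelise(deps))
```

With well-formed levels, a change to `login` tells you exactly which higher-level scripts need attention, keeping automation maintenance proportional to the change.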

In closing : SMART REGRESSION
In summary, the three questions were all about “How can we do less to do more?” Do less regression. Do less automation maintenance. And thereby perform smart regression to progress further.

Smart regression complements the act of going faster via automation by enabling one to do less.



Frictionless development testing

Very often in discussions with senior technical folks, the topic of developer testing and early-stage quality pops up. It is always about ‘we do not do good enough developer testing’ and how that has increased post-release support. They are keen to know ‘how to make developers test better and more diligently’, and outline their solution approach as automation and stricter process. The philosophy is always “more early testing”, which has typically been hard to implement.

Should we really test more? It is necessary to dig into the basics now. Let me share my view of what they probably mean by testing. My understanding is that they see testing as dynamic evaluation to ascertain correctness: coming up with test cases that will be executed by a tool or a human, and checking correctness by examining the results. Good developer testing, therefore, is always about designing test cases and executing them.

And that is where the problem is. Already under immense time pressure, the developer faces a serious crunch to design test cases and execute them (possibly after automating them). And when it does happen, they all pass! (Not that you would know if they fail!) The reason I have observed for the ‘high pass rate’ is that the test cases are most often conformance-oriented. When non-conforming data hits the system, oops happens!

So should we continue to test harder? What if we changed our views? (1) Testing need not be limited to dynamic evaluation; it can also be done via static proving, i.e. ascertaining correctness not only by executing test cases but by thinking through what can happen with the data sets. (2) Instead of commencing evaluation with conformance test cases, we start in reverse, with non-conforming data sets first: prove that the system rejects bad inputs before we evaluate for conformance correctness. (3) Instead of designing test cases for every entity, we use a potential defect type (PDT) catalog as the base to check for non-conformances first, preferably via static proving, devising entity-specific positive data sets for conformance correctness.

So how do these views shift us towards better developer testing at an early stage? The biggest shift is about doing less by being frictionless: enabling smooth evaluation by using the PDT catalog to reduce design effort, applying static proving to think better and reduce/prevent defects rather than executing by rote, and focusing on issues (i.e. PDTs) first, complementing the typical ‘constructive mentality’ that we as developers have. Rather than doing more with stricter process, let us loosen and simplify, to enable ‘frictionless evaluation’.

Think & prove vs Execute & evaluate
Picking up a PDT from the catalog and applying a mental model of the entity’s behaviour can enable us to rapidly find potential holes in the implementation. To make this idea easy to apply, let us group the PDTs into three levels: the first deals with incorrect inputs, the second with incorrect ways of accepting those inputs, and the last with potentially incorrect internal aspects related to code structure and the external environment. Let the act of proving robustness against non-conformances proceed from level 1 through 3, thinking through (1) what may happen when incorrect inputs are injected, (2) how the interface handles an incorrect order of, or relationship between, these inputs, and finally (3) how the entity handles incorrect internal aspects of structure, such as resource allocation, exception handling, multi-way exits, timing/synchronisation, or a misconfigured/starved external environment.
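The three levels above can be sketched as a small data structure that the developer walks, level by level, recording which PDTs have been ‘proven’ against the entity. The catalog entries and the `prove` helper are illustrative examples, not the actual HBT catalog.

```python
# Illustrative three-level PDT catalog, walked level by level as a
# static "think & prove" aid. Entries are examples only.

PDT_CATALOG = {
    1: ["wrong type", "out-of-range value", "empty/null input"],
    2: ["inputs in wrong order", "missing mandatory field", "duplicate input"],
    3: ["resource not released", "unhandled exception",
        "timing/synchronisation", "misconfigured environment"],
}

def prove(entity, verdicts):
    """Walk levels 1..3; stop at the first level with an unproven PDT.
    verdicts maps PDT -> True once the developer has reasoned it out."""
    for level in sorted(PDT_CATALOG):
        unproven = [p for p in PDT_CATALOG[level] if not verdicts.get(p, False)]
        if unproven:
            return level, unproven
    return None, []

level, gaps = prove("parse_order", {
    "wrong type": True, "out-of-range value": True, "empty/null input": True,
    "inputs in wrong order": True,
})
print(level, gaps)
```

The output points the developer at the lowest level still carrying risk, so the mental proving effort stays ordered from inputs outward to environment, as the article describes.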

Non-conformance first
Recently a senior executive stated that his organisation’s policy for developer testing was based on ‘minimal acceptance’, i.e. ascertaining that the entity worked with the right inputs. As a result the test cases were mostly ‘positive’ and would pass. Post-release was a pain, as failures due to basic non-conforming inputs left the customer very irritated. The reason cited for the ‘minimal acceptance’ criterion was the lack of time to test corner cases. Here the evaluation was done primarily dynamically, i.e. by executing test cases. When we get into ‘think & prove’ mode, it makes far better sense to commence by thinking through how the entity will handle non-conformance, looking at each error injection and its potential fault propagation. As developers, we are familiar with the code implementation, so running the mental model with a PDT is far easier, and it provides a good balance to code construction.

PDTs instead of test cases
Commencing with non-conformance is best done by using patterns of non-conformance, and that is what a PDT is all about. It is not an exact instantiation of incorrect values at any of the levels (1-3); it is rather a set of values satisfying a condition violation. This kind of thinking lends itself to generalisation, and therefore simplifies test design, reducing friction and optimising time.
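To illustrate ‘a set of values satisfying a condition violation’, a single out-of-range PDT can generate non-conforming data sets for any numeric field once its bounds are declared; one pattern, many entities. The field names, bounds and generator function are hypothetical.

```python
# A PDT is a pattern, not a single bad value: here one "out-of-range"
# PDT generates non-conforming data sets for any bounded numeric
# field. Field names and bounds are illustrative.

def out_of_range_pdt(lo, hi):
    """Values that violate the condition lo <= x <= hi:
    just outside each bound, and far outside each bound."""
    return [lo - 1, hi + 1, lo - 1000, hi + 1000]

fields = {"age": (0, 130), "quantity": (1, 99)}

for name, (lo, hi) in fields.items():
    print(name, out_of_range_pdt(lo, hi))
```

Because the generator works from the violated condition rather than a hand-picked value, the same PDT covers every field of that shape, which is exactly the generalisation that reduces design effort.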

To summarise, the goal was to enable building high-quality early-stage entity code, and we approached this by being ‘frictionless’. By changing our views and doing less. By static evaluation rather than only dynamic evaluation. By focusing on robustness first and then conformance. By using a PDT catalog rather than specific test cases.

Once the entity under development has gone through levels 1-3 quickly, it is necessary to come up with specific conformance test cases and dynamically evaluate them, if the entity is non-trivial. If the entity is not a new one, but one being modified, then think through its interactions with other entities and how these may enable propagation of PDTs, before regressing.

So if you want to improve early-stage quality, smoothen the surface for developer testing. Make it frictionless. Do less and let the entities shine. It is not about doing more testing; it is about being more sensitised and doing less. Let evaluation by a developer weave in naturally, rather than be another burdensome task.

What are your views on this?


Horse Blinders & Fish Eye vision

In a system, which is a collection of various processes, templates form an integral element that aids implementation. Templates provide a framework to capture information in a structured manner, very necessary in systems that require rigorous compliance.

Why do horses used for pulling wagons wear blinders? Horses that pull wagons and carriages wear blinders to prevent them from becoming distracted or panicked by what they see behind the wagon. They keep the horse’s eyes focused on what is ahead, rather than what is at the side or behind. (Courtesy: http://bit.ly/2gkjGeA & http://bit.ly/2fc5iZB)

Templates are like “horse blinders”. They enable sharp focus on a narrow field, purposefully restricting peripheral vision and enabling strict compliance.

In a creative environment where a 360 degree vision is required, templates are a bad choice. What is needed is a “workspace”, that provides a good environment which can be adapted flexibly for different needs.

A workspace, like the fish eye, helps you see the complete big picture, enabling you to connect the various individual dots.

It allows you to see the full 360-degree picture, enabling you to proceed in a direction of your choice and change course as needed to adapt. A well-thought-out workspace provides an environment with high degrees of freedom, while ensuring that you are not adrift.

Rather than having ‘boxes’ to collect information, it provides you with ‘spaces’ to collect information as necessary without restricting you to a specific order, thereby enabling you to connect the dots to see the full picture.

Templates like horse blinders enable you to focus on ‘DOING WORK’, while Workspaces akin to Fish Eye help you to ‘THINK BETTER’.

The previous article highlighted the importance of ‘visual thinking’ to see better in the “mind’s eye”, this one continues on the same thread allowing you to “see better with the real eye” !

Immersive Session Testing (IST) is a style of testing that exploits the logical left brain with the creative right, enabling you to immerse deeply and test in short sessions. Powered by HBT (Hypothesis Based Testing) that provides the scientific rigour and “Workspaces” equipping you with the creative fluidity, it enables you to immerse, think logically, write less, do more with a sharp focus on outcome.

Reconnaissance workspace in IST helps you to see the users, their use cases, system features & attributes, environment, behaviour conditions, configuration settings, access control enabling you to see the complete big picture

Marketing blurb: If you are keen on adopting IST, a smart, scientific, rapid & modern approach to software testing, check out the one-day experiential workshop on Dec 9, 2016 by clicking here.


“Visual thinking” – Test smarter & faster

It is interesting that, in the current technology/tool-infested world, we have realised that the human mind is the most powerful tool after all, and that engaging it fully can solve the most complicated problems rapidly.

One of the key ingredients of an engaged thinker is “thinking visually”: to clearly see the problem, the solution, or the gaps.

Design Thinking relies on sketching/drawing skills to imagine better ideas, figure out things, explain and give instructions. Daniel Ling(1) in his book “Completing design thinking guide for successful professionals” outlines this as one of the five mindsets – “Believe you can draw”.

Sunni Brown(2) in her book “The Doodle revolution” states “doodling is deep thinking in disguise – a simple, accessible and dynamite tool for innovating and solving the stickiest of problems“ by enabling a shift from habitual thinking pattern to cognitive breakthroughs.

David Sibbet(3) a world leader in graphic facilitation and visual thinking for groups in his brilliant book “Visual Meetings” outlines three tools for effective meetings to transform group productivity : (a) ‘Draw’ to communicate visually (b) ‘Sticky notes’ to record little chunks of information and create storyboard (c) ‘Idea mapping’ which are visual metaphors embedded in graphic templates and worksheets to think visually.

Dan Roam(4) in “Show and Tell” states that the three steps to create an extraordinary presentation are (a) Tell the truth (b) Tell it with a story and (c) Tell the story with pictures. The book ‘written’ beautifully in pictures entirely is about ‘how to understand audience, build a clear storyline, create effective visuals and channel your fear into fun’.

Jake Knapp(5) in “Sprint – How to solve big problems and test new ideas in just five days” outlines a five-day problem-solving process that relies on SKETCHING on Day 2. He says that “we are asking you to sketch because we are convinced it’s the fastest and easiest way to transform abstract ideas into concrete solutions. Sketching allows every person to develop those concrete ideas while working alone”.

It is interesting to note that visual thinking has taken centre stage now with emphasis on sketching, drawing as a means to unleashing the power of the mind.

As a keen practitioner of software testing, I am amazed at how people get swept into the thinking that automation is the solution to ensuring software quality. Indeed, tools and automated testing practices enable rapid, continuous evaluation, but there is no substitute for the power of smart thinking.

Testing is a funny business where one has to be clairvoyant to see the unknown, to perceive what is missing and also assess comprehensively what is present ‘guaranteeing’ that nothing is amiss.

To be able to do this very well, good visualisation is key. To see with stark clarity what is present, needed and missed out.

Immersive Session Testing (IST) is a style of testing that exploits the logical left brain with the creative right, enabling you to immerse deeply and test in short sessions. Powered by HBT (Hypothesis Based Testing), which provides the scientific rigour, and “Workspaces”, equipping you with the creative fluidity, it enables one to immerse, think logically, write less, and do more with a sharp focus on outcome.

Workspace is a visual aid that provides an environment to analyse & understand, design & evaluate using mind maps, ‘stick-its’, doodles to enable visual thinking and “see in your mind” with stunning clarity the users, flows, features, attributes, environment, behaviour conditions…

The power of visual thinking in IST enables you to see the big picture of the system and its full context of end users, use cases, environment and attributes; visualise the end user’s usage to empathise with them; get under the hood to extract conditions to model behaviour and design test cases; and finally visualise the quality of the delivered system.

IST enables old fashioned intelligent testing, by equipping you with modern thinking tools and paradigms which when combined with technology/tools makes testing smart, fun, fast, rich and value adding.

Have a great day.

Marketing blurb: If you are keen on adopting IST, a smart, scientific, rapid & modern approach to software testing, check out the one-day experiential workshop on Dec 9, 2016 by clicking here.

References

(1) Daniel Ling “Completing design thinking guide for successful professionals”, CreateSpace Independent Publishing Platform, 2015.

(2) Sunni Brown, The Doodle Revolution: Unlock the Power to Think Differently, Portfolio, 2014.

(3) David Sibbet, “Visual Meetings: How Graphics, Sticky Notes and Idea Mapping Can Transform Group Productivity”, Wiley India Private Limited, 2012.

(4) Dan Roam, “Show and Tell – How everybody can make extraordinary presentations” Penguin, 2014.

(5) Jake Knapp, “Sprint – How to solve big problems and test new ideas in just five days”, Bantam Press, 2016.


The elements of good testing

We want to test rapidly, deal with incomplete information yet stay clear, absorb vast amounts of information, process and analyse well, be a seeker and question deeply to test very well.

Good critical thinking skills are the order of the day. So how do we accomplish this?

Hmmm, it seems like the elements to good testing may be:

How we do it (the DOING), how well we do it (the problem-solving METHOD), and the AIDS we use to assist our DOING. In addition to this physical stuff, there is the non-technical, virtual stuff: how well we are engaged in the act so as to be in a flow, thinking creatively in addition to deductively/logically, and the role of language styles and diagrams/sketches in unleashing right-brain creativity.

Let us look at “the DOING” – “how we perform the various activities”.

Processes at a higher level (org/division) list who does what, and when, to ensure clarity and consistency of work. Templates provide a simple boilerplate for seeking and recording information, aided by checklists of activities, to enable a consistent way of doing. The key benefit of processes is consistency, whilst effectiveness depends on the individual’s practice of doing: choosing the right techniques and using them correctly, making intelligent choices by applying good principles, and relying on one’s own or others’ experience, available as guidelines/heuristics, to choose the most appropriate path. It is the winning combination of PROCESS & PRACTICE that ensures that the outcomes of activities are consistent, effective and efficient. This is “the DOING” element.

Not only is it necessary to have clarity on “what to do”, it is also necessary to do it well. Be it any activity in the lifecycle, like reading/exploring to understand, or coming up with scenarios/questions, it is the “how we do it”, i.e. the “problem-solving METHOD”, that matters for effective testing.

The methods of problem solving are: techniques, which are person-independent; heuristics/guidelines based on experience; and decision-making principles for making choices. A good practice, which is largely individual, is the backbone of effective implementation of the various activities we talked about.

To ensure good implementation of activities, or of the method used to solve problems within an activity, “the AIDS” like templates, checklists and guidelines are often used.

Note that process aids like templates and checklists help with compliance, ensuring that we do not miss any fine-grained information or activity, though sometimes quelling the creative side of problem solving. In situations that require great diligence, checklists are very effective; there is a very interesting account in the book “Differential Diagnosis” of how checklists were instrumental in cutting down infections in hospital ICUs, enabling faster recovery and saving a great deal of money. “The AIDS” need not be mere drudgery assistants; they can also be intelligent ones that extract better information or enable intelligent processing.

Beyond the DOING, the METHOD of doing and the AIDS lies interesting “human stuff” that, as individuals, we can exploit to do significantly better work. The first is ENGAGEMENT: engaging all the senses, especially SEE/TOUCH/HEAR, immersing oneself to be in a flow, unleashing the creative senses, and thereby staying sharp and doing far better work.

The CATALYST enables far better engagement of the senses to be able to unleash the sharp deductive/logical thought process powered by the left brain and the creative energy of the right brain.

It is not just about cold, plain logic, but about thinking non-linearly and creatively, about enjoying the journey rather than just devising a detailed plan analytically. The “thinking APPROACH” of exploiting the combination of left and right brain paves the way for breakthrough thinking, for coming up with variant approaches, for not treading the same path, for staying curious.

The harmonious engagement of left and right brain, of being logical yet creative, of planning meticulously how to do things yet course-correcting and adjusting rapidly, requires appropriate stimuli, which is where “the representation STYLE” comes in.

It is about stimulating the various senses via crisp text, non-linear representations like mind maps, creative sketching/diagrams, and finally writing styles that are imperative, declarative or interrogative, to engage, sharpen and focus, yet enable divergence.

This article is not pedagogic or prescriptive, but an attempt to weave the various elements into an interesting mosaic that helps one understand the beautiful interplay of the DOING activities, the METHOD of doing, the AIDS, the CATALYST that aids doing, the thinking APPROACH and the representation STYLE, ‘jargonised’ as process, practice, templates, checklists, guidelines, techniques, heuristics, principles, good practices, creative/ad-hoc, disciplined, mind map, sketch, deep knowledge and experience.

It is the unique combination of these, chosen for the context, that fosters testing that is effective, yet efficient and consistent.

Cheers. Have a great day.

These elements of good testing have formed the backbone of “Immersive Session Testing” (IST).

Immersive Session Testing is a style of testing that exploits the logical left brain with the creative right, enabling you to immerse deeply and test in short sessions. Powered by HBT (Hypothesis Based Testing), which provides the scientific rigour, and aided by “Workspaces” that provide the creative fluidity, it enables one to immerse, think logically, write less, and do more with a sharp focus on outcome.

NOTE : If you are curious about IST, click here for details of the one-day experiential workshop on Dec 9, 2016.