
LEAN: It is not about doing more, it is about doing less

So, what should really be happening in an Agile context?

Lean thinking is what inspired the Agile movement. Lean is about not producing waste in the first place. It is about doing things ‘clean’ from the start, so that waste is ideally never created. In the software context, waste is bugs, and in the early stages these are ‘unit bugs’. Since our focus in Agile is to find these earlier and to ensure that they never reappear whenever we modify code, we resort to a high degree of automation. We therefore build a large body of automated test cases at the lower levels so that we can execute them continually. This is great, but should we not focus on adopting a practice that, in essence, prevents these issues and lessens the need for a large number of unit tests to uncover them?

It is not about doing more, it is about doing less

When we find issues in the product/app, especially those that could have been caught earlier, we respond with more rigorous dev testing and an extreme focus on automation. Yes, that seems logical. But wait a minute: for a developer already busy writing code, is this the right approach? Given that dev testing is largely about issues at levels L1 through L4, could we not focus on getting these right, or assess them statically via a smart checklist?

Great quality early-stage code is not about doing more testing; it really is about doing less testing, enabled by a sharper focus on ‘what can go wrong’ and ‘have you considered this’.

The ebook outlines in detail how to do DevTest in the “LEANest” way, clearly stating which issues a developer has to go after, and describes the SmartDevChecklist for doing this in the most LEAN way.


Poor quality code is due to compromised “Unit Testing”

The problem at large

The quality of early-stage code is a concern for many of the engineering managers I talk to. In my numerous consulting assignments, I have noticed that many issues found by QA folks are ones that do not require the expertise of a specialist tester; this compromises the effectiveness of ‘system test’ and results in avoidable customer issues.

Great quality code is not the result of intense system testing; it is the result of well-structured filtration of issues from the early stages. A compromised ‘unit test’ puts unnecessary strain on the QA folks, who are compelled to go after these issues at the expense of system test.

Developers, on the other hand, do not deliberately write bad code; it is just that accidents happen. Accidents seem to happen because unit testing is pushed through brute force without being simple and practical, and developers already short of time are unable to adhere to a heavyweight process. Another fallacy seems to be the over-dependence on automated unit tests as the saviour, without paying attention to the test cases themselves. The incorrect notion of unit testing as being only white-box oriented, with a skew towards code coverage, also results in ineffective, introverted tests. Lastly, the sheer emphasis on dynamic testing as the only method to uncover defects is overwhelming, when easier static methods of uncovering issues could have been employed.

Business/economic impact

The impact of issues leaking from the early stages is not merely irritating; it is serious. Simple early-stage issues reported by customers, like poor validation of inputs, result in a significant drop in customer confidence. When QA folks focus on these issues, their job of system validation suffers, resulting in field issues related to end-to-end flows and, sometimes, compromised attributes.

This misdirected focus of specialist QA also leaves insufficient time for the things that make system test more effective and efficient: automating end-to-end flows, focusing on non-functional requirements, revising the test strategy/approach, and sharpening it with the knowledge gained every cycle.

Yes, this age-old problem is boring. Don’t force your developers into more unit testing to solve it. Ask them to do it smartly by doing less. If you are keen to know how, check out the e-book listed below.


Regression

Is regression hindering your progression?

“It took a few hours to incorporate the change but took a few days to test, upsetting the customer. Why? Well, we found out that our QA team was doing too much regression. Wish we could be smarter” – Engineering Manager of a mid-sized IT company.

Have you ever felt this way? Have you wished you could do less regression and release faster?

In the current world of rapid development, software is constantly updated with new features, incremental additions and bug fixes. While features (new & incremental) are the focus for revenue generation and market expansion, bug fixes are necessary to ensure that customers stay.

While on the path of progression towards revenue enhancement, the challenge is: “Did I break any existing features that were working well?” That may necessitate a regression test.

Note that as the product grows, so does regression, increasing cost and slowing down releases.

Regress means ‘go backwards’, and in this context it means ‘check out prior quality risks to ensure that they are still under control’. This implies that the product is retested from both the functionality and attribute aspects to ensure that functionalities and attributes like performance, security, etc. are not compromised.

So, how can one regress smartly?

* Figure out how much not to regress by doing a smarter impact analysis using a scientific approach to understand fault propagation due to change.
* Figure out how much not to regress by analysing defect yields over time to understand what parts of the system have been hardened.
* Well, automation is an obvious choice; ensure that the scenarios are “fit enough for automation” so that you don’t end up spending much effort keeping the scripts in sync with every change.

Change, as we all know, is inevitable, and it does cause a domino effect. The smartness lies in validating only those parts that have the potential for a domino effect, thereby doing less, and exploiting automation to do it faster.

Here is the link to TWO aids that can enable your QA team to regress smartly. Oh, ask your QA team to read this article before they use them.

tools for smart regression

Ideas to regress smartly

The context

In the current world of rapid development, software is constantly updated with new features, incremental additions and bug fixes. While features (new & incremental) are the focus for revenue generation and market expansion, bug fixes are necessary to ensure that customers stay.

While on the path of progression towards revenue enhancement, the challenge is: “Did I break any existing features that were working well?” That may necessitate a regression test.

Note that as the product grows, so does regression, increasing cost and slowing down releases.

Regression

Regress means ‘go backwards’; in this context it means ‘check out prior quality risks to ensure that they are still under control’. The product is retested from both the functionality and attribute aspects to ensure that functionalities and attributes like performance, security, etc. are not compromised.

But, how do we tackle this?
Given the necessity of ensuring that the functionality and attributes are not compromised, we have to retest the functional/non-functional aspects constantly, resulting in repetitive testing.

To do this well, we typically adopt:
1. Massive regression test automation to re-test thoroughly.
2. Deep product knowledge to assess the potential impact of changes and do focused regression.

So, what is the challenge?

1. Well, automation is great, but it requires continual investment to build and maintain.
2. In-depth product knowledge is also limited to a few people, and they are always in high demand!

Hmmm, how can we do better?
Instead of focusing only on how to do more and faster, could we do less, in a smarter way? Let us ask some questions to figure this out:

1. Are you doing too much regression?

Could we do a smarter impact analysis? Could there be a logical approach to analysing change impacts without relying only on deep product knowledge? Yes, one of HyBIST’s techniques, “Fault propagation analysis”, could be useful here. The technique, in a nutshell, asks: “Given that an entity has been modified and is linked to other entities, what types of defects can indeed propagate and affect the linked entities?”
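
To make this concrete, here is a minimal sketch (in Python) of what such an impact analysis could look like. The entity names, the dependency map and the propagation rules below are illustrative assumptions, not part of HyBIST itself; the point is only to show that fault propagation can be reasoned about mechanically rather than relying solely on deep product knowledge.

```python
# Illustrative sketch of change-impact analysis in the spirit of
# "fault propagation analysis": given a modified entity and a
# dependency graph, list which linked entities a defect type could
# reach. Entity names and rules here are hypothetical examples.

from collections import deque

# Who depends on whom: consumers of each entity.
dependents = {
    "PricingService": ["CheckoutFlow", "InvoiceGenerator"],
    "CheckoutFlow": ["OrderHistory"],
    "InvoiceGenerator": [],
    "OrderHistory": [],
}

# Which defect types can cross an interface boundary at all.
# (A purely internal defect, e.g. a logging typo, does not propagate.)
propagates_across_interface = {
    "wrong_computation": True,
    "weak_input_validation": True,
    "log_message_typo": False,
}

def impacted_entities(changed_entity, defect_type):
    """Breadth-first walk of dependents, following only defect types
    that can actually cross an interface."""
    if not propagates_across_interface.get(defect_type, True):
        return set()
    impacted, queue = set(), deque([changed_entity])
    while queue:
        current = queue.popleft()
        for consumer in dependents.get(current, []):
            if consumer not in impacted:
                impacted.add(consumer)
                queue.append(consumer)
    return impacted

print(impacted_entities("PricingService", "wrong_computation"))
# {'CheckoutFlow', 'InvoiceGenerator', 'OrderHistory'}
print(impacted_entities("PricingService", "log_message_typo"))
# set() -> no regression needed for that change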

2. Is your defect yield from regression good enough?

Software hardens with time, i.e. it becomes fit. This implies that the same test cases, executed later, yield fewer defects, i.e. the test case yield drops. So the lingering question is “should we be executing these at all?”. Just like living beings who develop resistance to certain diseases over time, software too can be thought of as becoming ‘resistant to test cases’ with time. In HyBIST we call this ‘Test case immunity’ and use it to logically ascertain which test cases may be dropped, and therefore do less.
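
A rough sketch of how one might operationalise this idea: track the defect yield of each regression test case per cycle, and flag those with no yield over a recent window as candidates to drop. The window size and the data below are assumptions for illustration only.

```python
# Illustrative sketch of a "test case immunity" check: flag test cases
# whose defect yield has been zero for the last few cycles as candidates
# to drop from the regression suite. Threshold and data are hypothetical.

defects_found_per_cycle = {
    "TC-101": [2, 1, 0, 0, 0, 0],   # nothing found in the last 4 cycles
    "TC-102": [0, 0, 1, 0, 0, 1],   # still yielding occasionally
    "TC-103": [3, 2, 1, 1, 0, 0],   # trending down, not yet "immune"
}

IMMUNITY_WINDOW = 4  # cycles with zero yield before we call it "immune"

def immune_test_cases(history, window=IMMUNITY_WINDOW):
    return [
        tc for tc, yields in history.items()
        if len(yields) >= window and all(y == 0 for y in yields[-window:])
    ]

print(immune_test_cases(defects_found_per_cycle))  # ['TC-101']
```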

3. Are your test scenarios fit enough for automation?

If the software is volatile, automation is more volatile! Changes to the software necessitate keeping the automation in sync. So, to enable rapid modification, frameworks are used. That is great, but do you know that the structure, i.e. the architecture, of the test cases also matters? It is not just about frameworks and great code; it is about how well the test cases are organised. In HyBIST this is done using a technique called “Levelisation analysis”, which ascertains whether the test cases are organised into well-formed levels, enabling rapid automation with rapid modifiability.
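
As an illustration of the underlying idea, the sketch below checks whether automation scripts respect a simple levelised structure, where higher-level flows may call lower-level helpers but never the reverse. The level assignments and names are hypothetical, and HyBIST’s actual levelisation analysis may well differ in detail.

```python
# Illustrative sketch of a "levelisation" check on test-automation code:
# scripts are assigned a level (1 = primitives, 2 = reusable helpers,
# 3 = end-to-end flows) and a well-formed suite only depends downwards.

level_of = {
    "click_button": 1, "fill_field": 1,
    "login_helper": 2, "add_to_cart_helper": 2,
    "checkout_flow_test": 3,
}

depends_on = {
    "login_helper": ["fill_field", "click_button"],
    "add_to_cart_helper": ["click_button"],
    "checkout_flow_test": ["login_helper", "add_to_cart_helper"],
    # A violation would be e.g. "fill_field": ["login_helper"]
}

def levelisation_violations(levels, deps):
    """Return (caller, callee) pairs where a level reaches sideways or upward."""
    return [
        (caller, callee)
        for caller, callees in deps.items()
        for callee in callees
        if levels[caller] <= levels[callee]
    ]

print(levelisation_violations(level_of, depends_on))  # [] -> well levelised
```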

In closing: SMART REGRESSION

In summary, the three questions were all about “How can we do less to do more”?  Do less regression. Do less automation maintenance. And therefore perform smart regression to progress further.

Smart regression complements the act of doing things faster via automation by enabling one to do less.



Frictionless development testing

Very often in discussions with senior technical folks, the topic of developer testing and early-stage quality pops up. It is always about ‘we do not do good enough developer testing’ and how that has increased post-release support. They are keen on knowing ‘how to make developers test better and more diligently’, and they outline their solution approach via automation and stricter process. The philosophy is always “more early testing”, which has typically been harder to implement.

Should we really test more? Well, it is necessary to dig into the basics now. Let me share my view of what they probably mean by testing. My understanding is that they see testing as dynamic evaluation to ascertain correctness: coming up with test cases that will be executed by a tool or a human, and checking correctness by examining the results. Therefore, good developer testing is always about designing test cases and executing them.

And that is where the problem is. Already under immense time pressure, the developer faces a serious time crunch to design test cases and execute them (possibly after automating them). When it does happen, they all pass! (Not that you would know if they failed!) The reason I have observed for this ‘high pass rate’ is that the test cases are most often conformance-oriented. When non-conforming data hits the system, ‘Oops’ happens!

So should we continue to test harder? What if we changed our views? (1) Testing need not be limited to dynamic evaluation; it could also be done via static proving, that is, ascertaining correctness not only by executing test cases but by thinking through what can happen with the data sets. (2) Instead of commencing evaluation with conformance test cases, we could start in reverse with non-conforming data sets first: prove that the system rejects bad inputs before we evaluate for conformance correctness. (3) Instead of designing test cases for every entity, we could use a potential defect type (PDT) catalog as the base to check for non-conformances first, preferably via static proving, and devise entity-specific positive data sets for conformance correctness.

So how do these views shift us towards better developer testing at an early stage? Well, the biggest shift is about doing less by being friction-less: enabling smooth evaluation by using the PDT catalog to reduce design effort, applying static proving to think better and reduce/prevent defects rather than executing rotely, and finally focusing on issues (i.e. PDTs) first, complementing the typical ‘constructive mentality’ that we as developers have. Rather than doing more with stricter process, let us loosen and simplify, to enable ‘friction-less evaluation’.

Think & prove vs Execute & evaluate

Picking up a PDT from the catalog and applying a mental model of the entity’s behaviour can enable us to rapidly find potential holes in the implementation. To make this idea easy to apply, let us group the PDTs into three levels. The first deals with incorrect inputs only, the second with incorrect ways of accepting these inputs, and the last with potentially incorrect internal aspects related to code structure and the external environment. Let the act of proving robustness to non-conformances proceed from level 1 through 3, by thinking through (1) what may happen when incorrect inputs are injected, (2) how the interface handles incorrect order/relationships among these inputs, and finally (3) how the entity handles (incorrect) internal aspects of structure like resource allocation, exception handling, multi-way exits, timing/synchronisation, or a misconfigured/starved external environment.
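
The sketch below shows one possible way of capturing such a catalog so that a developer can walk through it level by level for each entity. The individual PDT entries and the entity name are examples only, not an official or exhaustive catalog.

```python
# Illustrative sketch of a potential-defect-type (PDT) catalog grouped
# into the three levels described above. Entries are hypothetical examples.

PDT_CATALOG = {
    1: [  # incorrect inputs
        "empty or missing value",
        "value outside the allowed range",
        "wrong type/format (e.g. text where a number is expected)",
    ],
    2: [  # incorrect ways of accepting inputs
        "inputs given in the wrong order",
        "inter-field relationship violated (end date before start date)",
        "same input submitted twice",
    ],
    3: [  # incorrect internals / environment
        "resource acquired but never released",
        "exception path exits without cleanup",
        "timing/synchronisation assumption broken",
        "misconfigured or unavailable external dependency",
    ],
}

def review_entity(entity_name):
    """Walk the catalog level by level, prompting the 'think & prove' questions."""
    for level in sorted(PDT_CATALOG):
        for pdt in PDT_CATALOG[level]:
            print(f"[{entity_name}] Level {level}: how does it handle -> {pdt}?")

review_entity("parse_discount_code")
```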

Non-conformance first

Recently a senior executive stated that his organisation’s policy for developer testing was based on ‘minimal acceptance’, i.e. ascertaining whether the entity worked with the right inputs. As a result the test cases were mostly ‘positive’ and would pass. Post-release was a pain, as failures due to basic non-conforming inputs would leave the customer very irritated. The reason cited for the ‘minimal acceptance’ criterion was the lack of time to test corner cases. Here the evaluation was done primarily dynamically, i.e. by executing test cases. When we get into the ‘Think & Prove’ mode, it makes far better sense to commence by thinking through how the entity will handle non-conformance, looking at each error injection and its potential fault propagation. As developers, we are familiar with the code implementation, and therefore running the mental model with a PDT is far easier. This provides a good balance to code construction.

PDTs instead of test cases

Commencing with non-conformance is best done by using patterns of non-conformance, and this is what a PDT is all about. It is not an exact instantiation of incorrect values at any of the levels (1-3); it is rather a set of values satisfying a condition violation. This kind of thinking lends itself to generalisation and therefore simplifies test design, reducing friction and optimising time.
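
To illustrate the difference between a fixed test value and a pattern of non-conformance, here is a small sketch in which each PDT is a condition violation from which any number of concrete probes can be drawn. The field and its conformance condition are hypothetical.

```python
# Illustrative sketch of treating a PDT as a *pattern* of non-conformance
# rather than a fixed test value: the PDT is a condition violation, and
# any value satisfying the violation is a valid probe.

import random
import string

# Conformance condition for a hypothetical "age" field: integer in 18..99.
def age_violation_patterns():
    """Each entry is one PDT: a generator of values violating the condition."""
    return [
        lambda: None,                                                # missing value
        lambda: random.randint(100, 500),                            # above range
        lambda: random.randint(-50, 17),                             # below range
        lambda: "".join(random.choices(string.ascii_letters, k=5)),  # wrong type
    ]

def sample_non_conforming_values(n=2):
    """Draw a few concrete probe values from each violation pattern."""
    return [gen() for gen in age_violation_patterns() for _ in range(n)]

print(sample_non_conforming_values())
```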

To summarise, the goal was to enable building high-quality early-stage entity code, and we approached this by being ‘friction-less’: by changing our views and doing less; by static evaluation rather than resorting only to dynamic evaluation; by focusing on robustness first and then conformance; and by using the PDT catalog rather than specific test cases.

Once the entity under development has quickly gone through levels 1-3, it is necessary to come up with specific conformance test cases and then dynamically evaluate the entity, if it is non-trivial. If the entity under development is not a new one but one that is being modified, then think through its interactions with other entities and how these may enable propagation of PDTs, before regressing.

So if you want to improve early-stage quality, smoothen the surface for developer testing. Make it friction-less. Do less and let the entities shine. It is not about doing more testing; it is about being more sensitised and doing less. Let evaluation by a developer weave in naturally rather than be another burdensome task.

What are your views on this?


Horse Blinders & Fish Eye vision

In a system, which is a collection of various processes, templates form an integral element that aids implementation. Templates provide a framework to capture information in a structured manner. They are very necessary in systems that require rigorous compliance.

Why do horses used for pulling wagons wear blinders? Horses that pull wagons and carriages wear blinders to prevent them from becoming distracted or panicked by what they see behind the wagon. They keep the horse’s eyes focused on what is ahead, rather than what is to the side or behind. (Courtesy: http://bit.ly/2gkjGeA & http://bit.ly/2fc5iZB)

Templates are like “horse blinders”. They enable a sharp focus on a narrow field, purposefully restricting peripheral vision and enabling strict compliance.

In a creative environment where 360-degree vision is required, templates are a bad choice. What is needed is a “workspace” that provides a good environment which can be adapted flexibly for different needs.

A workspace, like the fish eye, helps you see the complete big picture, enabling you to connect the various individual dots.

It allows you to see the full 360-degree picture, enabling you to proceed in a direction of your choice and to change course as needed to adapt. A well-thought-out workspace provides you with an environment with high degrees of freedom while ensuring that you are not adrift.

Rather than having ‘boxes’ to collect information, it provides you with ‘spaces’ to collect information as necessary without restricting you to a specific order, thereby enabling you to connect the dots to see the full picture.

Templates like horse blinders enable you to focus on ‘DOING WORK’, while Workspaces akin to Fish Eye help you to ‘THINK BETTER’.

The previous article highlighted the importance of ‘visual thinking’ to see better in the “mind’s eye”; this one continues the same thread, allowing you to “see better with the real eye”!

Immersive Session Testing (IST) is a style of testing that exploits the logical left brain with the creative right, enabling you to immerse deeply and test in short sessions. Powered by HyBIST (Hypothesis-Based Immersive Session Testing) that provides the scientific rigour and “Workspaces” equipping you with the creative fluidity, it enables you to immerse, think logically, write less, do more with a sharp focus on outcome.

The Reconnaissance workspace in IST helps you to see the users, their use cases, system features & attributes, environment, behaviour conditions, configuration settings, and access control, enabling you to see the complete big picture.

Marketing blurb: If you are keen on adopting IST, a smart, scientific, rapid & modern approach to software testing, check out the one-day experiential workshop on Dec 9, 2016 by clicking here.


“Visual thinking” – Test smarter & faster

It is interesting that in the current technology/tool-infested world, we have realised that the human mind is the most powerful tool after all, and that engaging it fully can solve the most complicated problems rapidly.

One of the key ingredients of an engaged thinker is “thinking visually”: to clearly see the problem, the solution, or the gaps.

Design Thinking relies on sketching/drawing skills to imagine better ideas, figure out things, explain and give instructions. Daniel Ling(1) in his book “Completing design thinking guide for successful professionals” outlines this as one of the five mindsets – “Believe you can draw”.

Sunni Brown(2) in her book “The Doodle revolution” states “doodling is deep thinking in disguise – a simple, accessible and dynamite tool for innovating and solving the stickiest of problems“ by enabling a shift from habitual thinking pattern to cognitive breakthroughs.

David Sibbet(3), a world leader in graphic facilitation and visual thinking for groups, in his brilliant book “Visual Meetings” outlines three tools for effective meetings to transform group productivity: (a) ‘Draw’ to communicate visually, (b) ‘Sticky notes’ to record little chunks of information and create a storyboard, and (c) ‘Idea mapping’, visual metaphors embedded in graphic templates and worksheets to help think visually.

Dan Roam(4) in “Show and Tell” states that the three steps to creating an extraordinary presentation are (a) tell the truth, (b) tell it with a story, and (c) tell the story with pictures. The book, ‘written’ beautifully and entirely in pictures, is about ‘how to understand your audience, build a clear storyline, create effective visuals and channel your fear into fun’.

Jake Knapp(5) in “Sprint – How to solve big problems and test new ideas in just five days” outlines a five-day process for problem solving that relies on SKETCHING on Day 2. He says, “we are asking you to sketch because we are convinced it’s the fastest and easiest way to transform abstract ideas into concrete solutions. Sketching allows every person to develop those concrete ideas while working alone”.

It is interesting to note that visual thinking has taken centre stage now with emphasis on sketching, drawing as a means to unleashing the power of the mind.

As a keen practitioner of software testing, I am amazed at how people get swept up in the thinking that automation is the solution to ensuring software quality. Indeed, tools and automated testing practices enable rapid, continuous evaluation, but there is no substitute for the power of smart thinking.

Testing is a funny business where one has to be clairvoyant to see the unknown, to perceive what is missing and also assess comprehensively what is present ‘guaranteeing’ that nothing is amiss.

To be able to do this very well, good visualisation is key. To see with stark clarity what is present, needed and missed out.

Immersive Session Testing (IST) is a style of testing that exploits the logical left brain with the creative right, enabling you to immerse deeply and test in short sessions. Powered by HyBIST (Hypothesis-Based Immersive Session Testing), which provides the scientific rigour, and “Workspaces”, which equip you with the creative fluidity, it enables one to immerse, think logically, write less, and do more with a sharp focus on outcome.

Workspace is a visual aid that provides an environment to analyse & understand, design & evaluate using mind maps, ‘stick-its’, doodles to enable visual thinking and “see in your mind” with stunning clarity the users, flows, features, attributes, environment, behaviour conditions…

The power of visual thinking in IST enables you to see the big picture of the system and its full context of end users, use cases, environment and attributes; visualise the end user’s usage to empathise with them; get under the hood to extract conditions to model behaviour and design test cases; and finally visualise the quality of the delivered system.

IST enables old fashioned intelligent testing, by equipping you with modern thinking tools and paradigms which when combined with technology/tools makes testing smart, fun, fast, rich and value adding.

Have a great day.

Marketing blurb: If you are keen on adopting IST, a smart, scientific, rapid & modern approach to software testing, check out the one-day experiential workshop on Dec 9, 2016 by clicking here.

References

(1) Daniel Ling, “Completing Design Thinking Guide for Successful Professionals”, CreateSpace Independent Publishing Platform, 2015.

(2) Sunni Brown, “The Doodle Revolution: Unlock the Power to Think Differently”, Portfolio, 2014.

(3) David Sibbet, “Visual Meetings: How Graphics, Sticky Notes and Idea Mapping Can Transform Group Productivity”, Wiley India Private Limited, 2012.

(4) Dan Roam, “Show and Tell – How everybody can make extraordinary presentations” Penguin, 2014.

(5) Jake Knapp, “Sprint – How to solve big problems and test new ideas in just five days”, Bantam Press, 2016.

Requirements traceability is “Necessary but not sufficient”

When asked “how do you know that your test cases are adequate?”, the typical answer is that a Requirements Traceability Matrix (RTM) has been generated and that each requirement does indeed have test cases.

Is this logic strong enough? Unfortunately, NO! Why? Assume that each requirement had just one test case. This implies that we have a good RTM, i.e. each requirement has been covered. What we do not know is whether some of the requirements need additional test cases. So RTM is a necessary condition but NOT a sufficient one.

So, what does it take to be sufficient? If we had a clear notion of the types of defects that could affect the customer experience and then mapped these to test cases, we would have a Fault Traceability Matrix (FTM, as proposed by HyBIST). This allows us to be sure that our test cases can indeed detect the defects that would impact customer experience.

Note that in HyBIST, potential defect types are mapped to the Cleanliness Criteria derived earlier. Cleanliness criteria are those that have to be met to ensure that the customer has a good experience with the system.
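
A small sketch of the difference between the two checks, with hypothetical requirements, defect types and test case IDs: the RTM check can pass while the FTM-style check still exposes gaps.

```python
# Illustrative sketch contrasting RTM (necessary) with an FTM-style
# sufficiency check: every potential defect type behind a cleanliness
# criterion must map to at least one test case. Data is hypothetical.

requirement_to_tests = {
    "REQ-login": ["TC-1"],
    "REQ-checkout": ["TC-2", "TC-3"],
}

defect_type_to_tests = {
    "rejects malformed credentials": ["TC-1"],
    "locks account after repeated failures": [],   # gap!
    "checkout total computed correctly": ["TC-2"],
    "payment timeout handled gracefully": [],       # gap!
}

rtm_gaps = [req for req, tcs in requirement_to_tests.items() if not tcs]
ftm_gaps = [dt for dt, tcs in defect_type_to_tests.items() if not tcs]

print("RTM says we are covered:", not rtm_gaps)            # True (necessary)
print("FTM says we are NOT sufficient, gaps:", ftm_gaps)   # the real exposure
```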

This is covered in

Aesthetics in Software Testing

Software testing is typically seen as yet another job to be done in the software development life cycle, a clichéd activity consisting of planning, design/update of test cases, scripting and execution. Is there an element of beauty in software testing? Can we see the outputs of this activity as works of art?
Any activity that we do can be seen from the viewpoints of science, engineering and art. An engineering activity typically produces utilitarian artifacts, whereas an activity done with passion and creativity produces works of art, and this goes beyond the utility value. It takes a craftsman to produce objets d’art, while it takes a good engineer to produce objects with high utility value.
An object of beauty satisfies the five senses (sight, hearing, touch, smell and taste) and touches the heart whereas an object of utility satisfies the rational mind. So what are the elements of software testing that touch our heart?

Beauty in test cases
The typical view of test cases is one of utility: the ability to uncover defects. Is there beauty in test cases? Yes, I believe so. The element of beauty in test cases is in their architecture, “the form and structure”.
If the test cases were organized by quality levels, sub-ordered by items (features/modules), then segregated by types of test, ranked by importance/priority, sub-divided into conformance (+) and robustness (-), then classified by early (smoke)/late-stage evaluation, then tagged by evaluation frequency, linked by optimal execution order, and finally classified by execution mode (manual/automated), we would get a beautiful form and structure that not only does the job well (utility) but appeals to the sense of sight via a beautiful visualization of test cases. This is the architecture of test cases suggested by Hypothesis-Based Immersive Session Testing (HyBIST).
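
As a sketch of how this form and structure could be made explicit, the snippet below records those dimensions as test case metadata so the suite can be sliced, ordered and visualized along them. The field names and values are illustrative, not prescribed by HyBIST.

```python
# Illustrative sketch of the test-case "form and structure" described
# above, captured as explicit metadata. Field values are hypothetical.

from dataclasses import dataclass

@dataclass
class TestCase:
    case_id: str
    quality_level: str   # e.g. "L2 - input interface"
    item: str            # feature/module
    test_type: str       # functional, performance, ...
    priority: int        # 1 = highest
    polarity: str        # "conformance" or "robustness"
    stage: str           # "smoke" or "late"
    frequency: str       # "every build", "every release", ...
    mode: str            # "manual" or "automated"

suite = [
    TestCase("TC-1", "L2", "Login", "functional", 1, "robustness", "smoke", "every build", "automated"),
    TestCase("TC-2", "L4", "Checkout", "functional", 2, "conformance", "late", "every release", "manual"),
]

# One possible "optimal execution order": smoke tests first, then by priority.
ordered = sorted(suite, key=lambda tc: (tc.stage != "smoke", tc.priority))
for tc in ordered:
    print(tc.case_id, tc.item, tc.stage, tc.mode)
```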

Beauty in understanding
One of the major prerequisites for effective testing is an understanding of the product and the end user’s expectations. Viewed from a typical utility perspective, this translates into an understanding of the various features and intended attributes. To me, the aesthetics of understanding is the ability to visualize the software in terms of its internal structure, its environment and the way end users use it. It is about ultimately distilling the complexity into a simple singularity, to get the WOW moment where suddenly everything becomes very clear. It is about building a clear and simple map of the various types of users, the corresponding use cases and technical features, the usage profile, the underlying architecture and behavior flows, the myriad internal connections and the nuances of the deployment environment. It is about building a beautiful mental mind map of the element to be tested.

Beauty in the act of evaluation
Typically testing is seen as stimulating the software externally and making inferences of correctness from the observations. Are there possibly more beautiful ways to assess correctness? Is it possible to instrument probes that will self-assess correctness? Can we create observation points that allow us to peer deeper into the system? Viewing the act of evaluation from the aesthetic viewpoint can possibly result in more creative ways to assess the correctness of behavior.

Beauty in team composition
Is there aesthetics in the team structure/composition? Viewing the team as a collection of interesting people – specialists, architects, problem solvers, sloggers, firefighters, sticklers for discipline, geeks and so on – allows us to see the beauty in the power of the team. It is not just about a team that gets the job done; it is about the “RUSH” we get from a structure that makes us feel ‘gung-ho’, ‘can do anything’.

Beauty in reporting/metrics
As professionals, we collect various metrics to aid rational decision-making. This can indeed be a fairly mundane activity. Where is the aesthetics in this? If we can get extreme clarity on the aspects we want to observe, and this allows us to make good decisions quickly, then I think this is beautiful. This involves two aspects – what we collect and how we present it. Creative visualization metaphors can make the presentation of the aspects of quality beautiful. Look at the two pictures below; both of them represent the growth of a baby.

The one on the left shows the growth of a baby using a dreary engineering graph, whereas the one on the right shows the growing baby over time. Can we similarly show the growth of our baby (the software) using creative visualization metaphors?

Beauty in test artifacts
We generate various test artifacts – test plan, test cases, reports etc. What would make reading these a pleasure? Aesthetics here relates to layout/organization, formatting, grammar, spelling, clarity and terseness. These aesthetic aspects are probably expected by the consumers of these artifacts today.

Beauty in the process
The test process is the most clinical and boring aspect. Beauty is the last thing that comes to mind with respect to process. The aesthetic aspect, as I see it here, is about being disciplined yet creative, detailed yet nimble. To me it is about devising a process that flexes and evolves in complete harmony with the external environment. It is hard to describe this in words; it can only be seen in the mind’s eye!

Beauty in automation and test data
Finally, on the aspect of test tooling, it is about the beautiful code that we produce to test other code. The beauty here is in the simplicity of the code, ease of understanding, modifiability, architecture and cute workarounds to overcome tool/technology limitations.
Last but not the least, aesthetics in test data is about having meaningful and real-life data sets rather than gibberish.
Beauty, they say, lies in the eyes of the beholder. It takes a penchant for craftsmanship, driven by passion, to not just do a job but to produce objets d’art that appeal to the senses. As in any other discipline, this is very personal. As a community, let us go beyond the utilitarian aspects of our job and produce beautiful things.