Aesthetics in Software Testing

Software testing is typically seen as yet another job to be done in the software development life cycle, a clichéd activity consisting of planning, design/update of test cases, scripting and execution. Is there an element of beauty in software testing? Can we see the outputs of this activity as works of art?

Any activity that we do can be seen from the viewpoints of science, engineering and art. An engineering activity typically produces utilitarian artifacts, whereas an activity done with passion and creativity produces works of art; these go beyond mere utility value. It takes a craftsman to produce objets d'art, while it takes a good engineer to produce objects with high utility value.

Click here to read the full article.

This article was recently published in “Tea-time with Testers” – an ezine on software testing.

We would love to hear your thoughts / comments on the article.

Landscaping – A STEM core concept for understanding a system & expectations

Understanding a system is a non-linear process: we derive many interconnected questions, seek answers to each individually, and then connect the various answers.

Landscaping is a core concept in STEM (STAG Test Engineering Method), the defect detection technology that powers HBT (Hypothesis Based Testing). It lists the various aspects related to the system, customer, marketplace and technology, and forces one to connect these concepts. What emanates is an interesting web of information (aka a mind map) that enables you to come up with intelligent questions and thereby understand the system rapidly.

For example, connecting the marketplace with requirements and end-users, one can arrive at the question: “What are the various markets in which we plan to deploy our system, who are the various kinds of end users in these markets, and what does each one expect from the system?”
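As a toy illustration (the aspect names and connections below are our own hypothetical example, not part of STEM itself), a landscaping web can be sketched as a small graph whose edges suggest the questions to ask:

```python
# Hypothetical landscaping web: aspects of the system and their connections.
# Each connection between two aspects suggests a question worth asking.
aspects = {
    "marketplace": ["requirements", "end-users"],
    "end-users": ["requirements"],
    "requirements": ["technology"],
}

def questions_from(aspect):
    """Derive one question per connection out of the given aspect."""
    return [f"How does '{aspect}' relate to '{linked}'?"
            for linked in aspects.get(aspect, [])]

for q in questions_from("marketplace"):
    print(q)
```

Walking every aspect of the web this way generates the "interesting web of information" of interconnected questions described above.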

To quote a specific instance, a two-line requirement rapidly spawned 40+ questions that, when clarified, allowed us to understand the requirement in about an hour. In fact, this uncovered issues in the product being built, as certain aspects of the requirements had been completely missed by the developers in their implementation. It is to be noted that our ability to identify questions, and therefore understand, was not due to our domain skills; it was purely due to the application of the HBT methodology, powered by the defect detection technology (STEM), to the problem at hand.

You can also check out a presentation and an article on Landscaping here.

Rapidly understanding the usage profile

Understanding the likely rate and number of transactions on a real system is critical to ensure that the system is designed well and later sized and deployed well. A good understanding of the business domain is seen as a key enabler in arriving at the usage profile.

Operational profiling (a STEM core concept) is a scientific way to quickly arrive at a real-life profile of usage. A good understanding of this concept alleviates the problem of lacking deep domain knowledge when trying to understand the usage profile. This core concept consists of these key aspects:

  1. Mode – Represents a time period of usage e.g. End of month, where the usage patterns are distinctive and different.
  2. Key operations (features/requirements) used
  3. Types of end users associated with the key features/requirements
  4. Number of end users for each type of users
  5. Rate of arrival of transactions

In short, for a given mode, identify the end user types and their key operations, then the number of users of each type, and then the rate of arrival of transactions. Employing this core concept allows us to think better and ask specific questions to understand the marketplace and the usage profile in both typical and worst-case scenarios.
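The five aspects above can be captured in a simple data structure. This is only an illustrative sketch: the class names, the end-of-month example and all the numbers are our own assumptions, not part of STEM.

```python
from dataclasses import dataclass, field

@dataclass
class UserType:
    name: str
    count: int                   # number of end users of this type
    key_operations: list         # features/requirements they use
    arrival_rate_per_min: float  # transactions per user per minute

@dataclass
class Mode:
    name: str                    # e.g. "End of month"
    user_types: list = field(default_factory=list)

    def peak_load_per_min(self):
        # Worst case: every user of every type submits at their arrival rate.
        return sum(u.count * u.arrival_rate_per_min for u in self.user_types)

# Hypothetical end-of-month mode for a billing system
eom = Mode("End of month", [
    UserType("Accountant", count=20,
             key_operations=["Generate invoice"], arrival_rate_per_min=2.0),
    UserType("Manager", count=5,
             key_operations=["Approve invoice"], arrival_rate_per_min=0.5),
])
print(eom.peak_load_per_min())  # 20*2.0 + 5*0.5 = 42.5 transactions/min
```

Filling in one such record per mode forces the specific questions the text mentions: who the users are, how many of each, and how fast transactions arrive.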

This allows us to get a better understanding of the usage and helps in identifying business risks and deriving an effective strategy.

Ensuring testable non-functional requirements

Non-functional requirements are notoriously non-testable! By this, we mean that non-functional requirements are typically fuzzy and unclear. In its simplest form, “The system should be robust” is non-testable: it is not at all clear how to validate this!

Rather than identifying non-functional requirements and describing them, it is suggested that we look at each requirement, partition it into functional and non-functional aspects, and probe into the key attributes to be satisfied for the requirement. For each attribute, GQM (Goal-Question-Metric), a core concept of STEM, enables deriving metric(s) to ensure that the attribute is indeed testable. Later, the similar attributes across all the requirements can be aggregated to create the system-wide non-functional requirements.
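A hypothetical sketch of such a decomposition, using the fuzzy “robust” example: the particular question and metric below are illustrative assumptions of ours, not prescribed by STEM.

```python
from dataclasses import dataclass

@dataclass
class GQM:
    goal: str      # the quality attribute the requirement must satisfy
    question: str  # what, concretely, would demonstrate the goal?
    metric: str    # measurable criterion that makes the attribute testable

# Hypothetical decomposition of "the system should be robust"
# for one specific requirement (a login screen).
robustness = GQM(
    goal="Robustness of the login requirement",
    question="How does the system behave on malformed or out-of-range inputs?",
    metric="100% of invalid inputs rejected with a specific error; zero crashes",
)
print(robustness.metric)
```

The metric line is what makes the attribute testable: a tester can now pass invalid inputs and check the rejection behaviour, rather than puzzling over “robust”.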

In this manner, non-functional requirements become clearer and testable.

Effectiveness and Efficiency of test cases

Given that we have a set of test cases, we would like them to be effective. What does “effective” mean? Effectiveness of test cases is their ability to detect (or uncover) the defects that can affect the customer experience. So a clear understanding of what *types of defects* we are looking for, and a mapping of the test cases to these defect types, enables a scientific way of assessing effectiveness.

What is efficiency? It is ensuring that we execute the test cases in as short a time as possible, with optimal effort and no more. Understanding (1) the priority or business importance of test cases, (2) what test cases to execute in what part of the lifecycle, and (3) a clear segregation of test cases by types of tests and levels enables us to optimize testing and become efficient.
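A minimal sketch of assessing effectiveness as defect-type coverage, assuming hypothetical defect types and a hypothetical test-case mapping of our own invention:

```python
# Hypothetical defect types that could affect the customer experience
defect_types = {"incorrect computation", "data loss",
                "slow response", "crash on bad input"}

# Hypothetical mapping: test case -> defect types it is able to uncover
test_cases = {
    "TC-01": {"incorrect computation"},
    "TC-02": {"crash on bad input", "data loss"},
}

covered = set().union(*test_cases.values())
effectiveness = len(covered & defect_types) / len(defect_types)
print(f"{effectiveness:.0%}")  # 3 of 4 defect types covered -> 75%
```

The uncovered defect types (here, "slow response") point directly at where new test cases are needed, which is the scientific assessment the text describes.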

How many “negative test cases” should there be?

Before we answer the question, let us ensure that we have a common understanding of what positive and negative test cases are. In HBT, positive test cases are those whose input values are all valid, while a negative test case is one in which at least one of the test inputs (i.e. test data) is incorrect (i.e. out of specification).

The objective of positive test cases is “conformance”, while the objective of negative test cases is “robustness”. If a majority of test cases are positive, it implies that we are primarily interested in conformance, i.e. ensuring the system handles correct inputs well. This, we know, is not sufficient, as accidental incorrect inputs should not result in unexpected, possibly risky or dangerous, behaviour. Hence we need test cases that are indeed “negative”.

So, coming back to the question: what is a good distribution of positive and negative test cases? Any quick answer like 75% should not be trusted, as it has no basis. So how do we answer this question? Step back and look at the number of inputs for a given test case; the clue is there. For example, at a lower level of testing, where say we are validating a screen, the inputs may be many, as the screen may be densely populated. As we go up the testing levels, e.g. testing a feature that uses a few screens, the test data is not the various individual inputs on the screen but aggregate data (think of a record), and these may be fewer in number than at the earlier levels.

Having understood that the number of test data items (or inputs) at lower levels is far higher than at higher levels, it is only logical to conclude that the number of negative test cases will be correspondingly higher at lower levels. Now how many should there be? To answer this finally, without resorting to magic (!), let us illustrate with a simple example. If there are 5 inputs and each input has six possible values (3 positive, i.e. valid, and 3 negative, i.e. invalid), then using simple combinatorial math we can see that there will be (minimally) 3*5 = 15 negative test cases and (minimally) 5 positive test cases. In this case 15/(15+5) = 75% of the test cases are negative.
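The arithmetic in the example above can be checked with a short sketch, using the article's own numbers (5 inputs, 3 invalid values each, and 5 minimal positive test cases):

```python
def minimal_negative_tests(num_inputs, invalid_values_per_input):
    # One negative test per (input, invalid value) pair,
    # with every other input held at a valid value.
    return num_inputs * invalid_values_per_input

negative = minimal_negative_tests(num_inputs=5, invalid_values_per_input=3)
positive = 5  # minimal positive test cases, per the article's example

print(negative)                          # 15
print(negative / (negative + positive))  # 0.75, i.e. 75% negative
```

Note how the proportion is driven entirely by the input counts, which is why the same ratio cannot be assumed at higher testing levels where there are fewer, more aggregate inputs.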

In closing, with an understanding of the number of inputs, and of what constitutes an input at a given testing level (the STEM core concept “input granularity principle” of the HBT methodology helps here), it is possible to quickly estimate the minimal number of negative test cases. This is very useful in quickly ascertaining whether the test cases are conformance- and robustness-oriented.