Language shapes the way we think

Language is not just for writing; it plays a significant part in our thinking process. The idea that “language shapes the way we think” has been widely discussed (there is an interesting TED talk and blog on the subject).
Once we are comfortable with a language, we “think” in that language. For example, we form sentences in the mind to understand, form narratives to describe, think in terms of if-then to discern logic, create command sequences to create actions and so on.
Language is made of syntax (‘the rules’) and content (‘the semantics’). The syntax, i.e. the rules, shapes the way and the depth of our understanding of the content.
Language allows us to:

  1. Describe a story “Understand”
  2. Break down the problem “Simplify”
  3. Set up clear boundaries “Baseline”
  4. State the purpose “Goal”
  5. Organize our thoughts “Plan”
  6. Issue instructions to get things done “Action”
  7. State what has happened “Report”
  8. Document stuff so as not to forget “Remember”.
Now let us relate these to ‘testing’. Look at items 1–7. Sound familiar? This is what we do when we commence testing: Understand, Simplify, Baseline, Set up a goal, Plan, Act/do (Test), Report, and Remember for the future. Now let us see how the language we typically use to document/write shapes how we think.
  1. Describe a story “Understand”
    Understanding is a key element of good testing. To understand whose need(s) we are trying to satisfy and the value expected, it is critical to think like an end user. We often use the phrase “think from the user's point of view”, but that is easier said than practiced. A persona-based approach, i.e. describing the behaviour and the associated attribute(s) in the first person, as if the end user were describing it from his or her point of view, puts you in the shoes of the end user, helps you empathise, and therefore enables a deeper and better understanding of the system.
  2. Break down the problem “Simplify”
    Any non-trivial thing is presumed complex. A true hallmark of good understanding is de-mystification: making it simple. From a language perspective, this is about summarizing, i.e. describing in short sentences and not exceeding a paragraph.
  3. Set up clear boundaries “Baseline”
    A clear baseline as to what-to-test (i.e. requirements/features) is necessary to ensure clarity about what we need to evaluate and to confirm that we have indeed covered it completely. Using an imperative-style sentence that is short and precise forces us to establish a clear baseline.
    For example, a customer requirement may be stated as “That the system admin shall be able to …”, while an example of a technical feature is “That the system shall provide ….”.
    Note that a descriptive or narrative style is a strict no-no here.
  4. State the purpose “Goal”
    Testing is about ensuring that the needs of the various end users, delivered via the technical features, do meet their expectations. Not only is it necessary to clearly outline the needs as a baseline, it is equally necessary to ensure that the expectations are well stated.
    This implies that the baseline has to be qualified with a criterion that is genuinely objective. For example, “That the system admin shall be able to do ‘blah’ within ‘x’ seconds on these ‘y’ devices”.
    That is, a short imperative sentence with an objective qualifier.
  5. Organize our thoughts “Plan”
    This is one of the things we do most frequently in daily life: the to-do list. The way we do this is to list activities in a numbered list in sequential order (based on time).
    The language we typically use is first person in an imperative style. The method we use to think is in terms of bullets with an imperative-style heading, with a narrative style to describe the plan of each to-do action in detail.
  6. Issue instructions to get things done “Action”
    This is where we come up with scenarios to test. What I have observed most often is a narrative-style description of the actions to be performed, the data to be used, and the method of validation to assess correctness.
    From a language perspective, it is necessary to be action oriented here, i.e. describe each scenario as a command with the associated expected result in a single sentence, and then describe the steps to perform. For example, “Ensure that the system does/does-not ‘foo’ when ‘bar’ is done”. First be clear about what you want to accomplish before you jump to how to do it.
  7. State what has happened “Report”
    Now this is the fun part, as reporting can describe multiple facts that are all connected, leading to complexity and confusion. From a language perspective, reporting is describing outcomes arranged by elements across time with associated detail, and therefore the sentences that describe these can turn out to be inherently complex. This applies to defect reporting, reporting test cycle outcomes, reporting final test results, describing learnings, etc.
    To ensure clarity of thought, it is necessary to partition the description first into summary and detail, then partition the detail into smaller elements, describing each element along various dimensions.
    In the case of a defect report, we describe a short synopsis of the problem and then describe it with multiple elements like ‘detailed observation’, ‘method to reproduce’, ‘environments observed in’, etc.
    In the case of a test report, we commence with a summary and then describe each element in a section, using different dimensions to describe the elemental information in detail as possible subsections.
  8. Document stuff so as not to forget “Remember”
    This is the free-form part, where we jot down everything we observe and learn from the past. This is the one part where we cannot stick to a single style of syntax; it is a mixture of all the styles mentioned above, a beautiful mixture of terseness and detail.
    The structure of a sentence matters to the way we think, understand, and perceive. Ultimately the content (semantics) matters, but the structure matters too. Syntax is a great guide, one that shapes how you think, enabling you to stay on the path of clarity. Syntax used in a rote manner may be seen as restrictive, but clearly it marks the path of clarity. Use it. Use it wisely.
    It matters how you write/document. Clarity is truly a function of how you describe. Remember, language shapes the way you think, and how it makes others think!
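The summary-then-elements partitioning of a defect report described in section 7 can be sketched as a small data structure. This is only an illustration; the class and field names are assumptions, chosen to mirror the elements named above.

```python
from dataclasses import dataclass, field

@dataclass
class DefectReport:
    """Summary first, then detail partitioned into named elements."""
    synopsis: str                      # one-line summary of the problem
    detailed_observation: str = ""
    method_to_reproduce: str = ""
    environments_observed_in: list = field(default_factory=list)

    def render(self) -> str:
        """Render the report as a synopsis followed by one section per non-empty element."""
        sections = [
            ("Detailed observation", self.detailed_observation),
            ("Method to reproduce", self.method_to_reproduce),
            ("Environments observed in", ", ".join(self.environments_observed_in)),
        ]
        body = "\n".join(f"## {title}\n{text}" for title, text in sections if text)
        return f"# {self.synopsis}\n{body}"
```

The point is structural: the synopsis stands alone for quick scanning, while each dimension of detail gets its own clearly named section.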


A similar article was published in “Musings Over Tea Time – Anthology of T Talks by T Ashok” in May 2014; the full article appears on page 15 under the title “Language shapes the way we think”.

HSTC 2013 conference

Think better using “Descriptive-Prescriptive” approach (Conference)

Testing is interesting as it is unbounded. Customer expectations constantly expand, overall development effort/time is expected to shrink, and quality is expected to constantly increase!

This requires good problem analysis and solution synthesis skills. At the HSTC 2013 “Think Testing” Conference held on Nov 21–22 at Hyderabad, T Ashok presented a talk outlining an interesting thinking approach where analysis is done via “structured description” and solutions are synthesized via “prescription formulation”. Applications of this approach to test baselining, strategy formulation, test design, and intelligent reporting are discussed in the presentation below:


Silence is Golden – The Power of Test Case Immunity

“It is not enough if we analyse defects; it is even more necessary to analyze the white space of ‘non-defects’ to understand what parts of the software have become hardened.” “Analyze defect types of ‘non-defects’ to extract information from the silence, i.e. correlate test cases to potential defect types and analyze which of these passed.”

Over time, as we all know, the same set of test cases stops yielding defects. This is an indication of how a system becomes “immune” to test cases. However, we typically use the presence of defects to make judgements about the quality of the system.

At the SofTec 2012 Conference in Bangalore on July 14, 2012, T Ashok outlined an interesting concept that focuses on the “absence of defect types” and how we can exploit it to optimize test effort. Can we interpret the “silence of test cases” by exploiting the “power of test case immunity” to do less?
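The idea of mining this silence can be sketched in a few lines: map each test case to the defect types it targets, then flag the defect types for which every associated test case passed, i.e. the "hardened" areas. This is a minimal sketch; the function and parameter names are assumptions, not part of the talk.

```python
def hardened_defect_types(test_to_types, passed):
    """Return the defect types whose every associated test case passed.

    test_to_types: dict mapping test-case id -> set of defect types it targets
    passed: set of test-case ids that passed in the latest cycle
    """
    # Invert the mapping: per defect type, which test cases target it?
    by_type = {}
    for test, types in test_to_types.items():
        for t in types:
            by_type.setdefault(t, set()).add(test)
    # A defect type is "silent" (hardened) when all of its tests passed.
    return {t for t, tests in by_type.items() if tests <= passed}
```

For example, if tests T1 and T2 both target boundary defects and both pass, the boundary defect type is reported as hardened, a candidate for doing less in the next cycle.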

View the slide presentation to learn more:

View other presentations from STAG Software Pvt Ltd

“Roadmap to Quality” – Panel discussion at SofTec 2012 Conference

SofTec 2012, Bangalore, July 14, 2012

The panel discussion on “Roadmap to Quality” was brilliant due to the cross-pollination of interesting ideas from non-software domains. Three of the four panelists were from non-software domains – Mehul@Arvind Retail, Soumen@GM, Raghavendra@Trellborg with lone exception of Murthy from Samsung, with moderation done by Ashok@STAG.

The key takeaways from the panel discussion are:

  1. Continuous monitoring helps greatly, as it is like a mirror that constantly reflects what you do; this is what Mehul@Arvind highlighted as being important in his domain of the apparel/retail business. Ashok connected this to the dashboards that are becoming vogue in our workplace, more so in the Agile context.
  2. Soumen@GM stated the importance of early-stage validation like simulation and behavior modelling in the automotive industry, as the cost of a fix at a later stage is very expensive. The moderator connected this to “Shift Left”, the new term in our software industry: how can we move validation to earlier stage(s)?
  3. Raghav@Trellborg, a component manufacturer of high-technology sealing systems, stated that understanding the final context of usage of a component is very important to ensuring high quality. He also stated that testing is deeply integrated into the “shop floor”, i.e. daily work, and that the most important aspect of quality is not QA or QC but the underlying Quality Systems in place: how do these systems ensure that quality is deeply entrenched in daily life? The moderator highlighted the fact that in the software industry we have implemented such systems, but they are still at an organizational level; the need of the hour is to institutionalize them at a personal level.
  4. Finally, Murthy stated that the level of quality needed is not the same in all domains; in certain domains (like mobile) that have disruptive innovation and short life cycles, “we need just enough quality”. He highlighted the need to understand the “technical debt” we can tolerate as a driver for deciding “how much to test”.

You can also read the special news on the panel discussion on Silicon India website.

Relevant topics:
a. Software testing lacking serious effort


You are only as good as your team

A semiconductor company, considered a pioneer in 4G-WiMAX, dreamt of being among the first companies to launch WiMAX solutions. On the verge of launching their product, the only challenge on the un-treaded path was imagination.
Their QA requirements were as unique as the product being developed. They were looking for a partner who would be as spirited as they were. Could STAG prove its mettle? Could we be the team they were hoping for?

One question we are asked almost immediately after saying hello is “Do you have the domain expertise?”, and then we speak about HBT. That couldn't happen this time: pioneers can't ask for experience. Soon we were working on conformance validation (which later became the IEEE standard). Within a few weeks we understood why they were looking for someone beyond ‘I-can-provide-testing-resources-too’.

BuildBot is a system to automate the compile/test cycle required by most software projects to validate code changes. BuildBot watches a source code repository (CVS or another version control system) for interesting changes to occur, then triggers builds with various steps (checkout, compile, test, etc.).

STAG set up a system to automate the building, compiling and validation of code changes in the source code repository. The builds are run on a variety of slave machines, to allow testing on different architectures, compilation against different libraries, kernel versions, etc. The results of the builds are collected and analyzed (compile succeeded/failed/had warnings, which tests passed or failed, memory footprint of generated executables, total tree size, etc.) and displayed on a central web page. The entire system was around 6000 lines of Python code.
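The core build cycle described above can be sketched as a small sequence runner. This is an illustration only, not the actual BuildBot or STAG code; the function names and result format are assumptions.

```python
import subprocess

def run_step(cmd):
    """Run one build step as a subprocess; return (succeeded, combined output)."""
    proc = subprocess.run(cmd, capture_output=True, text=True)
    return proc.returncode == 0, proc.stdout + proc.stderr

def build_cycle(steps):
    """Run checkout/compile/test steps in order, collecting results.

    steps: list of (name, command) pairs. Stops at the first failure,
    since a broken checkout makes the later steps meaningless.
    """
    results = []
    for name, cmd in steps:
        ok, output = run_step(cmd)
        results.append({"step": name, "ok": ok, "output": output})
        if not ok:
            break
    return results
```

A full system like the one described would wrap such a cycle in a repository watcher, fan it out to multiple slave machines, and publish the collected results to a web page.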

This resulted in quick validation of code changes in the repository, leading to reduced rework time and thus increased productivity of the distributed development teams.


Surprises are always around the corner

An industry leader providing end-to-end solutions to automate foreign exchange trading, our customer provides innovative solutions for financial institutions. Their flagship product, the online FOREX trader, connects to the various trading gateways across the world using Adapters. That is no small task: we're talking millions of transactions at sub-200 ms response times.
When we were called in to develop an automation suite for one of its components, we didn't expect anything challenging. Boy, were we in for a surprise or what?

An important middleware component called the Adapter links the FX Inside to the provider; different providers have their own Adapters. The real work of the Adapter is to direct homogeneous data sent by the client while trading into the heterogeneous environment of the provider, and vice versa. These Adapters have to be tested for every release of the core applications. They are backend, non-UI programs that require scripts to be written to test their functionality at the API level.

The objective was to develop an automation suite that could be used to test multiple Adapters on both simulator and live setups. The suite had to be flexible enough to cater, with minimal changes, to new Adapters added in the future.

For that, we interacted with the developers to understand the functionality of the Adapters, and finally we developed a framework that would cater to automating multiple Adapters and adding new Adapters in the future.

The team took an incremental approach towards automation of the Adapters, first interacting with the development and QA teams to gather the necessary information, by which the common scenarios across the Adapters were identified. The critical part of the automation was to develop scripts that could automatically restart the Adapters residing on a remote Linux box, send trading messages to the Adapter component, receive them by listening to the messaging broker, and parse the necessary information.
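The "parse the necessary information" step can be illustrated with a small message parser. The case study does not name the wire protocol, so FIX-style tag=value fields (common in FX trading) are used here purely as an assumed example.

```python
SOH = "\x01"  # FIX uses the SOH control character as a field delimiter

def parse_fix(message: str) -> dict:
    """Parse a FIX-style 'tag=value' message into a dict keyed by tag.

    Illustrative only: the real Adapter messages may use a different format.
    """
    fields = {}
    for pair in message.strip(SOH).split(SOH):
        tag, _, value = pair.partition("=")
        fields[tag] = value
    return fields
```

A validation script would compare the parsed fields received from the broker against the expected values for each trading scenario.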

The result was much better than we anticipated. The execution time of the test scenarios for one Adapter, which earlier took two days, was reduced to thirty minutes for both live and simulator environments, which was phenomenal for the client.

STAG developed a test suite to automate tests for every Adapter at the API level, thus bringing down system testing effort by 40%.


Smart Test Automation to check product functionality cuts test execution time enabling faster market release

STAG Software was working on a dashboard product aimed at the mobile telecommunications industry. It was being developed on the LAMP platform, a solution stack of free, open-source software comprising Linux (operating system), Apache HTTP Server, MySQL (database software), and Perl, PHP or Python.

The major user interface (UI) component of the product, the management UI, had facilities to configure key components and handsets, manage users (create, modify and delete), upload audio/video clips for video-on-demand and live viewing, pin channels for streaming, display the status of streaming servers, streaming sessions and assets, and generate reports for asset inventory and streaming activity.

The scope of the project and range of features dictated that the project would not only be development intensive, but post-development there would also be an equally intensive testing and debugging stage.

STAG automated the execution of a number of product feature test cases.


As some of the product features reached stability, STAG automated the execution of their test cases. Validation of UI-based features was automated using IBM Rational Functional Tester (RFT). The non-UI server-side features and the validation of the product installation process were automated using Perl.

RFT enabled the automation of 400 of the 600 functionality test cases for the management UI. A data-driven framework was developed with the ability to take input data for test cases from an Excel sheet. The 400+ test cases were managed by developing a catalog of around 40 reusable library functions and 22 main test scripts. The same test scripts could be executed on multiple browsers, i.e. Internet Explorer and Mozilla Firefox, which enabled considerable time and effort savings. Moreover, some of the libraries developed could be reused as project assets.
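The data-driven idea, one test script exercised against many rows of input data, can be sketched in a few lines. This is a minimal illustration, not the RFT framework itself: CSV is used instead of Excel to keep the sketch self-contained, and the function names and sample data are assumptions.

```python
import csv
import io

def run_data_driven(test_fn, rows):
    """Run one test function against each data row; return per-row results.

    rows: iterable of dicts (e.g. from csv.DictReader), mirroring the
    Excel-sheet inputs described in the case study.
    """
    results = []
    for row in rows:
        try:
            test_fn(row)
            results.append((row, "pass"))
        except AssertionError as exc:
            results.append((row, f"fail: {exc}"))
    return results

# Example: rows would normally come from a file; an in-memory CSV is used here.
data = io.StringIO("username,expected\nadmin,ok\n,error\n")

def login_check(row):
    # Hypothetical check: blank usernames must be rejected.
    outcome = "ok" if row["username"] else "error"
    assert outcome == row["expected"], f"got {outcome}"

results = run_data_driven(login_check, csv.DictReader(data))
```

The design benefit is the same one the case study reports: new test cases become new data rows, not new scripts, which is how 400+ cases can be served by only 22 main scripts.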


Benefits of automating the test cases were:

  • Test execution effort was brought down from 17 person/machine hours to 7 machine hours
  • 42 person-days of effort were taken to design, develop and test the scripts, which was considerably shorter than anticipated
  • The testing team could focus more on other components/test cases, where manual intervention was essential
  • Cost savings
  • Faster time-to-market

This case study was published in IBM's “The Great Mind Challenge for Business, Vol 2, 2011”. The book recognizes visionary clients who have successfully implemented IBM software (RFT) solutions to create exceptional business value.


HBT enables agility in understanding

A Fortune 100 healthcare company building applications for the next generation of body scanners uses many tools, including operating systems, compilers, web servers, diagnostic tools, editors, SDKs, databases, networking tools, browsers, device drivers, project management tools, and development libraries. The healthcare domain meant compliance with various government regulations, including the FDA's. One such compliance requirement states that every tool used in production should be validated for ‘fitness of use’. This meant as many as 30 tools. How could one possibly test the entire range of applications before it is used? Considering the diverse range of applications, how could they have one team do it?

STAG was the chosen partner not because we had expertise in healthcare applications, but because HBT enables test teams to turn around rapidly. For this job, STAG put together a team with sound knowledge of HBT.

The team relied on one of the most important stages of the six-stage HBT methodology, “Understand Expectations”: a scientific approach to the act of understanding intentions or expectations by identifying the key elements in a requirement/specification, and setting up a rapid personal process powered by scientific concepts to quickly understand the intentions and identify missing information. We look at each requirement, partition it into functional and non-functional aspects, and probe the key attributes to be satisfied. We use a core concept, Landscaping, that enables us to understand the marketplace, end users, business flows, architecture, and other information elements.

Once a tool is identified, the team gathers more information from the public domain. This ensures the demo from the customer (of around 45 minutes) is easily absorbed. During the demo, the customer also shares the key features they intend to use; this information eventually morphs into requirements. The team then explores the application for around 2 days, during which they come up with a list of good questions, clarify the missing elements, and understand the intended behavior. Thus the effort spent to understand and learn the application is as little as 16 hours.


“Never look down” – not the best suggestion for a startup

A talent management company delivering end-to-end learning solutions was on a rapid growth path. The customer base was growing, and they catered to every possible segment. With international awards and mentions in every possible listing, it was dream growth. Each customer was special and of high priority. The sales team filled order books enough to keep engineering busy with customization. Within a short period, it became increasingly difficult to meet schedules, and then instances of customers reporting defects started coming in. The management smartly decided to act on the signs before things got out of hand. It is wise to check if the rest of the team is keeping up with you when you are climbing high.

After a detailed analysis, we put down a list of things that needed attention. With no formal QA practice in place, a makeshift testing team of a few developers and product managers assessed the applications before they were released to customers. Requirement documents for their products did not exist. Defects were not tracked, which eventually resulted in delayed releases to clients.

The team, applying HBT, hypothesized what could possibly go wrong in the product (applying the HBT core concepts of ‘Negative Thinking’ and the ‘EFF model’) and staged these over multiple quality levels. The test scenarios and test cases designed were unique to each of the quality levels formulated, as the focus on the defects to be detected at each level is unique and different (the HBT core concept of the Box model was applied to understand the various behaviors of each requirement/feature/sub-feature and hence derive the test scenarios for each). With close support from the management, we put together a net so tight that no defects could slip through.

A clear mapping of the requirements, potential defects, test scenarios and test cases was done after completing the test design activity to prove the adequacy of the test cases.
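Such an adequacy mapping is essentially a traceability check: every requirement (or hypothesized defect) must be covered by at least one test case. A minimal sketch of that check, with assumed function and identifier names:

```python
def coverage_gaps(requirements, test_cases):
    """Return the requirements not covered by any test case.

    requirements: set of requirement ids
    test_cases: dict mapping test-case id -> set of requirement ids it covers
    """
    covered = set()
    for reqs in test_cases.values():
        covered |= reqs  # union of everything any test case touches
    return requirements - covered
```

An empty result proves the adequacy claim; a non-empty result names exactly which requirements still need test design. The same check applied against a set of hypothesized defects finds defects no test case would catch.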

The robust test design ensured product quality. The percentage of high-priority defects was significantly high (65%), and these were detected in earlier test cycles. The test scenarios and test cases were adequate: defect escapes were brought down from 25% to 2%, and the regression test cycles were reduced from 30 to 12. More importantly, the schedule variance dropped to normalcy.


Houston, we have a problem

A radio transmission by Lovell, “Houston, we've had a problem”, has become widely misquoted in popular culture as “Houston, we have a problem”.

Apollo 13 was the third manned mission by NASA intended to land on the Moon, but it experienced a mid-mission technical malfunction that forced the lunar landing to be aborted. The crew were commander James A. Lovell, Command Module pilot John L. “Jack” Swigert, and Lunar Module pilot Fred W. Haise.

Using this analogy, it is interesting to note that we see different problems at different levels! We are test professionals on the ground, and our customers are far away from us in the business “space”. Remember the risk we put them in!

Click to view the presentation, presented by T Ashok, Founder & CEO, STAG Software, at SOFTEC Asia, July 2, 2011, in Bangalore.