Language shapes the way you think

Language is made of ‘syntax’ (the rules) and ‘semantics’ (the content). Syntax shapes how, and how well, we understand the content. Syntax is a great guide: it provides the rules that keep you on the path of clarity, both clarity of thought and clarity of communication. Language has a significant role in clarity of thought too.

T Ashok recently presented a talk covering these points to a gathering of enthusiastic test professionals at a Pune-based provider of software and services for the communications, media and entertainment industry. Below is the presentation:

A similar article was published in “Musings Over Tea Time – Anthology of T Talks by T Ashok” in May 2014. The full article is available on page 15 under the title “Language shapes the way we think” and can be accessed here.


Think better using “Descriptive-Prescriptive” approach (Conference)

Testing is interesting because it is unbounded: customer expectations constantly expand, overall development effort and time are expected to shrink, and quality must constantly increase!

This requires good problem analysis and solution synthesis skills. At the HSTC 2013 “Think Testing” conference, held on Nov 21–22 in Hyderabad, T Ashok presented a talk outlining an interesting thinking approach in which analysis is done via “structured description” and a solution is synthesized via “prescription formulation”. Applications of this approach to test baselining, strategy formulation, test design and intelligent reporting are discussed in the presentation below:

Silence is Golden – The Power of Test Case Immunity

“It is not enough to analyze defects; it is even more necessary to analyze the white space of ‘non-defects’ to understand which parts of the software have become hardened. Analyze the defect types behind these ‘non-defects’ to extract information from the silence, i.e. correlate test cases to potential defect types and analyze which of these passed.”

We all know that, over time, the same set of test cases stops yielding defects. This is an indication of how a system becomes “immune” to test cases. Yet we typically use the presence of defects to make judgements about the quality of a system.

At the SofTec 2012 Conference in Bangalore on July 14, 2012, T Ashok outlined an interesting concept that focuses on the “absence of defect types” and how we can exploit it to optimize test effort. Can we interpret the “silence of test cases” by exploiting the “power of test case immunity” to do less?
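To make the idea concrete, here is a toy sketch in Python. The mapping of test cases to the defect types they can surface, and the pass/fail record, are invented purely for illustration; a defect type is treated as “hardened” when every test case targeting it passes.

```python
# Toy sketch of "test case immunity" analysis; all names and data
# below are invented for illustration.
covers = {
    "TC1": {"boundary", "timeout"},
    "TC2": {"boundary"},
    "TC3": {"concurrency"},
}
passed = {"TC1": True, "TC2": True, "TC3": False}

# A defect type is "hardened" when every test case targeting it passes:
# the silence of those test cases is itself information.
all_types = set().union(*covers.values())
hardened = {t for t in all_types
            if all(ok for tc, ok in passed.items() if t in covers[tc])}
print("Hardened defect types:", hardened)  # boundary and timeout
```

Hardened defect types are candidates for doing less: their test cases have stopped speaking, and that silence is the signal.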

View the slide presentation to learn more:

View other presentations from STAG Software Pvt Ltd

“Roadmap to Quality” – Panel discussion at SofTec 2012 Conference

SofTec 2012, Bangalore, July 14, 2012

The panel discussion on “Roadmap to Quality” was brilliant due to the cross-pollination of interesting ideas from non-software domains. Three of the four panelists were from non-software domains – Mehul@Arvind Retail, Soumen@GM and Raghavendra@Trellborg – the lone exception being Murthy from Samsung, with moderation by Ashok@STAG.

The key takeaways from the panel discussion were:

  1. Continuous monitoring helps greatly, acting like a mirror that constantly reflects what you do; Mehul@Arvind highlighted this as important in his domain, the apparel/retail business. Ashok connected this to dashboards, which are becoming the vogue in our workplaces, especially in the Agile context
  2. Soumen@GM stated the importance of early-stage validation, such as simulation and behavior modelling, in the automotive industry, as fixes at later stages are very expensive. The moderator connected this to “Shift Left”, the new term in our software industry: how can we move validation to earlier stage(s)?
  3. Raghav@Trellborg, a component manufacturer of high-technology sealing systems, stated that understanding the final context in which a component will be used is very important to ensuring high quality. He also stated that testing is deeply integrated into the “shop floor”, i.e. daily work, and that the most important aspect of quality is not QA or QC but the underlying quality systems in place: how do those systems ensure that quality is deeply entrenched in daily life? The moderator highlighted that in the software industry we have implemented such systems, but still at an organizational level; the need of the hour is to institutionalize them at a personal level
  4. Finally, Murthy stated that the level of quality needed is not the same in all domains; in certain domains (like mobile) with disruptive innovation and short life cycles, “we need just enough quality”. He highlighted the need to understand how much “technical debt” we can tolerate as a driver for deciding “how much to test”

You can also read the special news on the panel discussion on the Silicon India website.

Relevant topics:
a. Software testing lacking serious effort

You are only as good as your team

A semiconductor company, considered a pioneer in 4G WiMAX, dreamt of being among the first companies to launch WiMAX solutions. On the verge of launching their product, the only challenge on the untrodden path was imagination.
Their QA requirements were as unique as the product they were developing. They were looking for a partner who would be as spirited as they were. Could STAG prove its mettle? Could we be the team they were hoping for?

One question we are asked almost immediately after saying hello is “Do you have the domain expertise?”, and then we speak about HBT. That couldn’t happen this time: pioneers can’t ask for experience. Soon we were working on conformance validation (which later became the IEEE standard). Within a few weeks we understood why they were looking for someone beyond ‘I-can-provide-testing-resources-too’.

BuildBot is a system to automate the compile/test cycle required by most software projects to validate code changes. The buildbot watches a source code repository (CVS or another version control system) for interesting changes to occur, then triggers builds with various steps (checkout, compile, test, etc.).

STAG set up a system to automatically build, compile and validate code changes in the source code repository. The builds are run on a variety of slave machines, to allow testing on different architectures, compilation against different libraries, kernel versions, etc. The results of the builds are collected and analyzed (compile succeeded/failed/had warnings, which tests passed or failed, memory footprint of generated executables, total tree size, etc.) and displayed on a central web page. The entire system was around 6000 lines of Python code.
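For readers unfamiliar with BuildBot, a minimal master configuration follows this pattern. This is only a sketch using BuildBot’s current Python API; the repository URL, worker names and build commands are placeholders, and the actual STAG system was far more extensive.

```python
# master.cfg -- minimal BuildBot configuration sketch (placeholders only)
from buildbot.plugins import changes, schedulers, steps, util, worker

c = BuildmasterConfig = {}
c['protocols'] = {'pb': {'port': 9989}}

# Workers (the era's "slave machines") covering different architectures
c['workers'] = [worker.Worker("linux-x86", "secret"),
                worker.Worker("linux-arm", "secret")]

# Poll the source repository for interesting changes
c['change_source'] = [changes.GitPoller(
    repourl="https://example.com/project.git", pollInterval=300)]

# Build steps: checkout, compile, test
factory = util.BuildFactory()
factory.addStep(steps.Git(repourl="https://example.com/project.git"))
factory.addStep(steps.ShellCommand(command=["make", "all"]))
factory.addStep(steps.ShellCommand(command=["make", "test"]))

c['builders'] = [util.BuilderConfig(
    name="full-build",
    workernames=["linux-x86", "linux-arm"],
    factory=factory)]

# Trigger a build on every change picked up by the poller
c['schedulers'] = [schedulers.SingleBranchScheduler(
    name="on-commit",
    change_filter=util.ChangeFilter(branch="master"),
    builderNames=["full-build"])]
```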

This resulted in quick validation of code changes in the repository, leading to reduced rework time and thus increased productivity for distributed development teams.

Surprises are always around the corner

Our customer, an industry leader providing end-to-end solutions to automate foreign exchange trading, builds innovative solutions for financial institutions. Their flagship product, the online FOREX trader, connects to various trading gateways across the world using adapters. That is no small task. We’re talking millions of transactions at sub-200 ms response times.
When we were called in to develop an automation suite for one of its components, we didn’t expect anything challenging. Boy, were we in for a surprise or what?

An important middleware component, called the Adapter, links FX Inside to the provider. Different providers have their own adapters. The real work of the adapter is to direct the homogeneous data sent by the client during trading into the heterogeneous environment of the provider, and vice versa. These adapters have to be tested for every release of the core application. They are backend, non-UI programs, which requires scripts to be written to test their functionality at the API level.

The objective was to develop an automation suite that could be used to test multiple adapters on both simulator and live setups. The suite had to be flexible enough to test new adapters added in the future with minimal changes.

For that, we interacted with the developers to understand the functionality of the adapters, and finally developed a framework that could automate multiple adapters as well as accommodate new ones in the future.

The team took an incremental approach to automating the adapters, first interacting with the development and QA teams to gather the information needed to identify the scenarios common across adapters. The critical part of the automation was developing scripts that could automatically restart the adapters residing on a remote Linux box, send trading messages to the adapter component, receive them by listening to the messaging broker, and parse the necessary information; the remote-restart step is sketched below.
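As an illustrative sketch of just the remote-restart step, one way to script it in Python is with the paramiko SSH library; the host name, install path and process name here are hypothetical, not the customer’s actual setup.

```python
# Hypothetical sketch of restarting an adapter on a remote Linux box.
import paramiko

def restart_adapter(host: str, user: str, key_file: str,
                    adapter_name: str) -> None:
    """Stop and relaunch an adapter process over SSH."""
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(host, username=user, key_filename=key_file)
    try:
        # Stop any running instance of the adapter (ignore if none)
        client.exec_command(f"pkill -f {adapter_name} || true")
        # Relaunch it in the background; the path is a placeholder
        client.exec_command(
            f"nohup /opt/adapters/{adapter_name}/start.sh "
            f"> /dev/null 2>&1 &")
    finally:
        client.close()

# Example (hypothetical host and adapter):
# restart_adapter("fx-test-box", "qa", "~/.ssh/id_rsa", "provider_x")
```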

The result was much better than we anticipated. The execution of the test scenarios for one adapter, which earlier took two days, was reduced to thirty minutes for both live and simulator environments, which was phenomenal for the client.

STAG developed a test suite to automate tests for every adapter at the API level, bringing system testing effort down by 40%.

Smart test automation to check product functionality cuts test execution time, enabling faster market release

STAG Software was working on a dashboard product aimed at the mobile telecommunications industry. It was being developed on the LAMP platform, a free, open-source solution stack comprising Linux (operating system), Apache HTTP Server, MySQL (database software), and Perl, PHP or Python.

The major user interface (UI) component of the product, the management UI, provided facilities to configure key components, configure handsets, manage users (create, modify and delete), upload audio/video clips for video-on-demand and live viewing, pin channels for streaming, display the status of streaming servers, streaming sessions and assets, and generate reports on asset inventory and streaming activity.

The scope of the project and range of features dictated that the project would not only be development intensive; post-development, there would also be an equally intensive testing and debugging stage.

STAG automated the execution of a number of product feature test cases.

400 FUNCTIONALITY TEST CASES AUTOMATED

As some of the product features reached stability, STAG automated the execution of their test cases. Validation of UI-based features was automated using IBM Rational Functional Tester (RFT); the non-UI, server-side features and the product installation process were automated using Perl.

RFT enabled the automation of 400 of the 600 functionality test cases for the management UI. A data-driven framework was developed, with the ability to take input data for test cases from an Excel sheet. The 400+ test cases were managed through a catalog of around 40 reusable library functions and 22 main test scripts. The same test scripts could be executed on multiple browsers, i.e. Internet Explorer and Mozilla Firefox, which also enabled considerable time and effort savings. Moreover, some of the libraries developed could be reused as project assets.
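The framework itself was built in RFT’s scripting environment; the Python sketch below only illustrates the data-driven pattern it followed, in which test inputs live in an Excel sheet and a generic runner replays one row per case. The file name, sheet name and column layout are invented for illustration.

```python
# Illustrative data-driven test runner: reads rows from an Excel sheet.
from openpyxl import load_workbook

def load_test_rows(path: str, sheet: str):
    """Yield one dict per data row; the header row supplies the keys."""
    ws = load_workbook(path, read_only=True)[sheet]
    rows = ws.iter_rows(values_only=True)
    header = next(rows)
    for row in rows:
        yield dict(zip(header, row))

def run_case(case: dict) -> bool:
    """Placeholder for a reusable library function driving the UI."""
    # In the real suite this would drive the management UI via RFT;
    # here we only echo the data-driven inputs.
    print(f"running: {case}")
    return True

if __name__ == "__main__":
    results = [run_case(c) for c in load_test_rows("testdata.xlsx", "Login")]
    print(f"{sum(results)}/{len(results)} cases passed")
```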

FASTER TIME-TO-MARKET, COST SAVINGS

Benefits of automating the test cases were:

  • Test execution effort was brought down from 17 person- and machine-hours to 7 machine hours
  • 42 person-days of effort went into designing, developing and testing the scripts, considerably shorter than anticipated
  • The testing team could focus more on other components and test cases where manual intervention was essential
  • Cost savings
  • Faster time-to-market

——————
This case study was published in IBM’s “The Great Mind Challenge for Business, Vol 2, 2011”. The book recognizes visionary clients who have successfully implemented IBM software (RFT) solutions to create exceptional business value.

HBT enables agility in understanding

A Fortune 100 healthcare company building applications for the next generation of body scanners uses many tools, including operating systems, compilers, web servers, diagnostic tools, editors, SDKs, databases, networking tools, browsers, device drivers, project management tools and development libraries. The healthcare domain means compliance with various government regulations, including the FDA’s. One such regulation states that every tool used in production should be validated for ‘fitness of use’. This meant as many as 30 tools. How could one possibly test the entire range of applications before they are used? And given the diversity of the applications, how could one team do it all?

STAG was the chosen partner not because we had expertise in healthcare applications, but because HBT enables test teams to turn around rapidly. For this job, STAG put together a team with sound knowledge of HBT.

The team relied on one of the most important stages of the six-stage HBT methodology, “Understand Expectations”: a scientific approach to understanding intentions and expectations by identifying the key elements in a requirement or specification, and setting up a rapid personal process, powered by scientific concepts, to quickly understand the intentions and identify missing information. We look at each requirement, partition it into functional and non-functional aspects, and probe the key attributes to be satisfied. We use a key core concept, Landscaping, that enables us to understand the marketplace, end users, business flows, architecture and other information elements.

Once a tool is identified, the team gathers more information from the public domain. This ensures the customer’s demo (of around 45 minutes) is easily absorbed. During the demo, the customer also shares the key features they intend to use; this information eventually morphs into requirements. The team then explores the application for around two days, coming up with a list of good questions, clarifying the missing elements and understanding the intended behavior. The effort spent to understand and learn each application is thus as little as 16 hours.

“Never look down” – not the best suggestion for a startup

A talent management company delivering end-to-end learning solutions was on a rapid growth path. The customer base was growing, and they catered to every possible segment. With international awards and mentions in every possible listing, it was dream growth. Each customer was special and of high priority. The sales team filled the order books enough to keep engineering busy with customization. Within a short period, it became increasingly difficult to meet schedules, and then instances of customers reporting defects started coming in. The management smartly decided to act on the signs before things got out of hand. When you are climbing high, it is wise to check whether the rest of the team is keeping up with you.

After a detailed analysis, we put down a list of things that needed attention. With no formal QA practice in place, a makeshift testing team of a few developers and product managers assessed the applications before they were released to customers. Requirement documents for their products did not exist. There was no tracking of defects, which eventually resulted in delayed releases to clients.

The team, applying HBT, hypothesized what could possibly go wrong in the product (using the HBT core concepts of ‘Negative Thinking’ and the ‘EFF model’) and staged the potential defects over multiple quality levels. The test scenarios and test cases designed were unique to each quality level, since the focus on the defects to be detected at each level is unique and different (the HBT core concept of the Box model was applied to understand the various behaviors of each requirement/feature/sub-feature and hence derive the test scenarios for each). With close support from the management, we put together a net so tight that no defect could slip through.

After completing the test design activity, a clear mapping of requirements, potential defects, test scenarios and test cases was produced to prove the adequacy of the test cases.

The robust test design ensured product quality. The percentage of high-priority defects was significantly high (65%), and these were detected in the earlier test cycles. The test scenarios and test cases proved adequate: defect escapes were brought down from 25% to 2%, and regression test cycles were reduced from 30 to 12. More importantly, the schedule variance dropped back to normal.

Houston, we have a problem

A radio transmission by Lovell, “Houston, we’ve had a problem”, has become widely misquoted in popular culture as “Houston, we have a problem”.

Apollo 13 was the third manned NASA mission intended to land on the Moon, but a mid-mission technical malfunction forced the lunar landing to be aborted. The crew comprised commander James A. Lovell, Command Module pilot John L. “Jack” Swigert, and Lunar Module pilot Fred W. Haise.

Using this analogy, it is interesting to note that we see different problems at different levels! We are test professionals on the ground, and our customers are far away from us in the business “space”. Remember the risk we put them in!

Click to view the presentation, delivered by T Ashok, Founder & CEO, STAG Software, at SOFTEC Asia on July 2, 2011, in Bangalore.