“The Ugly Duckling” – The changing face of testing

The discipline of testing has metamorphosed significantly over the last decade. We illustrate this with a light-hearted tale inspired by Hans Christian Andersen’s “The Ugly Duckling”.

Click the presentation below to read how Joe the tester evolves from an ugly duckling into a beautiful swan.

Perspectives on “Improving productivity in test engineering”

We live in interesting times. Our customers want more, and they want it quicker, while organizations are pressured to cut costs. Over the years, we have matured significantly, setting up systems (processes) to cut waste and embracing technology to work faster. The greed to do more, quicker and cheaper is relentless! What more can we do?

In the presentation “Perspectives on improving productivity in test engineering”, we have looked at the problem of improving productivity from a different view: a “system perspective”, identifying the key properties that any system (in this case, a “test engineering system”) should have. The key properties of a system are effectiveness (how well we are doing), consistency (anyone can do well) and efficiency (how quickly and cheaply we do it). What is the bearing of these properties on productivity improvement, and what can be done to enhance them?

View the presentation below:

Becoming a Test Craftsman – Inspiration from other disciplines

Craftsmanship is the ultimate goal that all good engineers crave. Craftsmanship delivers a “high”; it goes beyond “just doing the work”. Becoming a craftsman is a *very personal aspect* that requires one to reflect deeply on one’s work and be continually inspired by *other* disciplines. As a test professional, I have been constantly striving to become a craftsman, and it has been a never-ending journey. Nevertheless, the journey is rewarding and gives me that “high”.

Over the years, I have been inspired by other disciplines – by people from other walks of life, and by other interesting non-engineering activities. Some of these inspirations have come from doctors, my dog, Sherlock Holmes, the Japanese, and so on.

Click here to read more about this.

We would love to hear your comments. You can also follow us on Twitter @stagsoft.

Rapid action team – building a team from scratch

The customer is a major technology innovator and a global leader in semiconductors for wired and wireless communications. Their products enable the delivery of voice, video, data and multimedia to and throughout the home, the office and the mobile environment.

The principal in the US wanted to explore the possibility of moving core product development to their captive center in India, with the business case analyzed and approved. The challenge, however, was that they were unsure of the time required to build a team of engineers with domain knowledge and relevant experience. The impact of such a delay on the road map, and on the associated planned revenue, was identified as a major risk. Could STAG mitigate this risk? Read more.

A manager responsible for development was relocated to India and given the responsibility of building the initial team and showing success. An offer had been made to a full-time senior person to manage QA, and they were waiting for him to join. With the market going through some turbulence, getting a person on board full-time to take over QA responsibility was taking its own time.

The management was aware that STAG had taken up a challenge in the past to arrest their defect escapes. So they threw us a new one – to build an effective QA team with the following goals:

  • Build the initial QA team in 3 different sub-groups
  • Complete knowledge transfer so that ramp-up goes as per the business plan
  • Build the complete test lab on time
  • Commit deliverables and adhere to the plan
  • Show improvement in productivity and quality over time
  • Transition the core team into the customer organization if all set goals are met, and partner with them to build the temporary staff required to achieve the new set of goals for the product road map

We identified a large team: some members had knowledge of HBT and STEM™ but were new to the domain, while the rest had testing experience in the same industry. Both groups were given a clear definition of the quarterly goals under focus, along with the STAG way of tracking and measuring customer expectations. The entire team worked closely with the QA manager to set up the complete lab, commit release dates for key customer releases, and deliver on time with the defined acceptable quality.

With the complete lab in place and no constraint forcing them to skip any type of test, the team started enhancing their scope and improving test assets, thereby further increasing stakeholder confidence. Certain areas for automation were identified, and new members were added to support this initiative. With the experience of multiple releases, the team understood the dependencies, began defining the right regression scope, and reduced release cycle time wherever the business situation demanded.

Typical success factors such as good planning, effective tracking, timely releases with good quality, team flexibility and attention to business impact were seen in every subsequent release. Both Development and QA got the required approval to take the core team on board and to define the temporary staff requirement and duration to manage the remaining releases in the road map. The journey continues with STAG as a QA partner: some members have smoothly transitioned to the customer organization as the core team, and the additional extended-team requirement is still supported by STAG.

  1. What were thought to be tough constraints – building the team on time – were met with our approach, which had a high impact on the revenue plans defined for that product line
  2. The full-time core team has been formed, and the extended team of contractors is working well
  3. Ownership of a product line was smoothly transitioned as per plan

Staying on top is more difficult than getting there

Business is good when you are alone at the top. If you are not prudent, however, mighty as you may be, a small nudge from somebody can pull you all the way down. Our customer is the market leader in providing learning solutions to universities and schools. Apart from building innovative solutions, they proved to be business-smart by outsourcing to India not just for cost benefits, but for quality equivalent to that of their own team.

The senior management of the organization had apprehensions about its success, owing to competency, process, training, communication and cultural differences. Did STAG manage to allay their fears? Read on.

Headquartered in Washington DC, US, the customer is the largest e-learning provider in the school and university space. Growing competition was driving the product team towards newer innovation, quicker concept-to-market and higher quality. As the development team shifted gears into agile mode, the pressure was on QA to cut down the test cycle time and qualify the products quicker, with the same effectiveness. The small team was found wanting for additional people, and aggressive deadlines convinced the management to outsource some parts of testing.

We realized the initial challenge would be to reinforce the customer’s confidence in their decision to outsource. This would mean total transparency and a continuous communication channel into the team’s day-to-day activities. It was also necessary that we adopt their QA process and the terminology they use. The time available for setting up the team and ramping up was very short. This meant we needed someone to travel to the US at short notice, undergo product training, and return to transfer the knowledge to the team.

The team was chosen based on previous experience in testing web applications and previous teaching experience. Having functional experts meant resolving many of the issues internally. On returning to India after product and process training, the test lead initiated knowledge transfer to the team. The training covered various topics, including setup, features and the test process. Once the training was complete, the team started executing dry runs of the tests. Though the main objective was to understand the client’s test process, it was also the quickest way to learn the product.

The setup activity happened in close coordination with the onsite analysts and support team. This also helped verify the installation manual. The setup involved a multi-platform, multi-database test lab and was organized within a week of the engagement.

  • Significant reduction in test execution cycle time (66%)
  • Created an excellent knowledge base, so that subsequent ramp-ups of the team were done with just one week’s notice
  • The depth of testing knowledge demonstrated by the team built confidence, and the team was allowed to update test cases. This helped reduce defect escapes to the field from 13% to 4%
  • What started as an outsourcing experiment with four engineers led to a major part of test execution being outsourced (22 engineers) within eight weeks

Achievements

  • We managed short-notice deadlines (as little as 2 days) by stretching ourselves when needed
  • We executed more than 60 cycles of testing on three major releases and six minor releases (hot-fixes and service packs)
  • We helped the client to stop many critical issues from escaping
  • They seek and trust our ‘Quality’ advice in ‘go/no-go’ situations
  • We accepted the challenge of simultaneously testing multiple versions
  • We work with their support team to isolate field defects
  • Our test lab has been flexible to the changing system requirements
  • We add and maintain their test documents
  • We are able to reset the test lab in 4 hrs to a newer configuration
  • We are able to increase the team size with only one week’s notice (whenever needed)
  • We have the ability to make a new engineer productive in 3 days!
  • We internally developed the training material to create a strong knowledge transfer
  • We have completed testing on or ahead of schedule in 95% of the cycles
  • We have brought down the test cycle time by over 60%
  • We provide a status update daily, and we do a check-in every alternate week over a telecon
  • We internally resolve 4 out of every 5 clarifications needed by our engineers before we approach the client – we ‘disturb’ the customer less!

On-time release to market helped the company stay afloat

A pharmaceutical company decides to ride the IT bandwagon. They establish a company to develop Enterprise Resource Planning (ERP) solutions to meet the demands of small and medium-scale pharmaceutical, chemical and food-processing industries. The challenge at hand was to build a software development organization and release the first version of the product to the market in six months. With all the functional experts it had, and a solution specially designed and developed around pharmaceutical-industry-specific best practices, it could not fail. Or could it?

The ERP solution complies with Current Good Manufacturing Practices (cGMP) and the requirements of international regulatory bodies such as the US FDA, EDQM, TGA, MHRA, MCC, etc. Considering the tight timeline for the product release, the company preferred to jumpstart its QA process by partnering with a third-party testing organization specialized in test engineering. They expected this organization to provide the software testing expertise required to deliver a high-quality product to market on time and, at the same time, to work within the constraints of the company.

The first step involved knowledge transfer from the customer. Using flow charts and use cases, we got the customer’s concurrence on our understanding of all the modules and of each module’s interfaces with the others. Next, we used the STEM™ Behavior Stimuli (BeST) technique to design test cases module-wise. To increase the depth of testing, we applied the boundary value analysis, equivalence class and domain-specific special value techniques. We increased the breadth of testing by adding scenarios for different types of tests based on the requirements under focus, and we identified module-level interfaces to other modules to design end-to-end test scenarios.
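
As a flavour of how the first two techniques translate into concrete test data, here is a minimal sketch in Java, assuming a hypothetical batch-quantity field that accepts integers from 1 to 1000 (the field and its range are illustrative, not taken from the actual product):

    // Minimal sketch: equivalence classes and boundary values for a
    // hypothetical batch-quantity field that accepts 1..1000.
    import java.util.ArrayList;
    import java.util.List;

    public class BatchQuantityTestData {

        // Stand-in for the validation rule under test.
        static boolean isValidQuantity(int qty) {
            return qty >= 1 && qty <= 1000;
        }

        public static void main(String[] args) {
            List<Integer> cases = new ArrayList<>();
            // Equivalence classes: one representative value per class.
            cases.add(500);   // valid class: inside 1..1000
            cases.add(-50);   // invalid class: below the range
            cases.add(5000);  // invalid class: above the range
            // Boundary values: on, just inside and just outside each edge.
            cases.addAll(List.of(0, 1, 2, 999, 1000, 1001));

            for (int qty : cases) {
                System.out.printf("qty=%5d -> valid=%b%n", qty, isValidQuantity(qty));
            }
        }
    }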

We jumpstarted the customer’s QA. Institutionalizing our test engineering practices within the organization led to the on-time launch of the product, boosting stakeholder confidence in the quality of the product and hence in the investment.

Perfect software to stop perfect crime

The Intelligence Department of the Karnataka Police decided to implement a solution to analyze the Call Detail Records (CDRs) obtained from telecom service providers. The tool was to be deployed across the state. The solution provides critical information about the subscribers whose CDRs are analyzed: location, geographic movement, calls to other monitored suspects, and so on. This information is critical to almost any case in the present times. The head of this initiative, a very IT-savvy officer, decided that the solution needed to be validated by a specialist organization if it was to be defect-free. In came STAG.

The solution is a Call Detail Record analyzer intended for law enforcement or intelligence analysts who have to, need to, want to, or are expected to work with telephone call detail records. A CDR is composed of fields that describe the exchange: the number making the call, the number receiving the call, the start time, duration, end time, route taken, etc. The tool also integrates with a mapping server, enabling a visual display of the routes and locations of the suspect.

The solution was being developed by a small but very inventive team from a small town. However, being a small team meant the code was self-validated. This made the customer a little jittery. They sought STAG’s services to validate the solution end-to-end.

STAG assessed the development process of the organization to understand how well the application was built. The application then had to be put through a thorough multi-level evaluation, from field-level testing to load testing. The tool inched its way through these gates, and required structural modifications to clear some of them. STAG worked closely with the department during the training and saw it through to a successful release. The tool immediately started cracking some pending cases and is now sought after in other states.

The software Hara-kiri

The Japanese subsidiary of a global system integrator is required to outsource a part of its projects to the subsidiary in a ‘friendly’ country for political reasons. This ‘global’ delivery model, however, has its banes: the pieces of code don’t necessarily integrate as expected.

After burning their hands a couple of times and re-coding, a pattern became noticeable. Since freezing the outsourcing was not an option, they mandated delivery of unit test cases along with the results, hoping to improve the quality of the delivery. A whole lot of test cases arrived, and the code passed them all. This is where we entered the scene.

When this large system integrator, holder of the third-largest market share in Japan, faced quality issues in the code delivered by their outsourcing partner, they asked for STAG’s involvement. We decided the best way was to assess the vast set of unit test cases and reports that came along with the delivery. The assessment was done by comparing the available artifacts (test cases, Data Definition Language (DDL), screen transitions and bean specifications) with those defined per HBT. The assets were assessed to understand:

  1. Quality of the test cases
  2. Test Completeness
  3. Test Coverage
  4. Comparison with Ideal Unit Testing

For good unit testing, the unit should be validated from both an external view (using black-box testing techniques) and an internal/structural view (using white-box testing techniques). In this case, all the test cases provided had been designed using black-box techniques per the specification, not using the code structure.
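
To make the distinction concrete, here is a minimal sketch using a hypothetical discount routine (not from the client’s code base) showing what each view contributes; run with java -ea to enable the assertions:

    // Hypothetical routine under test: premium customers get 10% off
    // on orders above 1000; everyone else gets 2%.
    public class UnitTestViews {

        static double discount(double amount, boolean premium) {
            if (amount <= 0) {
                throw new IllegalArgumentException("amount must be positive");
            }
            if (premium && amount > 1000) {
                return amount * 0.10;
            }
            return amount * 0.02;
        }

        public static void main(String[] args) {
            // Black-box view: cases derived from the specification alone.
            assert discount(500, false) == 10.0;   // regular customer
            assert discount(2000, true) == 200.0;  // premium, large order

            // White-box view: cases chosen by reading the code structure,
            // so that every branch is exercised - the amount<=0 guard and
            // the premium-but-not-large boundary a spec-only reading misses.
            assert discount(1000, true) == 20.0;   // boundary: not "> 1000"
            try {
                discount(0, true);                 // guard branch
                assert false : "expected IllegalArgumentException";
            } catch (IllegalArgumentException expected) {
                // guard behaved as specified
            }
        }
    }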

The results astounded the client. Apart from issues like poor test data, incomplete steps and insufficient negative tests, the tests were found to have been designed applying only black-box techniques, i.e., the structural aspects were not evaluated at all. The results were used to confront the partner and renegotiate all future engagement contracts and deliverables.

Back to the future >> preparing for an avalanche

When a bank implements major solutions, you need to watch like a hawk. The smallest glitch can set off an avalanche. When we were asked to validate the performance of an integrated financing solution for a leading commercial bank, we assumed it was like any other project. It wasn’t. The challenge thrown at us was to ensure the system was future-proof for 3 years! From our experience, we knew that scripting and simulating a large user load was the easier part. Banks run on data and documentation, and this product was intended to cater to the agri-commodity business of the bank. We foresaw an avalanche of data thundering down!

The product is intended to enable financing for farmers against the commodity they have produced. The bank offers loans against the commodity stored in warehouses. With a focus on commodity finance, the solution encompasses the various modules of commercial operations, right from sourcing of the account through operations, monitoring and control, recovery management, and audit and closure through repayment. Every process that is initiated has to go through an approval step, and most processes have both initiation and approval stages before completion!

Based on this understanding, and after some initial discussions with the bank, a detailed operational profile was derived. Over 40 scenarios were identified for the test, with a concurrency of 600 users.

The plan was to conduct the load test in 3 different combinations, such that the defined peak concurrency of each module was achieved during one of the combinations.

Since the key requirement was to conduct the test simulating 3 years of usage of the system, the critical success factor was test data creation. The huge test data set had to be created before the actual test could be run. The system was heavily loaded with data: 2,000 users, 5,000 borrowers, 300 warehouses (100 government, 200 private/godown warehouses), 44,000 loans, 50,000 liquidations, 10 image uploads per borrower and per warehouse created, and so on…

Scripts were developed to populate the required test data in the system to replicate three years of usage. The first step was to create the 2,000 users in the system. User creation meant creating more data for each user: the required role and the branch for which the user was created. After creating the users, we created the warehouses and the borrowers required for the test. The next major activity was loan booking and liquidation: 40,000 loan bookings and 50,000 liquidation records were created by running JMeter scripts.
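
As an illustration, a data-seeding helper of the following shape can generate a CSV that JMeter’s CSV Data Set Config feeds into such scripts. The field names and value ranges here are assumptions made for the sketch, not the bank’s actual schema:

    // Minimal sketch of a data-seeding helper; one CSV row per loan
    // booking, to be consumed by a JMeter CSV Data Set Config element.
    import java.io.FileWriter;
    import java.io.IOException;
    import java.util.Random;

    public class SeedLoanData {
        public static void main(String[] args) throws IOException {
            Random rnd = new Random(42);                  // fixed seed: reproducible runs
            try (FileWriter out = new FileWriter("loans.csv")) {
                out.write("borrowerId,branchCode,warehouseId,loanAmount\n");
                for (int i = 1; i <= 40_000; i++) {       // 40,000 loan bookings
                    int borrower = 1 + rnd.nextInt(5000); // 5,000 borrowers
                    int branch = 1 + rnd.nextInt(200);    // hypothetical branch count
                    int warehouse = 1 + rnd.nextInt(300); // 300 warehouses
                    int amount = 10_000 + rnd.nextInt(990_000); // hypothetical loan range
                    out.write(borrower + "," + branch + "," + warehouse + "," + amount + "\n");
                }
            }
        }
    }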

The interesting part was the set of functional issues that surfaced during the data creation, in a product that was supposed to have been tested thoroughly for functionality. The customer couldn’t have been happier. Once these were fixed, we were prepared for the next set of performance-related issues. Steadily, one step at a time, we ensured the avalanche would not occur for the next three years.

HBT implementation benefits more than just testing

On various platforms, we have been talking about and advocating how the scientific approach of HBT, and its method STEM, deliver key business value propositions. This time we thought it would be prudent to share our experience of implementing them in projects, and to convey the results as well as the interesting benefits they bring while delivering clean software to our customers.

HBT was applied on various projects, executed on diverse technologies, in a variety of domains, across different phases of the product life cycle. The people involved ranged from those with no experience to those with 5 years of experience.

We observed that HBT can be plugged into any stage or situation of a project for a specific need, and one can quickly see the results and get the benefits desired at that stage.

Our experience and analysis showed varied benefits: rapid reduction in ramp-up time, creation of assets for learning, a consistent way of delivering quality results even with less experienced team members, increased test coverage, scientific estimation of test effort, optimization of regression test effort/time/cost, and selection of the right, minimal set of test cases for automation, with a corresponding reduction in automation development effort/time/cost.

Following are the key metrics and the results/benefits achieved in some of the projects where HBT was implemented:

Project 1:

Domain – SaaS / bidding software

Technology – Java, Apache (web server), JBoss (app server), Oracle 9i with a cluster server on Linux

Lines of code – 308,786

Project Duration – 4 months

Due to time constraints, D1, D2 and D4 were done almost in parallel for this application, which was developed from scratch.

  • D1 – Business Value Understanding (Total effort of 180 person hours)
    • 3 persons with 3+yrs experience were involved (had no prior experience in this particular domain)
    • 4 main modules with 25 features listed.
    • Landscaping, Viewpoints, Use cases, Interaction matrix (IM) were done.
    • D1 evolved and was developed by asking the design/dev team a lot of questions.
  • D2 – Defect Hypothesis (Total effort of 48 person hours)
    • 3 persons with 3+yrs experience were involved.
    • 255 potential defects were listed.
  • D4 – Test Design (Total effort of 1080 person hours)
    • 3 persons with 3+yrs experience were involved.
    • Applied decision tables (DT) for designing test scenarios (see the sketch after this list).
    • A total of 10,750 test cases were designed and documented.
    • Of these, 7,468 (69%) were positive test cases and 3,282 (31%) were negative test cases.
    • Requirement Traceability Matrix (RTM) and Fault Traceability Matrix (FTM) were prepared.
  • D8 – Test Execution (Total effort of 3240 person hours)
    • 9 persons were involved in test execution and bug reporting/bug fixes verification (3 persons with 3+ yrs experience and 6 persons with 2+ yrs experience).
    • 12 builds were tested in 3 iterations and 4 cycles.
    • A total of 2,500 bugs were logged, of which 500 were of high severity.
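
The sketch below shows how a decision table drives test scenarios. The bidding rule used here (a bid is accepted only when the auction is open, the bidder is registered, and the bid exceeds the current highest) is a hypothetical stand-in, not the actual product logic:

    // Minimal sketch of deriving test scenarios from a decision table,
    // for a hypothetical bidding rule.
    public class BidDecisionTable {

        // Decision table, one row per rule: {auction open, bidder
        // registered, bid higher than current, expected: accept}.
        static final boolean[][] RULES = {
            { true,  true,  true,  true  },  // happy path
            { true,  true,  false, false },  // bid too low
            { true,  false, true,  false },  // unregistered bidder
            { false, true,  true,  false },  // auction closed
        };

        // Stand-in for the rule under test.
        static boolean acceptBid(boolean open, boolean registered, boolean higher) {
            return open && registered && higher;
        }

        public static void main(String[] args) {
            for (boolean[] r : RULES) {
                boolean got = acceptBid(r[0], r[1], r[2]);
                System.out.printf("open=%b registered=%b higher=%b -> %b (expected %b)%n",
                        r[0], r[1], r[2], got, r[3]);
            }
        }
    }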

Key benefits:

  • No bugs were found in UAT.
  • All change requests raised by the QA team were accepted by the customer and the dev team.
  • The interaction matrix was very useful for selecting test cases for regression testing, and also for selecting the right, minimal set of test cases for automating sanity testing.
  • Since regression testing windows were short (2 to 3 days), the interaction matrix helped keep regression testing optimal and effective.
  • The method, structure and templates (IM, DT, RTM, FTM, test case, reporting) developed and used in this project are being used as a reference model for other projects at this customer.

Project 2:

A web service with 5 features that had frequent enhancements and bug fixes (Maintenance)

Technology – Java, Apache Web Server

Project Duration – 4 weeks

  • D1 – Business Value Understanding (Effort of 6 hours)

Mind-mapped the product, and also the impact of other services and their usage on this service

  • D2 – Defect Hypothesis (Effort of 5 hours)

Listed 118 potential defects

Key Benefits:

  • Preparation of the D1 document brought the ramp-up time for new members (developers/testers) to understand the product down from the earlier 16 hours to 4 hours.
  • Any member added to this team was productive from day one and could start testing in any regression cycle for enhancements and bug fixes.
  • Listing the potential defects enabled us to add test cases that were missing from the existing test case set.

Project 3:

Domain – E-learning

Technology – ASP.Net, IIS, SQL Server, Windows

Validation of a new feature added to the product

Duration – 2 weeks

  • D1 – Business Value Understanding (Effort of 5 hours)

Understood the feature by asking questions and interacting with the development team over emails/conference calls

  • D2 – Defect Hypothesis (Effort of 2 hours)

Listed 130 potential defects by thinking from various perspectives

  • D4 – Test Design (Effort of 16 hours)

Designed and documented 129 test cases

  • D8 – Test Execution (test execution effort – 626 person hours; bug reporting/bug-fix verification effort – 144 person hours)

Executed test cases by performing 2 cycles of testing and 2 regression cycles

8 new test cases were added while executing the test cases

31 bugs were found during test execution, of which 23 were of high severity. 29 of the bugs could be linked to potential defects visualized and listed earlier; 2 of the bugs found were not linked to any documented test case.

Key Benefits:

  • Arrived at a consistent way of understanding a feature and designing test cases for new features, irrespective of the experience of the team member involved

Project 4:

Domain – Video Streaming

Technology – C++, PHP, Apache, MySQL, Linux

An evolving new product in the very initial cycles of development/testing

Duration – 4 weeks

People involved – 2 fresh test engineers (no previous work experience, but trained in HBT/STEM)

  • D1 – Business Value Understanding (Effort of 32 hours)

Developed an understanding of the product, by questioning, in the form of a features/sub-features list, landscaping, critical quality attributes and usage environment/use cases

  • D2 – Defect Hypothesis (Effort of 40 hours)

Listed over 150 potential defects

Key Benefits:

  • The 2 fresh engineers could understand the product features, the business flow and its usage in a scientific manner, and document it. They could also think about and visualize possible defects, enabling them to come up with the test cases needed to identify and eliminate those defects.
  • The process of doing D1 and D2 by the fresh engineers generated a lot of useful questions, which gave the senior engineers better understanding and different perspectives on the product’s behavior. This helped them design more interesting test cases to capture defects during test execution.
  • The assets created as D1 and D2 are helping the other members of the team quickly ramp up on the product features and gain a detailed understanding in 50% less time.

Project 5:

Domain – Telecom protocol

3GPP TS 25.322 V9.1.0 Standards

The objective: estimate the effort for the complete test design of the RLC protocol by going through the existing, very high-level test specifications and designing test cases for 2 sample functions

Duration – 3 weeks

None of the persons involved had any previous experience in testing protocol stacks.

  • D1 – Business Value Understanding (Effort of 40 person hours)

Went through the generic RLC standards and, specifically, understood the 2 functions: sequence number check, and single-side re-establishment in AM mode (a sketch of the sequence-number check appears after this list)

Prepared flow charts with data/message flows between different layers

Prepared box model illustrating various inputs, actions, outputs and external parameters

  • D2 – Defect Hypothesis (Effort of 16 person hours)

Listed 28 generic potential defects and the defect types

  • D4 – Test Design (Effort of 48 person hours)

Prepared 2 input tables and 2 decision tables

Designed 26 test scenarios (6 positive, 20 negative) and 44 test cases (6 positive, 38 negative)

  • Performed gap analysis of missing test cases in the customer’s test specification document for the 2 functions (Effort of 12 person hours)
  • Estimated time and effort for the complete RLC test case design, based on the above data (Effort of 4 person hours)
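
For a flavour of what the sequence-number check test cases exercise, here is a minimal sketch of an in-window check under modulo arithmetic. The 12-bit AM-mode sequence number space and the simple window rule are simplifying assumptions; the actual rules in 3GPP TS 25.322 are considerably richer:

    // Minimal sketch of a sequence-number window check with wrap-around.
    public class RlcSequenceCheck {
        static final int MOD = 4096;          // assumed 12-bit sequence numbers

        // True if sn falls inside the receive window [lower, lower+size)
        // under modulo-4096 arithmetic.
        static boolean inWindow(int sn, int lower, int size) {
            return ((sn - lower + MOD) % MOD) < size;
        }

        public static void main(String[] args) {
            int lower = 4090, size = 16;      // window wraps past 4095 to 9
            // Boundary-flavoured probes: window edges, just outside, and
            // wrap-around values. Negative cases dominate, as in the
            // designed test set (38 of the 44 cases were negative).
            int[] probes = {4089, 4090, 4095, 0, 9, 10, 2048};
            for (int sn : probes) {
                System.out.printf("sn=%4d in window? %b%n", sn, inWindow(sn, lower, size));
            }
        }
    }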

Key benefits:

  • Performed gap analysis in the existing high level test specs
  • 20 times more test cases were designed for the 2 functions covered
  • 86% of the test cases added were of the negative type
  • The test cases developed were detailed and covered various combinations of inputs, parameters and intended/unintended behaviors
  • The test cases developed were detailed enough to be easily converted into test scripts using any tool
  • Performed estimation for the RLC test design covering 22 functions
  • Estimated 256 test scenarios and 1,056 test cases to be designed, with an effort of 446 person hours, for the complete RLC

Project 6:

Domain – Retail

Validate railway booking software on a point-of-sale device

Technology – Java

Duration – 4 weeks

  • D1 – Business value understanding (Effort of 16 person hours)

Documented software overview, features list, use cases list, features interaction matrix, value prioritization and cleanliness criteria

  • D2 – Defect Hypothesis (Effort of 18 person hours)

Listed 20 potential defects by applying negative thinking, and 54 potential defects by applying the Error-Fault-Failure model. The potential defects were categorized into 46 defect types and mapped to the features listed.

  • D3 – Test Strategy (Effort of 6 person hours)

Based on the listed potential defect types, arrived at the test types, levels of quality and test design techniques needed as part of test strategy/planning: quality level 1 (input validation and GUI validation), quality level 2 (feature correctness), quality level 3 (stated quality attributes) and quality level 4 (use case correctness).

  • D4 – Test Design (Effort of 24 person hours)

Designed and documented 30 test scenarios (15 positive, 15 negative) and 268 test cases (197 positive, 71 negative) for quality level 1; 70 test scenarios (21 positive, 49 negative) and 123 test cases (55 positive, 68 negative) for quality level 2; and 8 test scenarios for quality level 4. Created box models and decision tables to arrive at the test scenarios.

Prepared requirement traceability and fault traceability matrices

  • D8 – Test Execution (Effort of 32 person hours)

Out of the 293 test cases designed, 271 were executed and 22 could not be executed.

52 defects (27 high, 12 medium, 13 low) and 8 suggestions were logged.

Quality level 1 – 23 defects (2 high, 11 medium, 10 low) and 2 suggestions

Quality level 2 – 27 defects (25 high, 2 low) and 6 suggestions

Quality level 4 – 2 defects (2 medium)

Key Benefits:

  • Complete validation of the product was performed successfully by one senior engineer guiding 2 fresh test engineers with no previous work experience; none of them had prior experience in this particular domain
  • All the suggestions logged were accepted and valued

Project 7:

Domain – Mobile gaming

Technology – Java, Symbian

Duration – 3 weeks

  • D1 – Business Value Understanding (Effort of 16 person hours)

Achieved product understanding by documenting software overview, technology, environment of usage, features list, use cases list, mapping of use cases to features, features interaction matrix, value prioritization and cleanliness criteria.

  • D2 – Defect Hypothesis (Effort of 16 person hours)

Listed 96 potential defects by categorizing issues related to installation, download, invoking the application, connectivity, input validation, search, subscription, authorization, configuration, control, dependency, pause/resumption, performance and memory.

Mapped the features to the potential defects

  • D3 – Test Strategy (Effort of 4 hours)

Based on the listed potential defect types, arrived at 4 levels of quality (game initialization and invocation correctness, game subscription correctness, game download correctness, dependency correctness). The different test types were mapped to the 4 quality levels.

  • D4 – Test Design (Effort of 20 person hours)

Box models and decision tables were created

Designed and documented 37 test scenarios and 66 test cases

Key Benefits:

  • Complete validation of the product was performed successfully by one senior engineer guiding 2 fresh test engineers with no previous work experience; none of them had prior experience in this particular domain
  • The assets created here became a useful reference for other mobile gaming projects, for product understanding and validation