The Japanese subsidiary of a global system integrator is required, for political reasons, to outsource part of its projects to a subsidiary in a 'friendly' country. However, this 'global' delivery model has its drawbacks: the pieces of code do not always integrate as expected.
After burning their hands and re-coding a couple of times, a pattern became noticeable. Since halting outsourcing was not an option, they mandated delivery of unit test cases along with the test results, hoping to improve the quality of the delivery. A large number of test cases arrived, and the code passed them all. This is where we entered the scene.
When this large system integrator, holding the third largest market share in Japan, faced quality issues in the code delivered by its outsourcing partner, it sought STAG's involvement. We decided the best approach was to assess the vast set of unit test cases and reports that accompanied the delivery. The assessment was done by comparing the available artifacts (test cases, Data Definition Language (DDL), screen transitions and bean specifications) with those defined per HyBIST. The assets were assessed to understand:
- Quality of the test cases
- Test completeness
- Test coverage
- Comparison with ideal unit testing
For good unit testing, the unit should be validated from both an external view (using black-box testing techniques) and an internal/structural view (using white-box testing techniques). In this case, all the test cases provided were designed using black-box techniques based on the specification alone, not on the code structure. The sketch below illustrates the difference between the two views.
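To make the distinction concrete, here is a minimal JUnit sketch. The `DiscountCalculator` bean, its 10,000-yen threshold and its negative-amount check are all hypothetical, not taken from the client's code base; they simply stand in for a unit whose code contains a branch the specification never mentions.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;

import org.junit.jupiter.api.Test;

// Hypothetical unit under test, used only to illustrate the two views.
class DiscountCalculator {
    // Spec: orders of 10,000 yen or more get a 5% discount.
    // The code also rejects negative amounts, a branch the spec does not mention.
    int discountedPrice(int amount) {
        if (amount < 0) {
            throw new IllegalArgumentException("amount must be non-negative");
        }
        if (amount >= 10_000) {
            return amount - (amount * 5 / 100);
        }
        return amount;
    }
}

class DiscountCalculatorTest {
    private final DiscountCalculator calc = new DiscountCalculator();

    // External (black-box) view: cases derived from the specification alone,
    // e.g. boundary values around the 10,000-yen threshold.
    @Test
    void blackBoxBoundaryCases() {
        assertEquals(9_999, calc.discountedPrice(9_999));   // just below the threshold
        assertEquals(9_500, calc.discountedPrice(10_000));  // at the threshold
    }

    // Internal (white-box) view: cases derived from the code structure,
    // here the negative-amount branch that no spec-based case exercises.
    @Test
    void whiteBoxNegativeAmountBranch() {
        assertThrows(IllegalArgumentException.class, () -> calc.discountedPrice(-1));
    }
}
```

A test suite built only from the specification would never contain the second test, leaving that branch of the code entirely unexercised; this is the kind of gap the assessment looked for.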
The results astounded the client. Apart from issues such as poor test data, incomplete test steps and insufficient negative tests, the tests were found to have been designed using only black-box techniques, i.e., the structural aspects were not evaluated at all. The findings were used to confront the partner and to renegotiate all future engagement contracts and deliverables.