This tutorial explains software Verification and Validation (V&V) and gives precise definitions of both terms. Verification and validation encompass a wide array of SQA activities, including technical reviews, quality and configuration audits, performance monitoring, simulation, feasibility studies, documentation review, database review, algorithm analysis, development testing, qualification testing, and installation testing.
The Test plan section describes the overall strategy for integration. Testing is divided into phases and builds that address specific functional and behavioral characteristics of the software.
It also covers several test strategies:
- Top-down testing
- Bottom-up Testing
- Thread testing
- Stress testing
- Back-to-back testing
Verification and Validation
Software testing is one element of a broader topic that is often referred to as verification and validation.
- Verification refers to the set of activities that ensure that software correctly implements a specific function. “Are we building the product right?”
- Validation refers to a different set of activities that ensure that the software that has been built is traceable to customer requirements. “Are we building the right product?”
Once software integration and integration testing are complete, the software is considered fully assembled as a package, and a final series of software tests, validation testing, may begin.
Validation can be defined in many ways, but a simple definition is that validation succeeds when the software functions in a manner that can reasonably be expected by the customer. These expectations are defined in the Software Requirements Specification (SRS), a document that describes all user-visible attributes of the software. The specification contains a section called Validation Criteria.
Validation test criteria
Software validation is achieved through a series of black box tests that demonstrate conformity with requirements. A test plan outlines the classes of tests to be conducted and a test procedure defines specific test cases that will be used to demonstrate conformity with requirements. Both the plan and the procedure are designed to ensure that all functional requirements are satisfied, all performance requirements are achieved, documentation is correct and human-engineered, and other requirements are met (e.g., transportability, compatibility, error recovery, maintainability).
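The idea of black-box validation tests that trace back to requirements can be sketched as follows. The `authenticate` function and the requirement ID are hypothetical stand-ins, not part of any real system:

```python
# A minimal black-box validation test sketch. `authenticate` is a
# hypothetical stand-in for a function realizing an SRS requirement.
def authenticate(username, password):
    """Toy implementation of an assumed requirement, 'REQ-12:
    registered users may log in with correct credentials'."""
    registered = {"alice": "s3cret"}
    return registered.get(username) == password

# Black-box test cases: derived from the requirement, not from the code.
def test_valid_credentials_accepted():
    assert authenticate("alice", "s3cret") is True

def test_invalid_credentials_rejected():
    assert authenticate("alice", "wrong") is False

test_valid_credentials_accepted()
test_invalid_credentials_rejected()
print("all validation test cases passed")
```

Each test case exercises the software only through its external interface, which is what makes it a black-box test.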
An overall plan for the integration of the software and a description of specific tests are documented in a Test Specification. The specification is a deliverable of the software engineering process and becomes part of the software configuration. The scope of testing summarizes the specific functional, performance, and internal design characteristics to be tested. The testing effort is bounded, criteria for completing each test phase are described, and schedule constraints are documented.

The test plan section describes the overall strategy for integration. Testing is divided into phases and builds that address specific functional and behavioral characteristics of the software. Each phase and sub-phase covers a broad functional category within the software and can generally be related to a specific domain of the program structure; program builds are therefore created to correspond to each phase. The following criteria and corresponding tests are applied in all test phases:
- Interface integrity: internal and external interfaces are tested as each module (or cluster) is incorporated into the structure.
- Functional validity: tests designed to uncover functional errors are conducted.
- Information content: tests designed to uncover errors associated with local or global data structures are conducted.
- Performance: tests designed to verify performance bounds established during software design are conducted.
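A performance criterion of this kind can be checked mechanically. The sketch below assumes a hypothetical time bound taken from the design documents and a placeholder workload:

```python
# Sketch of a performance-criterion check: verify an operation stays
# within a time bound established at design time. The bound and the
# workload are both assumptions for illustration.
import time

DESIGN_BOUND_SECONDS = 0.5  # hypothetical bound from the design docs

def operation_under_test():
    # Placeholder workload standing in for the real module call.
    return sum(range(100_000))

start = time.perf_counter()
operation_under_test()
elapsed = time.perf_counter() - start
assert elapsed < DESIGN_BOUND_SECONDS, f"bound exceeded: {elapsed:.3f}s"
print(f"completed in {elapsed:.3f}s (bound {DESIGN_BOUND_SECONDS}s)")
```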
Top-down Testing
- Top-down testing tests the high levels of a system before testing its detailed components.
- A program is represented as a single abstract component with sub components represented by stubs.
- Stubs have the same interface as the component but very limited functionality. After the top-level component has been tested, its stub components are implemented and tested in the same way.
- This process continues recursively until the bottom level components are implemented. The whole system may then be completely tested.
- Strict top-down testing is difficult to implement because of the requirement that program stubs, simulating lower levels of the system, must be produced.
- The main disadvantage of top-down testing is that test output may be difficult to observe.
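The stub mechanism described above can be sketched in a few lines. All names here (`convert`, `fetch_rate_stub`) are hypothetical; the point is that the stub shares the real component's interface but has very limited functionality:

```python
# Top-down testing sketch: the top-level component is tested first,
# with its lower-level dependency replaced by a stub.
def fetch_rate_stub(currency):
    # Stub: same interface as the real rate service, fixed response.
    return 2.0

def convert(amount, currency, fetch_rate=fetch_rate_stub):
    # Top-level component under test; its dependency is injected so the
    # stub can stand in until the real fetch_rate is implemented.
    return amount * fetch_rate(currency)

# Test the top level before any lower-level component exists.
assert convert(10, "EUR") == 20.0
print("top-level component passes with stubbed dependency")
```

Once the real lower-level component is implemented, it is substituted for the stub and tested the same way.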
Bottom-up Testing
- Bottom-up testing is the converse of top-down testing. It involves testing the modules at the lower levels of the hierarchy first, then working up the hierarchy until the final module is tested.
- The advantages of bottom-up testing correspond to the disadvantages of top-down testing, and vice versa.
- Bottom-up testing is appropriate for object-oriented systems: individual objects are tested using their own test drivers, then integrated, and the object collection is tested. The testing of these collections should focus on object interactions.
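A minimal sketch of a bottom-up test driver, using hypothetical modules (`tokenize` at the bottom of the hierarchy, `parse_record` one level above it):

```python
# Bottom-up testing sketch: the lowest-level module is exercised first
# by a throwaway test driver, then integrated upward.
def tokenize(line):
    # Lowest-level module: split a CSV line into trimmed fields.
    return [field.strip() for field in line.split(",")]

def parse_record(line):
    # Next level up: builds on the already-tested tokenize().
    name, qty = tokenize(line)
    return {"name": name, "qty": int(qty)}

# Test driver for the bottom-level module.
assert tokenize(" a , 1 ") == ["a", "1"]
# Once the bottom level passes, drive the level above it.
assert parse_record("widget, 3") == {"name": "widget", "qty": 3}
print("bottom-up integration passed")
```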
Thread Testing
- Thread testing is a strategy that may be used after processes or objects have been individually tested and integrated into sub-systems.
- It is an event-based approach in which tests are based on the events that trigger system actions.
- The processing of each possible external event ‘threads’ its way through the system processes or objects with some processing carried out at each stage. Thread testing involves identifying and executing each possible processing ‘thread’.
- Complete thread testing may be impossible because of the number of possible input and output combinations. In such cases, the most commonly exercised threads should be identified and selected for testing.
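One such thread can be sketched as a single event traced end to end through the objects it touches. The components and the "deposit" event below are hypothetical:

```python
# Thread-testing sketch: follow one external event through every
# cooperating object it touches, in order.
log = []

class Validator:
    def handle(self, event):
        log.append("validated")
        return event["amount"] > 0

class Ledger:
    def handle(self, event):
        log.append("posted")
        return {"posted": event["amount"]}

def process(event):
    # One processing 'thread': the path a single event takes.
    if Validator().handle(event):
        return Ledger().handle(event)
    return None

result = process({"amount": 42})
# Assert the whole thread executed in order, not just the final result.
assert result == {"posted": 42}
assert log == ["validated", "posted"]
print("thread for 'deposit' event exercised:", log)
```

The `log` assertion is what distinguishes a thread test from an ordinary functional test: it checks the path the event took, not only the output.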
Stress Testing
Some classes of system are designed to handle a specified load. Tests must be designed to ensure that the system can process its intended load; this usually involves planning a series of tests in which the load is steadily increased.
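A series of steadily increasing loads can be driven by a simple harness like the one below. The request handler and the per-request time budget are assumptions for illustration:

```python
# Stress-test sketch: step the load up and check the system keeps
# processing within a (hypothetical) per-request time budget.
import time

def handle_request(payload):
    # Stand-in for the real request handler.
    return sum(payload)

def run_at_load(n_requests):
    # Issue n_requests and return the average seconds per request.
    start = time.perf_counter()
    for _ in range(n_requests):
        handle_request(range(100))
    return (time.perf_counter() - start) / n_requests

BUDGET = 0.01  # assumed per-request budget in seconds
for load in (10, 100, 1000):  # steadily increasing load
    avg = run_at_load(load)
    assert avg < BUDGET, f"budget exceeded at load {load}: {avg:.6f}s"
    print(f"load {load:>5}: avg {avg * 1e6:.1f} us/request")
```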
Back-to-back Testing
Back-to-back testing may be used when more than one version of a system is available for testing. The same tests are presented to both versions of the system and the results are compared. Back-to-back testing is usually only possible in the following situations:
- When a system prototype is available
- When reliable systems are developed using N-version programming
- When different versions of a system have been developed for different types of computers
The steps involved in back-to-back testing are:
- Prepare a general-purpose set of test cases.
- Run one version of the program with these test cases and save the results to a file.
- Run another version of the program with the same test cases, saving the results to a different file.
- Automatically compare the files produced by the two program versions.
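The steps above can be sketched in miniature. Both "versions" here are hypothetical implementations of the same specification, and the files are replaced by in-memory result lists for brevity:

```python
# Back-to-back testing sketch: feed the same cases to two versions of
# a program and compare the outputs automatically.
def version_a(x):
    return x * x

def version_b(x):
    # e.g. an N-version or ported implementation of the same spec
    return x ** 2

# Step 1: a general-purpose set of test cases.
cases = [0, 1, -3, 7, 100]

# Steps 2-3: run each version, saving results separately.
results_a = [version_a(c) for c in cases]
results_b = [version_b(c) for c in cases]

# Step 4: compare the two result sets automatically.
mismatches = [(c, a, b)
              for c, a, b in zip(cases, results_a, results_b) if a != b]
assert not mismatches, f"versions disagree on: {mismatches}"
print("versions agree on all", len(cases), "test cases")
```

Any mismatch between the two result sets flags a test case for manual investigation: it indicates a defect in at least one of the versions.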