One key aspect of any project is the testing performed during development by the delivery team and, during the testing phase, by both the delivery team and the customer's User Acceptance Testing (UAT) team. Testing is the last critical step that confirms the requirements have been met and that the system is a working, functioning whole, ready for deployment.

Therefore, I thought it appropriate to present some testing documentation on the subject that I have been carrying around for quite some time. This information comes from the Carnegie Mellon Software Engineering Institute (SEI) and describes the various types of testing commonly carried out during the development process, up to deployment.

The following is an overview of each testing approach.

Design model validation: Each phase in the development process that creates a model of the product, or some portion of it, should include testing activities that verify the syntax of the model and validate it against the required system. These tests can serve as the exit criterion for that phase. We use "model" in a broad sense, to refer to non-software assets that represent a product for the purpose of either making predictions about the product implementation or prescribing constraints for other assets. A business case for a product line is a model; it predicts how profitable the product line will be. Software designs are models; they predict behavior and also impose constraints on implementations.
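As a minimal sketch of what validating a design model can mean in practice, consider a small state-machine model of a door (the model and the required behavior here are illustrative, not from the source). The syntax check verifies the model is well formed; the validation check confirms it satisfies a required behavior of the system.

```python
# A design model as data: a state machine for a door.
MODEL = {
    "states": {"closed", "open", "locked"},
    "initial": "closed",
    "transitions": {
        ("closed", "open_door"): "open",
        ("open", "close_door"): "closed",
        ("closed", "lock"): "locked",
        ("locked", "unlock"): "closed",
    },
}

def syntax_ok(model):
    """Syntax check: every transition starts and ends in a declared state."""
    return model["initial"] in model["states"] and all(
        src in model["states"] and dst in model["states"]
        for (src, _event), dst in model["transitions"].items()
    )

def run(model, events):
    """Execute the model against a sequence of events."""
    state = model["initial"]
    for event in events:
        state = model["transitions"][(state, event)]
    return state

assert syntax_ok(MODEL)
# Validation against a requirement: a locked door opens only after unlocking.
assert run(MODEL, ["lock", "unlock", "open_door"]) == "open"
```

Passing both checks could then serve as part of the exit criterion for the design phase.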

Unit testing: Testing for implementation defects begins with the most basic unit of code development. This unit may be a function, class, or component. This kind of testing occurs during coding; therefore, the intention is to direct the testing search to those portions of the code that are most likely to contain faults, such as complex control structures. As each unit is constructed, it is tested to ensure that it (1) does everything that its specification claims and (2) does not do anything it should not. A test case associates a set of input values with the result that should be produced by a correctly functioning system. The functional testing strategy uses the specification of the unit to determine which inputs to use in the testing. This strategy provides evidence that the unit does everything it is supposed to. A second strategy, termed structural testing, selects test inputs on the basis of the structure of the code that implements the functionality of the unit. This strategy provides evidence that the unit does not do anything it is not supposed to.
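The two strategies can be illustrated with a small unit (the `clamp` function below is a made-up example, not from the source): the functional cases come from the specification, while the structural cases are chosen to exercise every branch of the implementation, including its error path.

```python
def clamp(value, lo, hi):
    """Restrict value to the closed interval [lo, hi]."""
    if lo > hi:
        raise ValueError("lo must not exceed hi")
    if value < lo:
        return lo
    if value > hi:
        return hi
    return value

# Functional (specification-based) cases: each input is paired with the
# result a correctly functioning unit must produce.
assert clamp(5, 0, 10) == 5      # value inside the interval
assert clamp(-3, 0, 10) == 0     # value below the interval
assert clamp(42, 0, 10) == 10    # value above the interval

# Structural (code-based) cases: inputs chosen so every branch in the
# implementation is exercised, including boundaries and the error path.
assert clamp(0, 0, 10) == 0      # boundary: value == lo
assert clamp(10, 0, 10) == 10    # boundary: value == hi
try:
    clamp(1, 10, 0)              # lo > hi must raise
    raise AssertionError("expected ValueError")
except ValueError:
    pass
```

Together, the two sets give evidence both that the unit does what its specification claims and that no branch of its code does something it should not.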

Subsystem integration testing: The integration of basic units, even those that have been adequately unit tested, may produce failures resulting from the interaction of the units. Timing discrepancies and type/subtype relationships can be the source of these errors. The tests are constructed from the use cases used to represent the full product's requirements. The integration test plan should describe tests that have been systematically selected from the interactions among the units being integrated. Protocol descriptions between pairs of units or flows through sets of units that implement a specific pattern of behavior can be used to select the test cases. Test cases should include instances in which the error-handling capability of the units is evaluated, such as when one unit throws an exception that should be caught by another unit.
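A minimal sketch of the exception-handling case, assuming two hypothetical units: a `Parser` that raises on bad input and a `Loader` whose contract is to catch that exception and fall back. The integration tests are selected from the interaction protocol between the pair, not from either unit alone.

```python
class ParseError(Exception):
    """Raised by the parsing unit on malformed input."""

class Parser:
    def parse(self, text):
        if not text.strip():
            raise ParseError("empty input")
        return text.split(",")

class Loader:
    """Integrates the parser; its contract is to catch ParseError."""
    def __init__(self, parser):
        self.parser = parser

    def load(self, text):
        try:
            return self.parser.parse(text)
        except ParseError:
            return []  # error-handling agreement between the two units

# Integration cases derived from the protocol between the units:
loader = Loader(Parser())
assert loader.load("a,b,c") == ["a", "b", "c"]  # normal flow through both units
assert loader.load("   ") == []                 # Parser raises, Loader catches
```

Each unit could pass its own unit tests, yet the pair would still fail if `Loader` neglected to catch `ParseError`; only an integration case exposes that.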

System integration testing: When some critical mass of subsystems has been fully developed and tested, the focus shifts to representative tests of the completed application as a whole to determine whether a product does what it is supposed to do. These representative tests are selected to cover the complete specification for the portion of functionality that has been produced. The amount of testing a specific function receives is based either on its frequency of use (operational profiles) or on the criticality of the function (risk-based testing). Special forms of system testing include load testing (to determine if the software can handle the anticipated amount of work), stress testing (to determine if the software can handle an unanticipated amount of work), and performance testing (to determine if the software can handle the anticipated amount of work in the required time).
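The load and performance variants can be sketched against a stand-in entry point (the `handle_request` function and the one-second budget below are assumptions for illustration): load testing drives the anticipated volume of work through the system, and performance testing additionally holds it to a time requirement.

```python
import time

def handle_request(payload):
    """Stand-in for the completed application's entry point."""
    return sum(payload)

# Load test: the anticipated amount of work completes without failure.
anticipated_work = [list(range(100)) for _ in range(1_000)]
results = [handle_request(p) for p in anticipated_work]
assert len(results) == 1_000

# Performance test: the same anticipated work also finishes within the
# required time budget (an assumed one-second requirement here).
start = time.perf_counter()
for payload in anticipated_work:
    handle_request(payload)
assert time.perf_counter() - start < 1.0
```

A stress test would follow the same shape but push the volume well beyond `anticipated_work` to find the point where the system degrades.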

In addition to the testing that serves as the exit criteria for process phases, the five types of tests described next are applied to verify certain product properties.

Regression testing: Regression testing ascertains that software which exhibited the expected behavior prior to a change continues to exhibit that behavior after the change. Regression tests are constructed, and periodically applied, to determine whether the software under test remains correct and consistent over time. Regression testing is triggered by changes that affect a predefined scope of assets or that affect certain critical assets. The actual test cases used in regression testing are no different from any other test cases: the regression test suite is a sample of the functional tests from the original test suites administered prior to any changes.
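One common way to realize this is to record earlier functional cases as input/expected-output pairs and replay them after every change (the `slugify` function and the pairs below are illustrative, not from the source):

```python
def slugify(title):
    """Unit under regression test: turn a title into a URL slug."""
    return "-".join(title.lower().split())

# A sample of functional tests from the original suites, recorded as
# (input, expected-output) pairs before any changes were made.
REGRESSION_SUITE = [
    ("Hello World", "hello-world"),
    ("  Testing   Types ", "testing-types"),
    ("UAT", "uat"),
]

def run_regression(fn, suite):
    """Replay the suite; return the cases whose behavior has changed."""
    return [(inp, fn(inp)) for inp, expected in suite if fn(inp) != expected]

# After a change, an empty failure list means no regression was introduced.
assert run_regression(slugify, REGRESSION_SUITE) == []
```

Any nonempty result pinpoints exactly which previously correct behavior the change broke.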

Conformance testing: Conformance testing determines whether the software under test can be used in a specific role in an application. The conformance test set should cover all the required interactions between all the components that will participate in the application.
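As a small sketch, suppose the role is a "cache" that the application interacts with through `get` and `put` (the role, the component, and its contract here are hypothetical): conformance checks both that the required interactions exist and that they behave as the role demands.

```python
# Interactions the application requires of any component filling the role.
REQUIRED_INTERACTIONS = ("get", "put")

class DictCache:
    """Candidate component for the cache role."""
    def __init__(self):
        self._data = {}

    def put(self, key, value):
        self._data[key] = value

    def get(self, key, default=None):
        return self._data.get(key, default)

def conforms(component, required=REQUIRED_INTERACTIONS):
    """Structural conformance: every required interaction is callable."""
    return all(callable(getattr(component, name, None)) for name in required)

cache = DictCache()
assert conforms(cache)      # the required interactions are all present
cache.put("k", 1)
assert cache.get("k") == 1  # and they honor the role's behavioral contract
```

The full conformance set would cover every required interaction between every component participating in the application, not just this pair.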

Acceptance testing: To validate the claims of the manufacturer or provider, the consumer performs acceptance testing. The acceptance test is more realistic than the system test, since the application being tested runs in the consumer's actual environment.

Deployment testing: Deployment testing is conducted by the development organization prior to releasing the software to customers for acceptance testing. Where acceptance testing focuses on the functionality of the delivered product, deployment testing covers all the unique system configurations on which the product is to be deployed. This testing focuses on the interaction between the product and platform-specific libraries, device drivers, and operating systems. During the deployment testing phase, the application's ability to deploy or install itself is also tested.
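A simple way to sketch this is to sweep the install step across every supported configuration (the configurations, library names, and `install` function below are hypothetical stand-ins for a real product's platform matrix):

```python
# The unique system configurations on which the product is to be deployed.
SUPPORTED_CONFIGS = [
    ("linux", "x86_64"),
    ("linux", "arm64"),
    ("windows", "x86_64"),
]

# Platform-specific libraries the product must bind to on each target.
PLATFORM_LIBRARIES = {
    ("linux", "x86_64"): "libproduct-x64.so",
    ("linux", "arm64"): "libproduct-arm64.so",
    ("windows", "x86_64"): "product-x64.dll",
}

def install(config):
    """Simulated install step: resolve the platform-specific library."""
    lib = PLATFORM_LIBRARIES.get(config)
    if lib is None:
        raise RuntimeError(f"no library for {config}")
    return lib

# Deployment test: the install step succeeds on every target configuration.
for config in SUPPORTED_CONFIGS:
    assert install(config)
```

In a real deployment suite each configuration would be an actual machine or image, but the structure, one pass of the install and smoke checks per configuration, is the same.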

Reliability models: Testing is also used to estimate the reliability of a software component or system; however, establishing the reliability of a piece of software through testing is a costly process. Test cases are selected based on the expected frequency of use of each product feature.
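Frequency-based selection can be sketched with an operational profile (the features and frequencies below are invented for illustration): each feature's share of the test budget is proportional to how often it is expected to be used.

```python
import random

# Operational profile: expected frequency of use for each feature.
OPERATIONAL_PROFILE = {"search": 0.70, "checkout": 0.25, "admin": 0.05}

def sample_tests(profile, n, seed=0):
    """Draw n test targets in proportion to the operational profile."""
    rng = random.Random(seed)          # seeded for a reproducible suite
    features = list(profile)
    weights = [profile[f] for f in features]
    return rng.choices(features, weights=weights, k=n)

selected = sample_tests(OPERATIONAL_PROFILE, 1_000)
# Heavily used features receive proportionally more of the test budget,
# so the reliability estimate reflects the user's view of the system.
assert selected.count("search") > selected.count("admin")
```

Failures observed under such a profile feed the reliability model, since they approximate the failure rate a typical user would actually experience.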