Monday, August 27, 2012

Software Testing Techniques and Levels

In this post, I'm going to describe techniques and strategies for software testing. Techniques cover the different ways testing can be accomplished, and they can be classified from three points of view: Preparation, Execution and Approach.

Preparation: From the preparation point of view, there are two testing techniques: Formal Testing and Informal Testing.

Formal Testing: Testing performed with a plan, a documented set of test cases, and similar artifacts that outline the methodology and test objectives. Test documentation can be developed from requirements, design, equivalence partitioning, domain coverage, error guessing, etc. The level of formality and thoroughness of test cases will depend upon the needs of the project. Some projects can have rather informal ‘formal test cases’, while others will require a highly refined test process. Some projects will require light testing of nominal paths, while others will need rigorous testing of exceptional cases.

Informal Testing: Ad hoc testing performed without a documented set of objectives or plans. Informal testing relies on the intuition and skills of the individual performing the testing. Experienced engineers can be productive in this mode by mentally performing test cases for the scenarios being exercised.

Execution: From the execution point of view, the two testing types are Manual Testing and Automated Testing.

Manual Testing: Manual testing involves direct human interaction to exercise software functionality and note behavior and deviations from expected behavior.

Automated Testing: Testing that relies on a tool, built-in test harness, test framework, or other automatic mechanism to exercise software functionality, record output, and possibly detect deviations. The test cases performed by automated testing are usually defined as software code or script that drives the automatic execution.
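
As a minimal sketch of what this looks like in practice, here is an automated test case written with Python's standard unittest framework; the function under test, add, is a made-up example:

import unittest

# Hypothetical function under test.
def add(a, b):
    return a + b

class AddTests(unittest.TestCase):
    def test_add_positive_numbers(self):
        # The test drives the code and checks the output automatically.
        self.assertEqual(add(2, 3), 5)

    def test_add_negative_numbers(self):
        self.assertEqual(add(-2, -3), -5)

if __name__ == "__main__":
    unittest.main()   # the test runner executes the cases and reports results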

Approach: From the testing approach point of view, the two testing types are Structural Testing and Functional Testing.

Structural Testing: Structural testing depends upon knowledge of the internal structure of the software. Structural testing is also referred to as white-box testing.

Data-flow Coverage: Data-flow coverage tests paths from the definition of a variable to its use.
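
For example (a minimal sketch, with a hypothetical final_price function), the variable discount is defined on two different paths and used once, giving two definition-to-use paths to cover:

# Hypothetical function used to illustrate definition-to-use (def-use) paths.
def final_price(price, is_member):
    discount = 0                              # definition of 'discount'
    if is_member:
        discount = 10                         # redefinition on the membership path
    return price - price * discount // 100   # use of 'discount'

# Data-flow coverage asks for a test over each definition-to-use path:
assert final_price(100, False) == 100   # covers the pair: discount = 0  -> use
assert final_price(100, True) == 90     # covers the pair: discount = 10 -> use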



Control-flow Coverage: Control-flow coverage exercises the paths through the program's control structures. The common control-flow criteria are statement, branch, and condition coverage.


Statement Coverage: Statement coverage requires that every statement in the code under test has been executed.

Branch Coverage: Branch coverage requires that every point of entry and exit in the program has been executed at least once, and every decision in the program has taken all possible outcomes at least once.

Condition Coverage: Condition coverage is branch coverage with the additional requirement that “every condition in a decision in the program has taken all possible outcomes at least once.” Multiple condition coverage requires that all possible combinations of the possible outcomes of each condition have been tested. Modified condition coverage requires that each condition has been tested independently.
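
The following sketch illustrates the three control-flow criteria on one hypothetical function, can_withdraw, whose single decision contains two conditions:

# Hypothetical function used to illustrate the control-flow coverage criteria.
def can_withdraw(balance, amount, account_open):
    if account_open and amount <= balance:   # one decision with two conditions
        return True
    return False

# Statement coverage: every statement executes at least once.
#   can_withdraw(100, 50, True)  -> True   (reaches 'return True')
#   can_withdraw(100, 50, False) -> False  (reaches 'return False')
#
# Branch coverage: the decision takes both the True and the False outcome;
# the same two calls already achieve this.
#
# Condition coverage: each individual condition ('account_open' and
# 'amount <= balance') takes both True and False at least once:
assert can_withdraw(100, 50, True) is True     # account_open=True,  amount<=balance=True
assert can_withdraw(100, 200, True) is False   # account_open=True,  amount<=balance=False
assert can_withdraw(100, 50, False) is False   # account_open=False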

Functional Testing: Functional testing compares the behavior of the test item to its specification without knowledge of the item’s internal structure. Functional testing is also referred to as black-box testing.

Requirements Coverage: Requirements coverage requires at least one test case for each specified requirement. A traceability matrix can be used to ensure that requirements coverage has been satisfied.
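
A traceability matrix can be as simple as a mapping from requirement IDs to test case names. The sketch below, with made-up IDs and test names, checks that every requirement has at least one test case:

# A minimal sketch of a traceability matrix (all IDs and names are hypothetical).
requirements = {"REQ-001", "REQ-002", "REQ-003"}

traceability_matrix = {
    "REQ-001": ["test_login_with_valid_credentials"],
    "REQ-002": ["test_login_with_invalid_password", "test_account_lockout"],
}

# Requirements coverage asks for at least one test case per requirement.
uncovered = [req for req in sorted(requirements)
             if not traceability_matrix.get(req)]

if uncovered:
    print("Requirements without test cases:", uncovered)   # e.g. ['REQ-003']
else:
    print("All requirements are covered by at least one test case.")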

Input Domain Coverage: Input domain coverage executes a function with a sufficient set of input values from the function’s input domain. The notion of a sufficient set is not completely definable, and complete coverage of the input domain is typically impossible. Therefore the input domain is broken into subsets, or equivalence classes, such that all values within a subset are likely to reveal the same defects. Any one value within an equivalence class can be used to represent the whole equivalence class. In addition to a generic representative, each extreme value within an equivalence class should be covered by a test case. Testing the extreme values of the equivalence classes is referred to as boundary value testing.
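
As an illustration (the is_adult function and its 0-120 valid range are hypothetical), the input domain splits into equivalence classes for valid minor ages, valid adult ages, and out-of-range values, with boundary values tested at each edge:

import unittest

# Hypothetical function under test: classifies an age in the range 0..120.
def is_adult(age):
    if age < 0 or age > 120:
        raise ValueError("age out of range")
    return age >= 18

class AgeEquivalenceClassTests(unittest.TestCase):
    # One representative value per equivalence class.
    def test_minor_class(self):
        self.assertFalse(is_adult(10))      # class: 0 <= age < 18

    def test_adult_class(self):
        self.assertTrue(is_adult(40))       # class: 18 <= age <= 120

    def test_invalid_class(self):
        with self.assertRaises(ValueError):
            is_adult(-5)                    # class: age < 0

    # Boundary values at the edges of the classes.
    def test_boundaries(self):
        self.assertFalse(is_adult(0))
        self.assertFalse(is_adult(17))
        self.assertTrue(is_adult(18))
        self.assertTrue(is_adult(120))
        with self.assertRaises(ValueError):
            is_adult(121)

if __name__ == "__main__":
    unittest.main()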

Output Domain Coverage: Output domain coverage executes a function in such a way that a sufficient set of output values from the function’s output domain is produced. Equivalence classes and boundary values are used to provide coverage of the output domain. A set of test cases that “reach” the boundary values and a typical value for each equivalence class is considered to have achieved output domain coverage.
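
A small sketch of the same idea applied to outputs: the hypothetical grade function has three output classes, and the inputs are chosen so that each class, including its boundary values, is actually produced:

# Hypothetical function whose output domain is {"fail", "pass", "distinction"}.
def grade(score):
    if score >= 80:
        return "distinction"
    if score >= 50:
        return "pass"
    return "fail"

# Output domain coverage: choose inputs so that every output class is produced,
# including values at the boundaries between classes.
assert grade(0) == "fail"
assert grade(49) == "fail"          # just below the "pass" boundary
assert grade(50) == "pass"          # boundary value
assert grade(79) == "pass"          # just below the "distinction" boundary
assert grade(80) == "distinction"   # boundary value
assert grade(100) == "distinction"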

Various Software Testing Levels

Although many testing levels tend to be combined with certain techniques, there are no hard and fast rules. Some types of testing imply certain lifecycle stages, software deliverables, or other project context. Other types of testing are general enough to be done almost any time on any part of the system. Some require a particular methodology. When appropriate, common uses of a particular testing type are described below. The project’s test plan will normally define the types of testing that will be used on the project, when they will be used, and the strategies they will be used with. Test cases are then created for each testing type.

Unit Testing: A unit is an abstract term for the smallest thing that can be conveniently tested. This will vary based on the nature of a project and its technology but usually focuses at the subroutine level. Unit testing is the testing of these units. Unit testing is often automated and may require creation of a harness, stubs, or drivers.
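
A minimal sketch of a unit test, using Python's unittest as the harness; the unit total_with_tax and the hand-written StubTaxProvider that isolates it from its collaborator are hypothetical:

import unittest

# The unit under test: computes a total using a tax provider it is given.
def total_with_tax(amount, tax_provider):
    return amount + amount * tax_provider.rate_for("default")

# A hand-written stub standing in for the real tax provider, which may not
# exist yet when the unit is tested in isolation.
class StubTaxProvider:
    def rate_for(self, category):
        return 0.25   # fixed, predictable rate for the test

class TotalWithTaxTests(unittest.TestCase):
    def test_total_includes_tax(self):
        # The test case acts as the driver; the stub isolates the unit.
        self.assertEqual(total_with_tax(100, StubTaxProvider()), 125.0)

if __name__ == "__main__":
    unittest.main()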

Component Testing: A component is an aggregate of one or more units. Component testing expands unit testing to include called components and data types. Component testing is often automated and may require creation of a harness, stubs, or drivers.
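
Continuing the sketch at the component level: the called unit line_total is exercised for real, and only the external shipping service is replaced, here with a mock object from the standard library (all names are illustrative):

import unittest
from unittest import mock

# Hypothetical component: an order calculator built from two real units.
def line_total(price, quantity):
    return price * quantity

def order_total(lines, shipping_service):
    subtotal = sum(line_total(p, q) for p, q in lines)
    return subtotal + shipping_service.cost_for(subtotal)

class OrderComponentTests(unittest.TestCase):
    def test_order_total_with_called_units(self):
        # The called unit (line_total) runs for real; only the external
        # shipping service is stubbed out with a mock.
        shipping = mock.Mock()
        shipping.cost_for.return_value = 5
        self.assertEqual(order_total([(10, 2), (3, 1)], shipping), 28)
        shipping.cost_for.assert_called_once_with(23)

if __name__ == "__main__":
    unittest.main()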

Single Step Testing: Single step testing is performed by stepping through new or modified statements of code with a debugger. Single step testing is normally manual and informal.
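
For example, in Python the built-in pdb debugger can pause at a new or modified statement so it can be stepped through interactively (the function and values below are made-up, and the script is meant to be run from a terminal):

import pdb

def apply_discount(price, percent):
    pdb.set_trace()              # execution pauses here; 'n' steps to the next line
    discount = price * percent / 100
    return price - discount

apply_discount(200, 15)          # run, then step through the new statements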

Bench Testing: Bench testing is functional testing of a component after the system has been built in a local environment. Bench testing is often manual and informal.

Developer Integration Testing: Developer integration testing is functional testing of a component after the component has been released and the system has been deployed in a standard testing environment. Special attention is given to the flow of data between the new component and the rest of the system.

Smoke Testing: Smoke testing determines whether the system is sufficiently stable and functional to warrant the cost of further, more rigorous testing. Smoke testing may also communicate the general disposition of the current code base to the project team. Specific standards for the scope or format of smoke test cases and for their success criteria may vary widely among projects.
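
A smoke suite is typically short and shallow. The sketch below, with stand-in functions for a real application, only checks that the build starts and that its core entry point responds at all:

import unittest

# Hypothetical application functions standing in for a real system's "core" operations.
def start_app():
    return {"status": "running"}

def load_home_page(app):
    return "<html>home</html>" if app["status"] == "running" else None

class SmokeTests(unittest.TestCase):
    """Deliberately shallow: does the build start and answer at all?"""

    def test_application_starts(self):
        self.assertEqual(start_app()["status"], "running")

    def test_home_page_responds(self):
        self.assertIsNotNone(load_home_page(start_app()))

if __name__ == "__main__":
    unittest.main()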

Feature Testing: Feature testing is functional testing directed at a specific feature of the system. The feature is tested for correctness and proper integration into the system. Feature testing occurs after all components of a feature have been completed and released by development.

Integration Testing: Integration testing focuses on verifying the functionality and stability of the overall system when it is integrated with external systems, subsystems, third party components, or other external interfaces.

System Testing: System testing occurs when all necessary components have been released internally and the system has been deployed onto a standard environment. System testing is concerned with the behavior of the whole system. When appropriate, system testing encompasses all external software, hardware, operating environments, etc. that will make up the final system.

Release Testing: Release tests ensure that interim builds can be successfully deployed by the customer. This includes product deployment, installation, and a pass through the primary functionality. This test is done immediately before releasing to the customer.

Beta Testing: Beta testing consists of deploying the system to many external users who have agreed to provide feedback about the system. Beta testing may also provide the opportunity to explore release and deployment issues.

Acceptance Testing: Acceptance testing compares the system to a predefined set of acceptance criteria. If the acceptance criteria are satisfied by the system, the customer will accept delivery of the system.

Regression Testing: Regression testing exercises functionality that has stabilized. Once high confidence has been established for certain parts of the system, it is generally wasted effort to continue rigorous, detailed testing of those parts. However, it is possible that continued evolution of the system will have negative effects on previously stable and reliable parts of the system. Regression testing offers a low-cost method of detecting such side effects. Regression testing is often automated and focused on critical functionality.
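
A regression test often pins a previously fixed defect so that later changes cannot quietly reintroduce it. In this sketch the average function and the referenced defect are hypothetical:

import unittest

# Hypothetical function that once failed on empty input.
def average(values):
    if not values:
        return 0.0          # earlier builds raised ZeroDivisionError here
    return sum(values) / len(values)

class RegressionTests(unittest.TestCase):
    def test_average_of_empty_list(self):
        # Pins the fix for a previously reported defect so the behavior
        # stays correct as the system continues to evolve.
        self.assertEqual(average([]), 0.0)

    def test_average_of_typical_values(self):
        self.assertEqual(average([2, 4, 6]), 4.0)

if __name__ == "__main__":
    unittest.main()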

Performance Testing: Performance testing measures the efficiency with respect to time and hardware resources of the test item under typical usage. This assumes that a set of non-functional requirements regarding performance exist in the item’s specification.
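
A minimal sketch of a timing check against a performance requirement; the operation and the 0.5-second budget are assumptions for illustration:

import time

# Hypothetical operation whose response time is covered by a requirement.
def operation_under_test():
    return sorted(range(100000), reverse=True)

start = time.perf_counter()
operation_under_test()
elapsed = time.perf_counter() - start

BUDGET_SECONDS = 0.5   # assumed non-functional requirement
print(f"elapsed: {elapsed:.4f}s (budget {BUDGET_SECONDS}s)")
assert elapsed <= BUDGET_SECONDS, "performance requirement not met"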

Stress Testing: Stress testing evaluates the performance of the test item during extreme usage patterns. Typical examples of “extreme usage patterns” are large data sets, complex calculations, extended operation, limited system resources, etc.

Configuration Testing: Configuration testing evaluates the performance of the test item under a range of system configurations. Relevant configuration issues depend upon the particular product and may include peripherals, network patterns, operating systems, hardware devices and drivers, user settings.
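
One common pattern is to run the same checks across a list of configurations. The sketch below uses unittest subtests, with a hypothetical render_page function and made-up configuration values:

import unittest

# Hypothetical function whose behavior depends on a configuration dictionary.
def render_page(config):
    width = 320 if config["device"] == "mobile" else 1024
    return {"width": width, "lang": config["locale"]}

# The same checks are repeated across a range of configurations.
CONFIGURATIONS = [
    {"device": "mobile", "locale": "en"},
    {"device": "desktop", "locale": "en"},
    {"device": "desktop", "locale": "de"},
]

class ConfigurationTests(unittest.TestCase):
    def test_render_page_across_configurations(self):
        for config in CONFIGURATIONS:
            with self.subTest(config=config):
                page = render_page(config)
                self.assertGreater(page["width"], 0)
                self.assertEqual(page["lang"], config["locale"])

if __name__ == "__main__":
    unittest.main()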
