Monday, August 27, 2012

V Model to W Model | W Model in SDLC Simplified

We have already discussed that the V-model is the basis of structured testing. However, there are a few problems with the V-model. It represents a one-to-one relationship between the documents on the left-hand side and the test activities on the right, which is not always correct. System testing depends not only on the functional requirements but also on the technical design and architecture. Several testing activities are not explained in the V-model at all. This is a major limitation: the V-model does not support the broader view of testing as a continuous activity throughout the software development life cycle.
Paul Herzlich introduced the W-model, which covers the testing activities that the V-model skips.
The W-model illustrates that testing starts from day one of project initiation.
If you look at the picture below, the first "V" shows all the phases of the SDLC and the second "V" validates each phase. In the first "V", every activity is shadowed by a test activity whose purpose is to determine whether the objectives of that activity have been met and the deliverable meets its requirements. The W-model presents a standard development life cycle with every development stage mirrored by a test activity. On the left-hand side, the deliverable of a development activity (for example, written requirements) is accompanied by a corresponding test activity ("test the requirements"), and so on.
Fig 1: W Model
Fig 2: Each phase is verified/validated. The dotted arrows show that every phase in brown is validated/tested by the corresponding phase in sky blue.
Now, in the above figure,
  • Point 1 refers to - Build Test Plan & Test Strategy.
  • Point 2 refers to - Scenario Identification.
  • Points 3 and 4 refer to – test case preparation from the specification document and the design documents.
  • Point 5 refers to – review of test cases and updates as per the review comments.
So the above five points cover static testing.
  • Point 6 refers to – various testing methodologies (e.g. unit/integration testing, path testing, equivalence partitioning, boundary value analysis, specification-based testing, security testing, usability testing, performance testing).
  • After this, there are regression test cycles and then user acceptance testing.
Conclusion - the V-model only shows dynamic test cycles, but the W-model gives a broader view of testing. The connection between the various test stages and the basis for each test is clear in the W-model (which is not the case in the V-model).
You can find more comparisons of the W-model with other SDLC models here.

Fault, Error & Failure

Fault : A condition that causes the software to fail to perform its required function.
Error : The difference between the actual output and the expected output.
Failure : The inability of a system or component to perform a required function according to its specification.
IEEE Definitions
  • Failure: External behavior is incorrect
  • Fault: Discrepancy in code that causes a failure.
  • Error: Human mistake that caused the fault
Note:
  • Error is the developer's terminology.
  • Bug is the tester's terminology. (A short code sketch below illustrates fault, error, and failure.)
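To make the distinction concrete, here is a minimal sketch (a purely hypothetical example): the developer's mistake (error in the IEEE sense) introduces a fault in the code, executing the code produces a failure, and the deviation between actual and expected output is the error in the output sense.

# Hypothetical example illustrating fault, error, and failure.

def average(numbers):
    # FAULT: a defect in the code caused by the developer's mistake --
    # the divisor should be len(numbers), not len(numbers) + 1.
    return sum(numbers) / (len(numbers) + 1)

expected = 2.0               # expected output for [1, 2, 3]
actual = average([1, 2, 3])  # FAILURE: the observed behavior is incorrect (1.5)

# ERROR (output sense): the difference between actual and expected output.
print("deviation:", actual - expected)   # prints -0.5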

Functional Testing Vs Non-Functional Testing

Functional Testing: Testing the application against business requirements. Functional testing is done using the functional specifications provided by the client or by using the design specifications like use cases provided by the design team.


Functional Testing covers:
  • Unit Testing
  • Smoke testing / Sanity testing
  • Integration Testing (Top-Down, Bottom-Up Testing)
  • Interface & Usability Testing
  • System Testing
  • Regression Testing
  • Pre User Acceptance Testing (Alpha & Beta)
  • User Acceptance Testing
  • White Box & Black Box Testing
  • Globalization & Localization Testing
Non-Functional Testing: Testing the application against the client's quality and performance requirements. Non-functional testing is done based on the requirements and test scenarios defined by the client.
Non-Functional Testing covers:
  • Load and Performance Testing
  • Ergonomics Testing
  • Stress & Volume Testing
  • Compatibility & Migration Testing
  • Data Conversion Testing
  • Security / Penetration Testing
  • Operational Readiness Testing
  • Installation Testing
  • Security Testing (Application Security, Network Security, System Security)

Software Testing

Software Testing is the process of evaluating the features of software and finding the differences between its existing and expected behavior. In today's scenario, the following are some of the major problems:


  1. Some of the newer development methodologies have been developed through trial and error. Generally, these methods don't produce specifications for the tester to test against, so testers have to find bugs by trial and error.
  2. In today's tough market competition, software development is driven by entrepreneurial pressure, tight schedules, and constantly evolving product definitions. For these reasons, it is sometimes difficult to convince management that testing is necessary or worthwhile.
  3. There are only a few trained testers using formal methods and metrics. Many software testers are just passing through testing on their way to some other career. So, overall, the testing effort is not producing the high-quality results that could demonstrate how testing improves the quality of the product.
  4. In the past few years, the quality standards of software development have improved drastically. This has had a profound effect on the quality of the final product. It has also reduced the need for extensive low-level testing in these areas, so the demand for white-box testers has decreased.
Also, please remember that “A test effort that just finds bugs is not enough.” As a software tester, you must be able to demonstrate that your effort is adding value to the quality of the software. Measure that value in order to demonstrate the value added.

Management will also take an interest in knowing which parts of the software development life cycle contribute to achieving product quality. So, add value wherever you can.

Software Testing Techniques and Levels

In this post, I'm going to describe techniques and strategies for software testing. Techniques cover different ways testing can be accomplished. Testing techniques can be classified in three ways: by preparation, by execution, and by approach.

Preparation: From preparation point of view there are two testing techniques: Formal Testing and Informal Testing.

Formal Testing: Testing performed with a plan, a documented set of test cases, and other artifacts that outline the methodology and test objectives. Test documentation can be developed from requirements, design, equivalence partitioning, domain coverage, error guessing, etc. The level of formality and thoroughness of test cases will depend upon the needs of the project. Some projects can have rather informal ‘formal test cases’, while others will require a highly refined test process. Some projects will require light testing of nominal paths while others will need rigorous testing of exceptional cases.

Informal Testing: Ad hoc testing performed without a documented set of objectives or plans. Informal testing relies on the intuition and skills of the individual performing the testing. Experienced engineers can be productive in this mode by mentally performing test cases for the scenarios being exercised.

From the execution point of view, the two testing types are Manual Testing and Automated Testing.

Manual Testing: Manual testing involves direct human interaction to exercise software functionality and note behavior and deviations from expected behavior.

Automated Testing: Testing that relies on a tool, built-in test harness, test framework, or other automatic mechanism to exercise software functionality, record output, and possibly detect deviations. The test cases performed by automated testing are usually defined as software code or script that drives the automatic execution.
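As a minimal sketch (the add() function and the test are hypothetical), an automated test case expressed as code might look like this, using Python's unittest framework:

import unittest

def add(a, b):
    # unit under test (hypothetical)
    return a + b

class TestAdd(unittest.TestCase):
    def test_add_two_numbers(self):
        # the test case drives execution and checks the recorded output
        self.assertEqual(add(2, 3), 5)

if __name__ == "__main__":
    unittest.main()   # the framework executes the tests and reports deviations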

From the testing approach point of view, the two testing types are Structural Testing and Functional Testing.

Structural Testing: Structural testing depends upon knowledge of the internal structure of the software. Structural testing is also referred to as white-box testing.

Data-flow Coverage: Data-flow coverage tests paths from the definition of a variable to its use.



Control-flow Coverage


Statement Coverage: Statement coverage requires that every statement in the code under test has been executed.

Branch Coverage: Branch coverage requires that every point of entry and exit in the program has been executed at least once, and every decision in the program has taken all possible outcomes at least once.

Condition Coverage: Condition coverage is branch coverage with the additional requirement that “every condition in a decision in the program has taken all possible outcomes at least once.” Multiple condition coverage requires that all possible combinations of the possible outcomes of each condition have been tested. Modified condition coverage requires that each condition has been tested independently.
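As an illustration (a hypothetical function, not taken from any specification), consider one decision made of two conditions; the comments sketch what each criterion requires:

def discount(is_member, total):
    # one decision containing two conditions
    if is_member and total > 100:
        return total * 0.9
    return total

# Statement coverage: every statement runs at least once,
#   e.g. discount(True, 200) and discount(False, 50).
# Branch coverage: the decision takes both outcomes at least once,
#   e.g. discount(True, 200) -> true branch, discount(False, 50) -> false branch.
# Condition coverage: each condition (is_member, total > 100) takes both True
#   and False at least once; because of short-circuiting this needs e.g.
#   discount(True, 200), discount(True, 50) and discount(False, 50).
# Multiple condition coverage: all four combinations of the two condition outcomes.
assert discount(True, 200) == 180.0
assert discount(False, 50) == 50
assert discount(True, 50) == 50
assert discount(False, 200) == 200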

Functional Testing: Functional testing compares the behavior of the test item to its specification without knowledge of the item’s internal structure. Functional testing is also referred to as black box testing.

Requirements Coverage: Requirements coverage requires at least one test case for each specified requirement. A traceability matrix can be used to ensure that requirements coverage has been satisfied.
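A traceability matrix can be as simple as a mapping from requirement IDs to test case IDs; a minimal sketch (with hypothetical IDs) follows:

# Hypothetical traceability matrix: requirement ID -> covering test case IDs.
traceability = {
    "REQ-001": ["TC-001", "TC-002"],
    "REQ-002": ["TC-003"],
    "REQ-003": [],                     # no test case yet -- a coverage gap
}

uncovered = [req for req, tests in traceability.items() if not tests]
print("Requirements without a test case:", uncovered)   # ['REQ-003']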

Input Domain Coverage: Input domain coverage executes a function with a sufficient set of input values from the function’s input domain. The notion of a sufficient set is not completely definable, and complete coverage of the input domain is typically impossible. Therefore the input domain is broken into subsets, or equivalence classes, such that all values within a subset are likely to reveal the same defects. Any one value within an equivalence class can be used to represent the whole equivalence class. In addition to a generic representative, each extreme value within an equivalence class should be covered by a test case. Testing the extreme values of the equivalence classes is referred to as boundary value testing.
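For example (a hypothetical validation function, not taken from the text), equivalence classes and boundary values might be chosen like this:

def is_valid_age(age):
    # hypothetical function under test: accepts ages from 18 to 60 inclusive
    return 18 <= age <= 60

# Equivalence classes: below range, within range, above range.
# One representative value per class stands in for the whole class.
representatives = {10: False, 35: True, 70: False}

# Boundary values: the extremes of each class (17, 18, 60, 61).
boundaries = {17: False, 18: True, 60: True, 61: False}

for value, expected in {**representatives, **boundaries}.items():
    assert is_valid_age(value) == expected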

Output Domain Coverage: Output domain coverage executes a function in such a way that a sufficient set of output values from the function’s output domain is produced. Equivalence classes and boundary values are used to provide coverage of the output domain. A set of test cases that “reach” the boundary values and a typical value for each equivalence class is considered to have achieved output domain coverage.

Various Software Testing Levels

Although many testing levels tend to be combined with certain techniques, there are no hard and fast rules. Some types of testing imply certain lifecycle stages, software deliverables, or other project context. Other types of testing are general enough to be done almost any time on any part of the system. Some require a particular methodology. Where appropriate, common uses of a particular testing type are described. The project’s test plan will normally define the types of testing that will be used on the project, when they will be used, and the strategies they will be used with. Test cases are then created for each testing type.

Unit Testing: A unit is an abstract term for the smallest thing that can be conveniently tested. This will vary based on the nature of a project and its technology but usually focuses on the subroutine level. Unit testing is the testing of these units. Unit testing is often automated and may require the creation of a harness, stubs, or drivers.
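As a minimal sketch (the function names are hypothetical), a unit test might replace a called dependency with a stub so the unit can be exercised in isolation:

import unittest
from unittest.mock import patch

def get_exchange_rate(currency):
    # in a real system this would call an external service
    raise NotImplementedError

def convert(amount, currency):
    # unit under test (hypothetical)
    return amount * get_exchange_rate(currency)

class TestConvert(unittest.TestCase):
    # the stub replaces the external dependency with a fixed rate
    @patch(__name__ + ".get_exchange_rate", return_value=2.0)
    def test_convert_uses_rate(self, _stub):
        self.assertEqual(convert(10, "EUR"), 20.0)

if __name__ == "__main__":
    unittest.main()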

Component Testing: A component is an aggregate of one or more units. Component testing expands unit testing to include called components and data types. Component testing is often automated and may require creation of a harness, stubs, or drivers.

Single Step Testing: Single step testing is performed by stepping through new or modified statements of code with a debugger. Single step testing is normally manual and informal.

Bench Testing: Bench testing is functional testing of a component after the system has been built in a local environment. Bench testing is often manual and informal.

Developer Integration Testing: Developer integration testing is functional testing of a component after the component has been released and the system has been deployed in a standard testing environment. Special attention is given to the flow of data between the new component and the rest of the system.

Smoke Testing: Smoke testing determines whether the system is sufficiently stable and functional to warrant the cost of further, more rigorous testing. Smoke testing may also communicate the general disposition of the current code base to the project team. Specific standards for the scope or format of smoke test cases and for their success criteria may vary widely among projects.
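A smoke test is often just a handful of quick checks against a freshly deployed build; a minimal sketch (the URLs are hypothetical) might look like this:

import urllib.request

# Hypothetical endpoints that must respond before deeper testing is worthwhile.
SMOKE_CHECKS = [
    "http://localhost:8080/health",
    "http://localhost:8080/login",
]

def smoke_test():
    for url in SMOKE_CHECKS:
        status = urllib.request.urlopen(url, timeout=5).status
        assert status == 200, f"smoke check failed for {url} (status {status})"

if __name__ == "__main__":
    smoke_test()
    print("Build is stable enough for further testing")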

Feature Testing: Feature testing is functional testing directed at a specific feature of the system. The feature is tested for correctness and proper integration into the system. Feature testing occurs after all components of a feature have been completed and released by development.

Integration Testing: Integration testing focuses on verifying the functionality and stability of the overall system when it is integrated with external systems, subsystems, third party components, or other external interfaces.

System Testing: System testing occurs when all necessary components have been released internally and the system has been deployed onto a standard environment. System testing is concerned with the behavior of the whole system. When appropriate, system testing encompasses all external software, hardware, operating environments, etc. that will make up the final system.

Release Testing: Release tests ensure that interim builds can be successfully deployed by the customer. This includes product deployment, installation, and a pass through the primary functionality. This test is done immediately before releasing to the customer.

Beta Testing: Beta testing consists of deploying the system to many external users who have agreed to provide feedback about the system. Beta testing may also provide the opportunity to explore release and deployment issues.

Acceptance Testing: Acceptance testing compares the system to a predefined set of acceptance criteria. If the acceptance criteria are satisfied by the system, the customer will accept delivery of the system.

Regression Testing: Exercises functionality that has stabilized. Once high confidence has been established for certain parts of the system, it is generally wasted effort to continue rigorous, detailed testing of those parts. However, it is possible that continued evolution of the system will have negative effects on previously stable and reliable parts of the system. Regression testing offers a low-cost method of detecting such side effects. Regression testing is often automated and focused on critical functionality.
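A minimal sketch of such an automated regression suite (the test classes are hypothetical placeholders) could look like this:

import unittest

class TestLogin(unittest.TestCase):
    def test_valid_login(self):
        # placeholder for a previously passing check of stabilized functionality
        self.assertTrue(True)

class TestCheckout(unittest.TestCase):
    def test_checkout_total(self):
        # placeholder for a previously passing check of stabilized functionality
        self.assertEqual(2 + 2, 4)

def regression_suite():
    # focus on critical functionality; re-run after every change to catch side effects
    loader = unittest.defaultTestLoader
    suite = unittest.TestSuite()
    suite.addTests(loader.loadTestsFromTestCase(TestLogin))
    suite.addTests(loader.loadTestsFromTestCase(TestCheckout))
    return suite

if __name__ == "__main__":
    unittest.TextTestRunner(verbosity=2).run(regression_suite())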

Performance Testing: Performance testing measures the efficiency with respect to time and hardware resources of the test item under typical usage. This assumes that a set of non-functional requirements regarding performance exist in the item’s specification.
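For example, such a requirement can be checked directly by an automated test (a minimal sketch; the 0.5-second threshold and the function are hypothetical):

import time

def generate_report():
    # stand-in for the operation whose performance is specified
    time.sleep(0.1)

def test_report_completes_within_threshold():
    start = time.perf_counter()
    generate_report()
    elapsed = time.perf_counter() - start
    assert elapsed < 0.5, f"report took {elapsed:.3f}s, requirement is < 0.5s"

if __name__ == "__main__":
    test_report_completes_within_threshold()
    print("performance requirement met")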

Stress Testing: Stress testing evaluates the performance of the test item during extreme usage patterns. Typical examples of “extreme usage patterns” are large data sets, complex calculations, extended operation, limited system resources, etc.

Configuration Testing: Configuration testing evaluates the performance of the test item under a range of system configurations. Relevant configuration issues depend upon the particular product and may include peripherals, network patterns, operating systems, hardware devices and drivers, user settings.

Challenges in Software Testing

All software engineering areas face a lot of challenges during execution. So, as a tester, never be surprised when you face challenges in software testing. But it is also a hard fact for the tester community that most companies are not testing-oriented. Management always gives good appraisals to development teams; sometimes, appreciation goes to the development teams only.
A few points that I want to highlight to management, project managers, and development teams:
  1. If development teams are so good and intelligent, then why do you need testers to test the application?
  2. Remember - by testing a build / release, testers evaluate the quality of the work done by developers.
  3. During every regression test cycle, a lot of bugs get re-opened (which says something about the quality of the work done by developers).
  4. Sometimes, a customer release requires 5-7 regression test cycles. Think about why.
  5. During these 5 to 7 regression cycles, developers make many mistakes, and testers help prevent defect leakage to the customer. But in the end, if even 2-3 minor bugs reach the customer, everybody from top to bottom blames the testers. Just imagine: for a single mistake, testers are grabbed by the neck. Are testers expected never to make even a single mistake, while developers can repeat the same mistakes multiple times?
  6. Most of the time, developers eat into the testers' time.
  7. I have never understood why management always keeps a distinction between developers and testers. You might have an answer to this question, but I don't.

How to do System Testing

Testing the software system or software application as a whole is referred to as system testing of the software. System testing is done on the complete application to evaluate the software's overall compliance with the business / functional / end-user requirements. System testing comes under black-box software testing, so knowledge of the internal design, structure, or code is not required for this type of testing.

In system testing, a software test professional aims to detect defects or bugs both within the interfaces and within the software as a whole. During integration testing of the application, by contrast, the software test professional aims to detect the bugs / defects between the individual units that are integrated together.

During system testing, the focus is on the software design, behavior, and even the assumed expectations of the customer. So we can also refer to the system testing phase of software testing as the investigatory testing phase of the software development life cycle.

At what stage of the SDLC does system testing come into the picture:

After the integration of all components of the software being developed, the whole software system is rigorously tested to ensure that it meets the specified business, functional and non-functional requirements. System testing builds on the unit testing and integration testing levels. Generally, a separate and dedicated team is responsible for system testing, and system testing is performed on a staging server.

Why system testing is required:

  • It is the first level of software testing where the software / application is tested as a whole.
  • It is done to verify and validate the technical, business, functional and non-functional requirements of the software. It also includes the verification & validation of software application architecture.
  • System testing is done on a staging environment that closely resembles the production environment where the final software will be deployed.
Entry Criteria for System Testing:
  • Unit Testing must be completed
  • Integration Testing must be completed
  • Complete software system should be developed
  • A software testing environment that closely resembles the production environment must be available (a staging environment).
System Testing in seven steps:
  1. Creation of System Test Plan
  2. Creation of system test cases
  3. Selection / creation of test data for system testing
  4. Automation of test case execution (if required)
  5. Execution of test cases
  6. Bug fixing and regression testing
  7. Repeat the software test cycle (if required on multiple environments)  
Contents of a system test plan: The contents of a software system test plan may vary from organization to organization or project to project. It depends on how we have created the software test strategy, project plan and master test plan of the project. However, the basic contents of a software system test plan should be:

- Scope
- Goals & Objective
- Area of focus (Critical areas)
- Deliverables
- System testing strategy
- Schedule
- Entry and exit criteria
- Suspension & resumption criteria for software testing
- Test Environment
- Assumptions
- Staffing and Training Plan
- Roles and Responsibilities
- Glossary

How to write system test cases: System test cases are written in a similar way to functional test cases. However, while creating system test cases, the following two points need to be kept in mind:

- System test cases must cover the use cases and scenarios
- They must validate all types of requirements - technical, UI, functional, non-functional, performance, etc.

As per Wikipedia, there are a total of 24 types of testing that need to be considered during system testing. These are:

GUI software testing, Usability testing, Performance testing, Compatibility testing, Error handling testing, Load testing, Volume testing, Stress testing, User help testing, Security testing, Scalability testing, Capacity testing, Sanity testing, Smoke testing, Exploratory testing, Ad hoc testing, Regression testing, Reliability testing, Recovery testing, Installation testing, Idempotency testing, Maintenance testing, Recovery testing and failover testing, Accessibility testing

The format of a system test case contains the following fields (a minimal example record is sketched after the list):
  • Test Case ID - a unique number
  • Test Suite Name
  • Tester - name of the tester who writes or executes the test case
  • Requirement - Requirement Id or brief description of the functionality / requirement
  • How to Test - Steps to follow for execution of the test case
  • Test Data - Input Data
  • Expected Result
  • Actual Result
  • Pass / Fail
  • Test Iteration
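As a minimal sketch, a test case in this format could be recorded as a simple structure (all field values below are hypothetical):

# Hypothetical system test case record following the format above.
system_test_case = {
    "test_case_id": "TC-SYS-001",
    "test_suite_name": "Checkout",
    "tester": "A. Tester",
    "requirement": "REQ-042: user can pay by credit card",
    "how_to_test": ["Add an item to the cart", "Proceed to checkout", "Pay by card"],
    "test_data": {"card_number": "4111111111111111", "amount": 25.00},
    "expected_result": "Order is confirmed and a receipt is displayed",
    "actual_result": None,   # filled in during execution
    "pass_fail": None,       # filled in during execution
    "test_iteration": 1,
}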

Difference between Smoke & Sanity Software Testing

Smoke Testing: Software testing done to determine whether the build can be accepted for thorough software testing or not. Basically, it is done to check the stability of the build received for software testing.

Sanity testing: After receiving a build with minor changes in the code or functionality, a subset of regression test cases is executed to check whether the changes rectified the reported bugs or issues and whether no other bugs were introduced. Sometimes, when multiple cycles of regression testing are executed, sanity testing can be done in later cycles, after thorough regression test cycles. If we are moving a build from the staging / testing server to the production server, sanity testing of the software application can be done to check whether the build is sane enough to move further to the production server.

Difference between Smoke & Sanity Software Testing:

  • Smoke testing is a wide approach where all areas of the software application are tested without going too deep. However, sanity testing is a narrow regression test with a focus on one or a small set of areas of functionality of the software application.
  • The test cases for smoke testing of the software can be either manual or automated. However, a sanity test is generally performed without test scripts or test cases.
  • Smoke testing is done to check whether the main functions of the software application are working or not; during smoke testing, we do not go into finer details. However, sanity testing is a cursory type of software testing. It is done whenever a quick round of testing can show that the software application is functioning according to business / functional requirements.
  • Smoke testing of the software application is done to check whether the build can be accepted for thorough software testing. Sanity testing of the software is done to check whether the requirements are met or not.