Flights Reservation 1.0
Project Document
Master Test Plan
Author: Web Group Test Manager
Creation Date: August 17, 1999
Last Updated:
Version: 1.0
Approvals:
______
Quality Control Date
Test Manager
______
WinRunner Core Date
R&D Manager
Change Record
Date / Author / Version / Change Reference
17-Aug-99 / Web Group Test Manager / 1.0 / No previous document
Introduction
Disclaimer
This document does not enforce a testing methodology or recommend specific hardware or software. It is an example of a typical master test plan implementation for a Web application testing project. The example in this document is based on the experience of the Mercury Interactive Quality Control team and Mercury Interactive's large customer base. The purpose of this example is to help new Web testing teams quickly jump into the testing process.
System Overview
“Flights” is a sample application used to demonstrate the capabilities of Mercury Interactive's testing tools. It provides a simple model of a flights reservation system. The application is used for pre-sales demonstrations, tutorials and training programs. It should be installed on the Web server. All data processed by the application is maintained in ASCII files in the Web server file system.
Flights is installed on the corporate Web server and is accessible via the Internet to the application engineers. Note that Flights periodically undergoes changes in order to demonstrate new capabilities of Mercury Interactive's testing tools.
Purpose
This document describes a framework for testing the Flights application. It defines the stages of the testing process and a schedule, and describes the methodology and techniques used to develop and execute tests. The Master Test Plan will also be used as a baseline for verifying and auditing the testing process.
Tested Features
The testing process will cover the following functional areas:
- Installation
- User interface
- Order management
The following operational aspects will be addressed:
- Simultaneous users support
- Security
- Performance
Features Not Tested
Recovery will not be covered in the testing process; this feature is skipped because of the intended scope of use of the application.
Testing Strategy and Approach
Levels
The first step in testing the Flights application is to consider the functionality available to the user, the application's accessibility via the Web, and its presentation in the Web browser.
The functionality of the Flights application will be tested at the subsystem (features) and system levels. Testing at the features level should be extensive because of the frequent low-level changes caused by Web technology updates reflected in the application.
The purpose of the system test described in this document is to ensure that the application fits the WinRunner and LoadRunner tutorial and training materials, i.e. that the application flows described in these materials are highly reliable.
Web aspects of the Flights application should be covered by operational testing and Web-specific tests.
Testing Types
A detailed test plan at subsystem and system levels will be developed using functional testing techniques.
1. Specification-based testing
Subsystem testing starts with the features evaluation. Its purpose is to explore the system in order to learn its functions via the user interface. The features evaluation enables a sanity check of the conceptual design of the system. It also substitutes for formal usability testing, allowing a test engineer to provide feedback on usability issues.
The major approach for subsystem testing is specification-based testing. The specification should be decomposed to the function level, the domains of the function parameter values should be identified, and the environmental dependencies of the application should be analyzed. Decomposition and analysis should take into account the experience gathered in the features evaluation stage. Test requirements are derived from the functional decomposition and environmental analysis, and test cases are designed to satisfy the test requirements. The general rule for test case design is to combine 2-3 functions and 6-7 requirements per test case (see the sketch below). Test data and test procedures are developed according to the test design.
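The combination rule above can be illustrated with a short sketch. This is not part of the plan's tooling; the function names and requirement identifiers are hypothetical, and the grouping logic is only one simple way to enumerate candidate test cases.

```python
from itertools import combinations

# Hypothetical decomposition: these function names and requirement IDs are
# illustrative only, not taken from the Flights specification.
requirements = {
    "open_order": ["R1", "R2"],
    "update_order": ["R3", "R4", "R5"],
    "delete_order": ["R6"],
    "fax_order": ["R7", "R8"],
    "search_flights": ["R9", "R10", "R11"],
}

# Rule of thumb from the plan: a test case combines 2-3 functions and
# covers roughly 6-7 requirements.
candidates = []
for size in (2, 3):
    for group in combinations(requirements, size):
        covered = [r for f in group for r in requirements[f]]
        if 6 <= len(covered) <= 7:
            candidates.append((group, covered))

for functions, covered in candidates:
    print(functions, "->", covered)
```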
The subsystem tests should be conducted for the following initial conditions:
- Initial use of the application, when the orders database is still empty.
- Use after a certain volume of order data has accumulated.
2. User scenarios testing
System testing will be based on the user scenarios technique. User scenarios will be derived from the WinRunner and LoadRunner tutorial and training materials. Test data for the user scenarios should be selected according to the user profiles; the major profiles are a computer specialist and a business user.
3. Operational testing
Operational testing should address two areas: performance and security.
3.1. Performance testing
The current assumption is that a maximum of 20 field application engineers may access the Flights application concurrently for demos. The purpose of the application and site performance test is to ensure that all application responses remain within 3 seconds when the demo scenarios are executed concurrently.
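As a rough illustration of this target (independent of the LoadRunner scenarios that will actually be used), the following sketch issues 20 concurrent requests and checks the slowest response against the 3-second limit. The URL is a placeholder, not the real Flights address.

```python
import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

FLIGHTS_URL = "http://webserver.example.com/flights/"  # placeholder, not the real address
CONCURRENT_USERS = 20          # assumption from the plan: 20 field application engineers
RESPONSE_LIMIT_SECONDS = 3.0   # response-time target from the plan

def timed_request(_):
    # Measure one user's response time for the demo entry page.
    start = time.monotonic()
    with urlopen(FLIGHTS_URL, timeout=10) as response:
        response.read()
    return time.monotonic() - start

with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
    durations = list(pool.map(timed_request, range(CONCURRENT_USERS)))

slowest = max(durations)
print(f"slowest response: {slowest:.2f}s")
assert slowest <= RESPONSE_LIMIT_SECONDS, "3-second response target exceeded"
```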
3.2. Security testing
Security tests should ensure that access is limited to the Flights application only and that the corporate Web site and other corporate server resources cannot be reached through it. The test environment should precisely emulate the configuration of the production Web server, including a firewall configured according to the specification defined in the Security Requirements document that is part of the development documentation.
An IP spoofing technique should be used to validate the firewall configuration.
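A complementary, much simpler check (not a substitute for firewall validation or the IP spoofing tests) is to verify from outside that only the application path responds while other corporate paths are refused. The host and paths below are hypothetical.

```python
from urllib.error import HTTPError, URLError
from urllib.request import urlopen

HOST = "http://webserver.example.com"          # placeholder host
ALLOWED = ["/flights/"]                        # only the application should answer
RESTRICTED = ["/", "/intranet/", "/admin/"]    # hypothetical corporate resources

def is_reachable(path):
    # A path counts as reachable only if it returns HTTP 200.
    try:
        with urlopen(HOST + path, timeout=10) as response:
            return response.status == 200
    except (HTTPError, URLError):
        return False

for path in ALLOWED:
    assert is_reachable(path), f"application path {path} should be reachable"
for path in RESTRICTED:
    assert not is_reachable(path), f"restricted path {path} must not be reachable"
```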
4. Browser Matrix
Browser-dependent tests will be implemented by distributing selected subsystem tests across the browser matrix. These tests will be selected to cover navigation through application pages and activation of the page controls. The tests should be repeated for Microsoft Internet Explorer and Netscape Navigator, and different versions of these browsers should be covered. The versions should be selected according to their popularity.
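One way to picture the resulting browser matrix is as the cross product of the selected tests with the covered browsers and versions. The versions and test names in this sketch are hypothetical placeholders.

```python
from itertools import product

# Hypothetical versions and test names; the actual versions should be chosen
# by popularity, as stated above.
browsers = {
    "Microsoft Internet Explorer": ["4.0", "5.0"],
    "Netscape Navigator": ["4.5", "4.7"],
}
selected_tests = ["page_navigation", "page_controls_activation"]

browser_matrix = [
    (browser, version, test)
    for browser, versions in browsers.items()
    for version, test in product(versions, selected_tests)
]

for cell in browser_matrix:
    print(cell)
```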
5. Regression Testing
The purpose of regression testing is to ensure that the functionality of the tested application is not broken due to changes in the application. Regression tests re-use functional subsystem and system tests. Functional test results will be captured and reviewed. The results certified as correct will be kept in the regression test expected results baseline. Regression tests will also be based on the browser matrix distribution.
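The baseline comparison can be sketched as follows; the file names and result format are hypothetical, since the actual results will live in the TestDirector/WinRunner projects.

```python
import json

def load(path):
    with open(path) as f:
        return json.load(f)

# Hypothetical files: the certified baseline and the results captured in this
# cycle, both mapping test name -> result ("Passed"/"Failed" or captured output).
baseline = load("expected_results_baseline.json")
actual = load("current_cycle_results.json")

regressions = {
    test: (expected, actual.get(test))
    for test, expected in baseline.items()
    if actual.get(test) != expected
}
print(f"{len(regressions)} regression(s) detected")
for test, (expected, got) in regressions.items():
    print(f"  {test}: expected {expected!r}, got {got!r}")
```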
6. Production Roll-out
The final test ensures that the application is installed properly on the production Web server and is accessible via the Internet.
Testing Tools
The following testing tools can be used to manage and automate the testing process for the Flights application:
- TestDirector
Flights-specific tests will be kept as part of the existing WinRunner test project, except for performance tests, which will be part of the existing LoadRunner test project. Results of the test execution and discovered defects will also be kept in these projects.
The TestDirector Web template database will be used as a source for the detailed test plan development.
- WinRunner with WebTest Add-in
WinRunner can be used to automate system tests, browser matrix and regression tests. These test types have a high number of common scenarios so high reusability can be achieved.
The testing of the Flights order handling mechanism should be done using the WinRunner Data Driver so that multiple data sets can be entered by a single scenario (a sketch of this data-driven approach follows below).
If application stability is low due to frequent changes, test automation should be shifted to the functional tests.
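The data-driven idea mentioned above (one scenario, many data sets) can be sketched as follows; the data file, column names, and order-entry step are hypothetical and do not represent the actual WinRunner Data Driver API.

```python
import csv

def enter_order(customer, flight, tickets):
    # Placeholder for the scripted UI steps that create one order in Flights.
    print(f"order: {customer}, flight {flight}, {tickets} ticket(s)")

# Hypothetical data table; the same scenario is replayed once per data row.
with open("orders.csv", newline="") as table:
    for row in csv.DictReader(table):
        enter_order(row["customer"], row["flight"], int(row["tickets"]))
```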
- LoadRunner (virtual users will be generated using Web protocol).
LoadRunner should be used for performance testing of the central Web site running the Flights application.
A detailed plan for implementing the testing tools within the testing process framework will be provided in a separate Tool Implementation Plan document.
Test Case Success/Fail Criteria
A test case passes if all its steps are executed in all selected environments and there is no difference between the expected results and the actual application behavior and output. Expected results means either an explicit definition of the expected behavior and output of the application, or the conclusion of the test engineer that the actual behavior and output are acceptable according to his/her experience as a domain expert.
A test case fails if at least one executed step fails. A step fails if there is a difference between the expected results and the actual application behavior and output; the difference can be explicit or implicit, as described above. The test fails even if the difference is caused by a known defect already registered in the company’s defect tracking system.
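The pass/fail rule can be expressed compactly; this sketch simply encodes the criteria stated above, with hypothetical step names.

```python
def test_case_result(step_results):
    # A test case passes only if every executed step passed in every selected
    # environment; a known, already registered defect does not change a failure.
    return "Passed" if all(step["passed"] for step in step_results) else "Failed"

# Hypothetical step results for one test case.
steps = [
    {"step": "open order", "environment": "IE 5.0", "passed": True},
    {"step": "update order", "environment": "IE 5.0", "passed": False},
]
print(test_case_result(steps))  # -> Failed
```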
Testing process
The testing process summary is shown in the diagram below:
Subsystem tests include design, automation and execution stages that partially overlap on the time axis. Feature testing can combine functional testing with aspects specific to Web applications, such as Web page quality. When initial stability of the application functionality is achieved, browser-dependent behavior of the application can be explored.
System tests include stages similar to those of the subsystem testing phase. Design of the system tests starts when a certain stability is achieved during execution of the subsystem tests, and the first system test cycle starts concurrently with the last subsystem test cycles. System tests should combine user scenario planning and execution with operational testing. Operational testing should be done in an environment close to production; since the production environment will be built close to release, operational testing can use this environment if it is moved closer to release.
The testing process is completed by the release procedure. The exact entrance/exit criteria for every stage are described below. All communication issues between departments during entrance/exit of the corresponding stages are described in the Communication Procedures and Vehicles document.
Entrance/Exit Criteria
- Subsystem testing
- Features evaluation.
The features evaluation process starts as soon as the feature is integrated with the Flights application and is released to the Quality Control team. The feature acceptance condition is a sanity test conducted by the developer. The sanity test should confirm that a user is able to activate the feature for at least one hour while no defects that limit access to the functionality are observed (e.g. crashes, assertions).
The features evaluation process ends when a test engineer has evaluated every feature function and confirmed the conceptual validity of the feature design. The process is exited when a final list of corrective actions is defined and approved by the Quality Control team leader and the developer responsible for the feature.
- Test requirements development
Test requirements development starts when the features evaluation process confirms the conceptual validity of the feature design. The scope of the changes should be defined and approved. Afterwards, development of the requirements can start for the stable functions. Test requirements can also be developed as a means of the feature design validation.
Test requirements development ends when all functions and parameters of the feature selected for testing (see Tested Features paragraph) are covered by the requirements. The process ends when requirements are reviewed and approved by the Quality Control team leader.
- Test case design
Test case design starts when requirements for a feature are approved. Another condition is the stability of the design. This means that the design is approved and no major changes are expected.
Test case design ends when all test requirements selected for the testing are covered by the test cases. The test cases design process is completed when all the test cases are reviewed and approved by the Quality Control team leader.
- Automation
The automation process starts when the feature functions selected for automation are stable. Stability means that the tested application contains navigation flows through which the user can activate a function, and that executing these flows allows continuous activation of the functions for at least one hour. These navigation flows should be easy to identify, i.e. identifying a flow does not take more than 5 minutes.
Automation stops when all planned test scripts are finally implemented. The scope of the automated test scripts should be available in the Testing Tools Implementation document. Final implementation means that the scripts are reviewed by a senior test engineer and approved by the Quality Control team leader.
- Test cycle
A test cycle starts when all test cases selected for the test cycle are implemented and can be found in the TestDirector repository. The tested application should pass an R&D sanity check proving that the basic functionality of the application is stable, i.e. the application can be activated for at least ½ hour without major failures.
The test cycle stops either when all test cases selected for the cycle have been executed or when the cycle is suspended according to the suspension criteria (see the Suspension Criteria paragraph).
- System testing
- User scenarios
User scenario development starts when the first subsystem test cycle is completed and no major changes are expected as a result of the cycle. Another condition is the availability of the tutorial and training materials.
User scenario development ends when the criteria described in the Testing Strategy and Approach section, paragraph “System testing”, are achieved. The stage is exited when the user scenarios are reviewed by a senior test engineer and an education group manager, and are approved by the Quality Control team leader.
- Test case design
Test case design starts when the user scenarios selected for implementation are reviewed and approved.
Test case design ends when all user scenarios selected for the implementation are covered by the test cases. The test case design process is exited when all the test cases are reviewed and approved by the Quality Control team leader.
- Automation
The automation process starts when no major changes in the user model and interface of the tested application are expected. Infrastructure of the subsystem automated test suite should be completed. User scenarios selected for automation should be accessible for the user.
Automation ends when all planned test scripts are finally implemented. The scope of the automated test scripts should be available in the Testing Tools Implementation document. Final implementation means that the scripts are reviewed by a senior test engineer and approved by the Quality Control team leader.
- Test cycle
See the Test cycle paragraph under Subsystem testing above.
- Web specific testing
- Test requirements
Test requirements development starts when subsystem testing confirms the conceptual validity of the application design (at least at the feature level). The final list of supported Web browsers, servers and operating systems is provided by the product-marketing manager.
Test requirements development ends when analysis of the Web-specific application aspects does not yield any additional testing points. Another criterion is complete coverage of the Web testing checklist provided as a template with TestDirector 6.0.
The process is exited when requirements are reviewed and approved by Quality Control team leader.
- Test case design
Test case design starts when the Web-specific test requirements are approved. Another condition for starting the design process is the reliability of the application, as estimated from subsystem test execution: the mean time between failures should reach at least 30 minutes (the average execution duration of a test scenario).
Test case design stops when all test requirements selected for testing are covered by the test cases. The test case design process ends when all the test cases are reviewed and approved by the Quality Control team leader.
- Automation
The automation process starts when no major changes in the user model and interface of the tested application are expected. Infrastructure of the subsystem automated test suite should be completed. Test cases selected for automation should be accessible to the user.
Automation stops when all planned test scripts are finally implemented. The scope of the automated test scripts should be available in the “Testing Tools Implementation” document. Final implementation means that the scripts are reviewed by a senior test engineer and approved by the Quality Control team leader.
- Test cycle
See the Test cycle paragraph under Subsystem testing above.
Test Suspension Criteria/Resuming Requirements
A test case is suspended if an executed step fails in such a way that it is impossible to proceed to any of the following steps in the test. The suspended test case can be resumed when the reason for the suspension is eliminated. The resumed test case should be executed again from the beginning.
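A minimal sketch of this suspension rule, with hypothetical steps:

```python
def run_test_case(steps):
    # Each step reports "passed", "failed", or "blocking" (no following step
    # can be executed). A blocking failure suspends the whole test case.
    for step in steps:
        if step() == "blocking":
            return "Suspended"
    return "Completed"

# Hypothetical steps: the second one blocks all further execution.
steps = [lambda: "passed", lambda: "blocking", lambda: "passed"]
print(run_test_case(steps))  # -> Suspended; after the cause is fixed, re-run from the start
```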