GENERATING ERROR-FREE SOFTWARE

Abstract - The purpose of this document is to describe the structured testing methodology for software testing, also known as basis path testing. Based on the cyclomatic complexity measure of McCabe, structured testing uses the control flow structure of software to establish path coverage criteria. The resultant test sets provide more thorough testing than statement and branch coverage.

INTRODUCTION

The term software engineering was first used at a 1968 NATO workshop in West Germany, which focused on the growing software crisis: problems of quality, reliability, and high cost. In the 1950s, when machine languages were used, testing was nothing but debugging. In the 1960s, when compilers were developed, testing started to be considered a separate activity from debugging. In the 1970s, when software engineering concepts were introduced, software testing began to evolve as a technical discipline. Over the last two decades there has been an increased focus on better, faster, and more cost-effective software. There has also been growing interest in software safety, protection, and security, and hence an increased acceptance of testing as a technical discipline and as a career choice.

Software testing is the process of executing software and comparing the observed behavior to the desired behavior. The major goal of software testing is to discover errors in the software [MYERS2], with a secondary goal of building confidence in the proper operation of the software when testing does not discover errors. The conflict between these two goals is apparent when considering a testing process that did not detect any errors: in the absence of other information, this could mean either that the software is of high quality or that the testing process is of low quality. There are many approaches to software testing that attempt to control the quality of the testing process so as to yield useful information about the quality of the software being tested. Although most testing research is concentrated on finding effective testing techniques, it is also important to make software that can be effectively tested. It is suggested in [VOAS] that software is testable if faults are likely to cause failure, since those faults are then most likely to be detected by failure during testing.
Several programming techniques are suggested to raise testability, such as minimizing variable reuse and maximizing output parameters. In [BERTOLINO] it is noted that although having faults cause failure is good during testing, it is bad after delivery. For a more intuitive testability property, it is best to maximize the probability of faults being detected during testing while minimizing the probability of faults causing failure after delivery.

Testing:

A verification method that applies a controlled set of conditions and stimuli for the purpose of finding errors.

This is the most desirable method of verifying the functional and performance requirements.

Testing happens at the end of each phase of development.

Testing should concentrate on whether the development matches the requirements.

Test results are documented proof that requirements were met, and the tests can be repeated.

The resulting data can be reviewed by all concerned for confirmation of capabilities.

Every testing project has to follow the waterfall model of the testing process.

The waterfall model is as given below

1. Test Strategy & Planning

2. Test Design

3. Test Environment Setup

4. Test Execution

5. Defect Analysis & Tracking

6. Final Reporting

The V-Model of the Software Testing Life Cycle along with the Software Development Life cycle given below indicates the various phases or levels of testing.

Broad Categories of Testing

Based on the V-Model mentioned above, we see that there are two categories of testing activities that can be done on software, namely,

Static Testing

Dynamic Testing

The kind of verification we do on the software work products before compilation and creation of an executable consists of requirement reviews, design reviews, code reviews, walkthroughs, and audits. This type of testing is called Static Testing.

When we test the software by executing it and comparing the actual and expected results, it is called Dynamic Testing.

Widely employed Types of Testing

From the V-model, we see that there are various levels or phases of testing, namely Unit testing, Integration testing, System testing, User Acceptance testing, etc.

Let us see brief definitions of the widely employed types of testing.

Unit Testing: The testing done on a unit, the smallest piece of software, to verify whether it satisfies its functional specification or its intended design structure.

Integration Testing: Testing which takes place as sub-elements are combined (i.e., integrated) to form higher-level elements.

Regression Testing: Selective re-testing of a system to verify that modifications (bug fixes) have not caused unintended effects and that the system still complies with its specified requirements.

System Testing: Testing the software for the required specifications on the intended hardware.

Acceptance Testing: Formal testing conducted to determine whether or not a system satisfies its acceptance criteria, which enables a customer to determine whether to accept the system or not.

Performance Testing: Evaluating the time taken, or response time, of the system to perform its required functions in comparison with specified performance requirements.

Alpha Testing: Testing of a software product or system conducted at the developer's site by the customer.

Beta Testing: Testing conducted at one or more customer sites by the end user of a delivered software product or system.
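As an illustrative sketch of unit testing, the level at which the definitions above begin, a single function can be checked against its functional specification. The function and its specification here are invented for illustration:

```python
# Hypothetical example: a unit test for the smallest piece of software,
# a single function, verified against its (invented) specification.

def discount(price, percent):
    """Return price reduced by the given percentage (spec: 0 <= percent <= 100)."""
    if price < 0 or not (0 <= percent <= 100):
        raise ValueError("invalid input")
    return price * (100 - percent) / 100

def test_discount():
    # Verify the unit against its functional specification.
    assert discount(100, 25) == 75.0
    assert discount(80, 0) == 80.0
    # Invalid input must be rejected, per the specification.
    try:
        discount(-1, 10)
        assert False, "expected ValueError"
    except ValueError:
        pass

test_discount()
print("unit tests passed")
```

In practice such tests would live in a test framework and be rerun at every change, which is what makes regression testing at higher levels feasible.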

The Testing Techniques

To perform these types of testing, there are two widely used testing techniques. The testing types described above are performed based on the following techniques.

Black-Box testing technique:

This technique is used for testing based solely on analysis of the requirements (specification, user documentation). Also known as Functional testing.

White-Box testing technique:

This technique is used for testing based on analysis of internal logic (design, code, etc.), although expected results still come from the requirements. Also known as Structural testing.
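A minimal sketch of the contrast, using an invented routine: black-box tests are derived from the specification alone, while a white-box test follows from reading the code and noticing internal logic the specification does not mention.

```python
# Hypothetical routine.  Spec: return 'minor' for age < 18, otherwise 'adult'.
def classify(age):
    if age < 0:
        raise ValueError("age cannot be negative")  # internal guard, not in the spec
    return "minor" if age < 18 else "adult"

# Black-box (functional) tests: derived from the specification only.
assert classify(10) == "minor"
assert classify(30) == "adult"
assert classify(18) == "adult"   # boundary value taken from the spec

# White-box (structural) test: inspecting the code reveals the
# negative-age branch, so a test is added to exercise it; the expected
# result (an error) still comes from the requirements.
try:
    classify(-1)
    assert False, "expected ValueError"
except ValueError:
    pass
print("black-box and white-box tests passed")
```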

When Should Testing Occur?

Testing is sometimes incorrectly thought of as an after-the-fact activity, performed after programming is done for a product. Instead, testing should be performed at every development stage of the product. Test data sets must be derived, and their correctness and consistency should be monitored throughout the development process. If we divide the lifecycle of software development into "Requirements Analysis", "Design", "Programming/Construction" and "Operation and Maintenance", then testing should accompany each of these phases. If testing is isolated as a single phase late in the cycle, errors in the problem statement or design may incur exorbitant costs: not only must the original error be corrected, but the entire structure built upon it must also be changed. Therefore, testing should not be isolated as an inspection activity; rather, it should be involved throughout the SDLC in order to produce a quality product.

Testing Activities in Each Phase

The following testing activities should be performed during each of these phases.

  • Requirements Analysis - (1) Determine correctness (2) Generate functional test data.
  • Design - (1) Determine correctness and consistency (2) Generate structural and functional test data.
  • Programming/Construction - (1) Determine correctness and consistency (2) Generate structural and functional test data (3) Apply test data (4) Refine test data.
  • Operation and Maintenance - (1) Retest.

Now we consider these in detail.

Requirements Analysis

The following test activities should be performed during this stage.

  • Invest in analysis at the beginning of the project - Having a clear, concise and formal statement of the requirements facilitates programming, communication, error analysis and test data generation.

The requirements statement should record the following information and decisions:

  1. Program function - what the program must do.
  2. The form, format, data types and units for input.
  3. The form, format, data types and units for output.
  4. How exceptions, errors and deviations are to be handled.
  5. For scientific computations, the numerical method or at least the required accuracy of the solution.
  6. The hardware/software environment required or assumed (e.g. the machine, the operating system, and the implementation language).

Deciding the above issues is one of the activities related to testing that should be performed during this stage.

  • Start developing the test set at the requirements analysis phase - Data should be generated that can be used to determine whether the requirements have been met. To do this, the input domain should be partitioned into classes of values that the program will treat in a similar manner, and for each class a representative element should be included in the test data. In addition, the following should also be included in the data set: (1) boundary values, (2) any non-extreme input values that would require special handling.

The output domain should be treated similarly.

Invalid input requires the same analysis as valid input.

  • The correctness, consistency and completeness of the requirements should also be analyzed - Consider whether the correct problem is being solved, check for conflicts and inconsistencies among the requirements, and consider the possibility of missing cases.
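The partitioning of the input domain described above can be sketched concretely. The requirement here is invented for illustration: suppose the program accepts integer scores from 0 to 100.

```python
# Hypothetical sketch: partitioning an input domain into classes and
# picking representatives plus boundary values from each class.
# Invented requirement: integer scores from 0 to 100 are valid.

valid_representative = [50]   # one value from the valid class
boundaries = [0, 100]         # boundary values of the valid class
invalid = [-1, 101]           # representatives of the invalid classes

def accept(score):
    """The behavior the requirement specifies."""
    return 0 <= score <= 100

# The test set exercises each class: valid, boundary, and invalid.
for v in valid_representative + boundaries:
    assert accept(v)
for v in invalid:
    assert not accept(v)
print("test data covers valid, boundary and invalid classes")
```

Note that the invalid values receive the same analysis as the valid ones, as the text requires.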

Design

The design document aids in programming, communication, and error analysis and test data generation. The requirements statement and the design document should together give the problem and the organization of the solution i.e. what the program will do and how it will be done.

The design document should contain:

  • Principal data structures.
  • Functions, algorithms, heuristics or special techniques used for processing.
  • The program organization, how it will be modularized and categorized into external and internal interfaces.
  • Any additional information.

Here the testing activities should consist of:

  • Analysis of the design to check its completeness and consistency - the total process should be analyzed to determine that no steps or special cases have been overlooked. Internal interfaces, I/O handling and data structures should especially be checked for inconsistencies.
  • Analysis of the design to check whether it satisfies the requirements - check whether both the requirements statement and the design document use the same form, format and units for input and output, and that all functions listed in the requirements document have been included in the design document. Selected test data generated during the requirements analysis phase should be manually simulated to determine whether the design will yield the expected values.
  • Generation of test data based on the design - The tests generated should cover the structure as well as the internal functions of the design like the data structures, algorithm, functions, heuristics and general program structure etc. Standard extreme and special values should be included and expected output should be recorded in the test data.
  • Reexamination and refinement of the test data set generated at the requirements analysis phase.

Programming/Construction

Here the main testing points are:

  • Check the code for consistency with design - the areas to check include modular structure, module interfaces, data structures, functions, algorithms and I/O handling.
  • Perform the Testing process in an organized and systematic manner with test runs dated, annotated and saved. A plan or schedule can be used as a checklist to help the programmer organize testing efforts. If errors are found and changes made to the program, all tests involving the erroneous segment (including those which resulted in success previously) must be rerun and recorded.
  • Ask a colleague for assistance - Some independent party, other than the programmer of the specific part of the code, should analyze the development product at each phase. The programmer should explain the product to the party, who will then question the logic and search for errors with a checklist to guide the search. This is needed to locate errors the programmer has overlooked.
  • Use available tools - the programmer should be familiar with various compilers and interpreters available on the system for the implementation language being used because they differ in their error analysis and code generation capabilities.
  • Apply Stress to the Program - Testing should exercise and stress the program structure, the data structures, the internal functions and the externally visible functions or functionality. Both valid and invalid data should be included in the test set.
  • Test one at a time - Pieces of code, individual modules and small collections of modules should be exercised separately before they are integrated into the total program, one by one. Errors are easier to isolate when the number of potential interactions is kept small. Instrumentation, the insertion of some code into the program solely to measure various program characteristics, can be useful here. A tester can perform array bound checks, check loop control variables, determine whether key data values are within permissible ranges, trace program execution, and count the number of times a group of statements is executed.
  • Measure testing coverage/When should testing stop? - If errors are still found every time the program is executed, testing should continue. Because errors tend to cluster, modules appearing particularly error-prone require special scrutiny.
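The instrumentation mentioned above can be sketched as follows; the function and counter names are invented for illustration.

```python
# Hypothetical sketch of instrumentation: code inserted solely to measure
# program characteristics, here counting how often a group of statements
# executes and checking the loop control variable's range.

counts = {"loop_body": 0}

def sum_positive(values):
    total = 0
    for i, v in enumerate(values):
        assert 0 <= i < len(values)   # bound check on the loop variable
        counts["loop_body"] += 1      # instrumentation counter
        if v > 0:
            total += v
    return total

result = sum_positive([3, -1, 4])
print(result, counts["loop_body"])  # the counter shows the loop ran 3 times
```

In real projects this role is usually played by coverage and profiling tools rather than hand-inserted counters, but the principle is the same.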

The metrics used to measure testing thoroughness include statement testing (whether each statement in the program has been executed at least once), branch testing (whether each exit from each branch has been executed at least once) and path testing (whether all logical paths, which may involve repeated execution of various segments, have been executed at least once). Statement testing is the coverage metric most frequently used as it is relatively simple to implement.
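The gap between statement and branch testing can be shown with a small invented function: one test can execute every statement while still leaving a branch exit untested.

```python
# Sketch (function invented for illustration): statement coverage can be
# satisfied while branch coverage is not.

def add_if_positive(a, b):
    result = a
    if b > 0:          # one decision with two exits
        result = a + b
    return result

# This single test executes every statement (the `if` is taken true),
# so statement coverage is 100%.
assert add_if_positive(1, 2) == 3

# But the false exit of the decision was never taken.  Branch testing
# requires a second case that skips the `if` body.
assert add_if_positive(1, -2) == 1
print("branch coverage achieved")
```

Path testing is stricter still: with several decisions, it requires every feasible combination of branch outcomes to be exercised.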

The amount of testing depends on the cost of an error. Critical programs or functions require more thorough testing than the less significant functions.

Operations and maintenance

Corrections, modifications and extensions are bound to occur even for small programs and testing is required every time there is a change. Testing during maintenance is termed regression testing. The test set, the test plan, and the test results for the original program should exist. Modifications must be made to accommodate the program changes, and then all portions of the program affected by the modifications must be re-tested. After regression testing is complete, the program and test documentation must be updated to reflect the changes.

Testing Techniques:

White-Box Testing: The goal at this point is to establish for this strategy the analog to exhaustive input testing in the black-box approach. Causing every statement in the program to execute at least once might appear to be the answer, but it is not difficult to show that this is highly inadequate. The analog is usually considered to be path testing.

Practices:

This section outlines some of the general practices comprising the white-box testing process. In general, white-box testing practices have the following considerations:

  1. The allocation of resources to perform class and method analysis and to document and review the same.
  2. Developing a test harness made up of stubs, drivers and test object libraries.
  3. Development and use of standard procedures, naming conventions and libraries.
  4. Establishment and maintenance of regression test suites and procedures.
  5. Allocation of resources to design, document and manage a test history library.
  6. The means to develop or acquire tool support for automation of capture/replay/compare, test suite execution, results verification and documentation capabilities.

1. Code Coverage Analysis

1.1 Basis Path Testing

A testing mechanism proposed by McCabe whose aim is to derive a logical complexity measure of a procedural design and use this as a guide for defining a basis set of execution paths. Test cases that exercise the basis set will execute every statement at least once.

1.1.1 Flow Graph Notation:

Control flow graphs describe the logic structure of software modules. A module corresponds to a single function or subroutine in typical languages, has a single entry and exit point, and can be used as a design component via a call/return mechanism. The flow graph notation for representing control flow is similar to flow charts and UML activity diagrams; it depicts logical control flow using the notation shown below:

A flow graph is a pictorial representation of a program containing nodes and links; it is a simplified version of the flow chart.

1.1.2 Cyclomatic Complexity

Cyclomatic complexity gives a quantitative measure of logical complexity. This value gives the number of independent paths in the basis set, and an upper bound for the number of tests required to ensure that each statement is executed at least once. An independent path is any path through the program that introduces at least one new set of processing statements or a new condition (i.e., a new edge).

Cyclomatic complexity has a foundation in graph theory and is computed as follows:

  1. The number of regions of the flow graph corresponds to the cyclomatic complexity.
  2. Cyclomatic complexity V(G) for a flow graph G is defined as

V(G) = E - N + 2

where E is the number of flow graph edges and N is the number of flow graph nodes.

  3. Cyclomatic complexity V(G) for a flow graph G is also defined as

V(G) = P + 1

where P is the number of predicate nodes.
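Both formulas can be checked on a small flow graph. The graph below is invented for illustration, represented as an adjacency list; a predicate node is one with two outgoing edges.

```python
# Sketch: computing cyclomatic complexity two ways from a flow graph
# (graph invented for illustration: one decision, two joined paths).

flow_graph = {
    1: [2, 3],   # node 1 is a predicate node (two out-edges)
    2: [4],
    3: [4],
    4: [],       # exit node
}

N = len(flow_graph)                                   # number of nodes
E = sum(len(succ) for succ in flow_graph.values())    # number of edges
P = sum(1 for succ in flow_graph.values() if len(succ) == 2)  # predicate nodes

v_edges = E - N + 2   # V(G) = E - N + 2
v_preds = P + 1       # V(G) = P + 1

# Both definitions agree on the same graph.
assert v_edges == v_preds
print("V(G) =", v_edges)
```

Here E = 4 and N = 4, so V(G) = 4 - 4 + 2 = 2, matching P + 1 = 2: the basis set needs two independent paths (one through node 2, one through node 3).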

1.1.3 Deriving Test Cases: The basis path testing method can be applied to a procedural design or to source code.

APPLICATION OF BASIS PATH TESTING ON THE GIVEN ALGORITHM using PDL:

PROCEDURE average: