Test Life Cycle (TLC)

System study

Scope/Approach/Estimation

Test Plan Design

Test Case Design

Test Case Review

Test Case Execution

Defect Handling

GAP Analysis

  1. System study:

In the system study we learn the particular software, project, or system under test.

  • Domain:

The application may belong to any of several domains: banking, finance, insurance, marketing, real-time, ERP, Siebel, manufacturing, etc.

  • Software:

Front end / back end / process.

Front End: GUI, e.g. VB, D2K.

Back End: Oracle, Sybase, SQL Server, MS Access, DB2.

Process: programming languages, e.g. C, C++, Java, etc.

  • Hardware: servers, internet, intranet applications.
  • Function Point/LOC:

Function Point: the number of lines of code required to write a micro function.

Micro function: a function that cannot be broken down any further.

1 F.P. = 10 lines of code (the convention used throughout these notes).

  • No. of pages of the software/system
  • No. of resources on the software/system
  • No. of days taken to develop the software/system
  • No. of modules in the software/system (i.e. Associate/Core/Maintenance)
  • Priority of each module: High / Medium / Low.
  2. Scope/Approach/Estimation:

Scope:

What is to be tested, and what is not to be tested.

Eg: a scope matrix listing each module against the test types that apply (U = Unit, I = Integration, S = System, A = Acceptance):

Module | U | I | S | A
-------|---|---|---|---

Approach: Test Life Cycle (All the phases of TLC)

Estimation:

Estimation is based on LOC (lines of code), F.P. (function points), and resources.

1 F.P. = 10 lines of code.

Example: Input = 1000 LOC.

For these 1000 LOC we can estimate the time to complete the whole TLC:

System study - 5 days - no division

Scope/Approach/Estimation - 2 days - no division

Test plan - 2 days - no division

Test case design - 10 days - can be divided

Here we have 1000 LOC = 300 test cases = 10 days.

Test case review (= 1/2 of test case design) - 5 days - can be divided

Test case execution - 10 days - can be divided

Defect handling - 6 days - can be divided

30 TC = 1 defect = 5 hours (for tracking)

For 300 TC = 10 defects = 50 hours ≈ 6 days

Gap analysis - 5 days - no division

------

Total no. of days required = 45 man days

(for one round of manual testing)

------

1 round of manual testing = 45 man days

Project management = 20% of 45 days = 9 days

Content management (data storage, management of project & tools) = 10% of 45 days = 4.5 days

Buffer = 10 days

------

Total for one resource = 68.5, taken as 68 days

------

With 4 resources = 68/4 = 17 days
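The worked estimate above can be reproduced as a small calculation. A minimal sketch, assuming the conventions in these notes (1 F.P. = 10 LOC, ~30 test cases per defect, 5 tracking hours per defect):

```python
# Rough TLC estimation sketch, following the conventions in these notes:
# 1000 LOC ~ 300 test cases, 30 TC ~ 1 defect, 1 defect ~ 5 tracking hours.

LOC = 1000
TEST_CASES = 300

phase_days = {
    "System study": 5,
    "Scope/Approach/Estimation": 2,
    "Test plan": 2,
    "Test case design": 10,
    "Test case review": 10 / 2,   # half of test case design
    "Test case execution": 10,
    "Defect handling": 6,         # 300 TC -> 10 defects -> 50 h ~ 6 days
    "Gap analysis": 5,
}

testing_days = sum(phase_days.values())   # one round of manual testing
project_mgmt = 0.20 * testing_days        # 20% overhead
content_mgmt = 0.10 * testing_days        # 10% overhead
buffer = 10

total_one_resource = testing_days + project_mgmt + content_mgmt + buffer
print(testing_days)                    # 45.0 man days
print(total_one_resource)              # 68.5 (the notes round this to 68)
print(round(total_one_resource / 4))   # 17 days with 4 resources
```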

  3. Test Plan Design:

Test Plan:

The test plan covers all of the following areas:

  1. Who is the client and their details, and the details of the company where testing takes place
  2. Reference documents (BRS, SRS, DFD, etc.)
  3. Scope of the project
  4. Project architecture & data flow diagrams
  5. Test strategy
  6. Deliverables
  7. Schedules
  8. Milestones
  9. Risk/Mitigation/Contingency
  10. Testing requirements
  11. Assumptions
  12. Test environment
  13. Defects
  14. Escalation process

Assume we are preparing the test plan for Avazpour.

  1. Client: Avazpour.

Company: iNEK Technologies.

Here we write the details about the company.

  2. Reference Documents: BRS, SRS, & SDS
  3. Scope:

Overview: In Avazpour we have two phases.

Phase 1: Customer Lookup.

Phase 2: User/Group Lookup.

In this case, assume we have two releases.

Release 1: In Release 1 we test Phase 1 only.

We perform unit testing, integration testing & system testing. We report any bugs found, the developers fix them, and we retest after the fixes.

Release 2: In Release 2 we test Phase 2.

We test Phase 2 itself, and also do regression testing to check whether attaching Phase 2 has any effect on Phase 1.

  4. Project Architecture:

In this we represent the application in a pictorial format by using Dataflow diagrams, Activity diagrams & E-R Diagrams.

  5. Test Strategy:

The test strategy explains the application's test factors & test types.

Pre Condition: specifies the requirements that must be fulfilled before a particular test can be performed.

Start Criteria: the criteria that must be met to start a particular test.

Pause: if there is any problem during test case execution, we may stop for some time.

Suspension: we suspend test case execution if any requirements are not fulfilled.

Deliverables: test case execution reports.

  6. Resources/Responsibilities/Roles:

We specify each resource's name, role & responsibilities, giving a clear picture of the team performing the testing.

Example:

Resource | Role | Responsibility
---------|------|---------------
sameera | Test Lead | Preparation of ...

  7. Deliverables / 8. Schedules / 9. Milestones:

Deliverable | Schedule | Milestone
------------|----------|----------
a). System study document | Jun 01 - Jun 08 | Jun 08

b). Understanding documents

c). Issues Document

d). Test plan Document

e). Test Case Documents

f). Test Case Review Documents

g). Defect Reports

h). Traceability Matrix

i). Functional Coverage Document

j). Test Reports.

10. Risk/Mitigation/Contingency:

Risk: any unexpected event which will affect the project.

Mitigation: preventive steps taken to reduce the chance of the risk occurring.

Contingency: the plan for handling the risk after it occurs.

Typical risks: broken links, server problems, weak bandwidth, wrong builds, database down, wrong data, application server down, integration failures, resource problems, unexpected events, and other general risks.

Example: a project where the release engineer did not upload the required build.

11. Training: training is given to resources who do not yet have the required skills.

12. Assumptions: to test the application we make some assumptions.

Example: before doing system testing, we first do unit & integration testing.

If the client does not send the required documents, we report the issue back to them.

  13. Test Environment: specifies the software, hardware and other system details needed to test the application.
  14. Defects: defect report documents.
  15. Escalation process: specifies whom a resource should report to if any doubt or problem arises while conducting the tests.

It is nothing but the communication flow, from bottom to top level, in the testing process.

Note: Test bed: a test bed configuration is identified and planned from the hardware and operating-system version and compatibility specifications.

Test data: after identifying the requirements for a test, the test data is created. The testing team can create the test data, or it can be provided by the client.

  4. Test Case Design (the heart of testing):
  • A test case is a description of what is to be tested, what data is to be used, and what actions are to be performed to check the actual result against the expected result.
  • A test case is simply a test with formal steps and instructions.
  • Test cases are valuable because they are repeatable, reproducible under the same or different environments, and easy to improve upon with feedback.

Format of Test Case Design:

Pre Condition | Description | Data | Expected Results | Actual Results | Status | Remarks | Bug Number
--------------|-------------|------|------------------|----------------|--------|---------|-----------
Constraint/condition to be met | Format: Check whether/verify the system displays the expected result page. Action: user clicks on a particular <button> or link | Data to test | System should display the page with the details | As expected (or whatever the system displays) | Pass (or) Fail | Comments | Eg: Bug-01

Techniques to write a test case:

Boundary Value Analysis:

Using boundary value analysis we take the upper & lower boundary values and check only those values.

Equivalence Class Partitioning:

We partition the attributes/parameters of the functionality into classes and check a representative of each.

Error Guessing:

Using experience to guess where errors are likely.

Testing against specifications.
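The first two techniques can be illustrated with a minimal Python sketch; the 1-100 input range is an assumed example, not from these notes:

```python
# Boundary value analysis: for a field accepting values in lower..upper,
# test the edges and the values just beyond them, not every value.

def boundary_values(lower, upper):
    """Return the classic BVA candidates for a numeric range."""
    return [lower - 1, lower, lower + 1, upper - 1, upper, upper + 1]

def equivalence_classes(lower, upper):
    """One representative per partition: below, inside, above the range."""
    return {
        "invalid_low": lower - 1,
        "valid": (lower + upper) // 2,
        "invalid_high": upper + 1,
    }

# Assumed example: an input field that accepts 1..100.
print(boundary_values(1, 100))      # [0, 1, 2, 99, 100, 101]
print(equivalence_classes(1, 100))  # {'invalid_low': 0, 'valid': 50, 'invalid_high': 101}
```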

Use case:

Format:

  1. Description: the description of the use case
  2. Actors: the actors actually involved in using this use case
  3. Pre condition
  4. User action & system response

-Typical flow

-Normal Flow

-Exceptional flow

5. Post condition

6. Specific Requirements

7. Business Validations

8. Parking Lot

Test case items:

TC no.

Pre-condition

Description

Expected output

Actual output

Status

Remarks
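The items above can be captured as a simple record. A minimal Python sketch; the field names and defaults are illustrative, not taken from any specific test-management tool:

```python
from dataclasses import dataclass

# Illustrative record for the test case items listed above.
@dataclass
class TestCase:
    tc_no: str
    precondition: str
    description: str
    expected: str
    actual: str = ""
    status: str = "Not Run"   # Pass / Fail / Not Run
    remarks: str = ""

# Hypothetical example based on the Avazpour customer-lookup phase.
tc = TestCase(
    tc_no="TC-01",
    precondition="User is logged in",
    description="Verify the system displays the customer lookup page",
    expected="Customer lookup page is displayed",
)
print(tc.status)   # Not Run
```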

  5. Test Case Review:

Review means re-verification of the test cases. The following are included in the review format.

First Time Right (FTR)

Types of reviews:

Peer-to-peer review (review by a colleague at the same level)

Team lead review

Team manager review

Review process:

Take a demo of the functionality

Go through the use case / functional specification

Walk through the test cases & find the gap between test cases vs. use cases

Submit the review report

  6. Test Case Execution:

Test case execution mainly includes 3 things:

  1. Input:

Test cases

Test data

Review comments

SRS

BRS

System availability

Data availability

Database

Review doc

  2. Process: test it.
  3. Output:

Raise the defect

Take a screen shot & save it.
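The input -> process -> output flow above can be sketched as follows; the record shape and field names are assumptions for illustration:

```python
# Execution sketch: compare the actual output against the expected output,
# set the status, and raise a defect entry on mismatch.

def execute(test_case, actual):
    test_case["actual"] = actual
    if actual == test_case["expected"]:
        test_case["status"] = "Pass"
        return None          # nothing to raise
    test_case["status"] = "Fail"
    # On failure, raise a defect that references the originating test case.
    return {
        "origin_tc": test_case["id"],
        "description": f"Expected {test_case['expected']!r}, got {actual!r}",
    }

# Hypothetical run against the customer-lookup page.
tc = {"id": "TC-01", "expected": "Customer lookup page displayed"}
defect = execute(tc, "Error 500")
print(tc["status"])          # Fail
print(defect["origin_tc"])   # TC-01
```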

  7. Defect Handling:

We identify the following items in defect handling:

Defect No./Id.

Description

Origin TC id

Severity

  • Critical
  • Major
  • Medium
  • Minor
  • Cosmetic

Priority

  • High
  • Medium
  • Low

Status

The flow of defect handling is as follows:

Raise the defect

Review it internally

Submit it to the developer

We declare the severity of the defect first, and after that the priority.

According to the priority, we retest the defect.
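Since defects are taken up by priority first and severity second, the ordering can be sketched like this; the IDs and level assignments are made-up examples:

```python
# Order defects for fixing/retesting: priority first, severity as tie-break,
# using the levels listed above.

SEVERITY = ["Critical", "Major", "Medium", "Minor", "Cosmetic"]
PRIORITY = ["High", "Medium", "Low"]

defects = [
    {"id": "D-01", "severity": "Minor", "priority": "High"},
    {"id": "D-02", "severity": "Critical", "priority": "Medium"},
    {"id": "D-03", "severity": "Major", "priority": "High"},
]

# Sort by priority, breaking ties by severity.
defects.sort(key=lambda d: (PRIORITY.index(d["priority"]),
                            SEVERITY.index(d["severity"])))
print([d["id"] for d in defects])   # ['D-03', 'D-01', 'D-02']
```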

  8. GAP Analysis:

Gap analysis is finding the difference between the client requirements & the application developed.

Deliverables:

Test plan

Test scenarios

Defect reports

BRS vs. SRS

SRS vs. Test Cases

TC vs. Defects

Whether each defect is open / closed.
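The traceability comparisons above (SRS vs. test cases, TC vs. defects) can be sketched as simple mappings; all IDs below are hypothetical:

```python
# Gap analysis sketch: trace requirements -> test cases -> defects and
# report requirements that have no covering test case.

requirements = ["REQ-1", "REQ-2", "REQ-3"]
tc_coverage = {"TC-01": "REQ-1", "TC-02": "REQ-1", "TC-03": "REQ-3"}
defect_origin = {"BUG-01": "TC-03"}

# SRS vs. Test Cases: which requirements lack a test case?
covered = set(tc_coverage.values())
gaps = [r for r in requirements if r not in covered]
print(gaps)   # ['REQ-2']

# TC vs. Defects: trace each defect back to the requirement it affects.
traced = {bug: tc_coverage[tc] for bug, tc in defect_origin.items()}
print(traced)  # {'BUG-01': 'REQ-3'}
```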