
Metrics & Measures

in Context

Discussion Document

Title: Metrics & Measures in Context

Description: This document provides a set of metrics and measures and the context in which they apply. Refer also to the related Excel Workbook.

Author(s): Mark Crowther

Version: 1.0

Table of Contents

1.0 Introduction

Stakeholders

2.0 Metrics and measures in context

1) Number of Test Cases Planned v Ready for Execution

2) Number of Test Scenarios Identified v Functional Areas

3) Total Time Spent on Preparation v Estimated Time

4) Number of Test Cases Executed v Test Cases Planned

5) Number of Test Cases Passed, Failed, Blocked and Not Run

6) Total Number of Test Cases Passed by Functional Area

7) Total Time Spent on Execution v Estimated Time

8) Total Number of Critical and High Bugs Raised and Closed

9) Days Elapsed from Bug Recording to Bug Fix Reporting for Critical & High

10) Total Number of Bugs Closed v Total Number of Bugs Re-Opened

11) Bug Distribution Totals by Functional Area per Period

12) Days Elapsed from Bug Fix Reporting to Retest

1.0 Introduction

Metrics and measures are a vital element of the management information essential to the efficient and effective running of the testing function at JLD. Understanding the need for them is one step; the next is knowing how to use them in the context of the problem they are meant to help resolve and the actions that can be taken given what they tell us.

It will be apparent that if the costs of staff and other resources are known, it is a simple matter to use the knowledge gained from the metrics set to evaluate the cost implications of what is observed. Where KPIs have been identified, any improvement in the process they are concerned with translates naturally into measurable gains in efficiency, lower costs and greater ROI.

Combined with the practices of Root Cause Analysis and Plan-Do-Study-Act cycles, the organisation has a way to really attack the Cost of Quality issues it faces, saving year on year what would otherwise have been spent without metrics providing the guidance and insight into where improvements can be made.

Stakeholders

Throughout the business many project stakeholders can benefit from the metrics and measures that are being captured. The charts and graphs that are produced should inform the typical questions they may have, for example:

  • Senior Management

“Are the high level phases on track, and so is the release on track?”

  • Project Manager

“Is progress to plan? Are there any significant events occurring that will affect the plan?”

  • Product Manager

“Is the testing being done and the bugs being found going to give me a quality product?”

  • Development Manager

“Is testing thorough, are areas being tested equally at a good rate, how is the code quality?”

  • Test Manager

“Is test work on track, are any areas being left untested or over-tested, what is bug find and retest progress?”

In the following section each metric and measure is discussed in context, numbered references match those given in the document “JLD - TestMetricsDashboardWorkbook.xls”.

2.0 Metrics and measures in context

1) Number of Test Cases Planned v Ready for Execution

  • Graph/Chart Type: Burndown Chart

Example Burndown Chart from the Dashboard

  • Data Acquisition: Daily Update
  • Refresh Frequency: Snapshot
  • Author: Test Manager, Test Analyst
  • Key Stakeholders: Senior Management, Project Manager, Test Manager
  • Example KPIs:

Number of days over or under projected ‘Planning & Analysis Phase’ end date.

Average rate of delivery over time for all Test Case types per Test Analyst.

  • Description and Purpose

Once the Test Analyst has identified the set of Test Cases that will be needed, along with the time it will take to write them, the delivery of the Test Cases fully written up and ready to execute must be tracked.

This metric, when charted, provides at-a-glance visibility as to the rate of delivery for an agreed set of Test Cases and whether they are being delivered at a rate that will see them fully completed by the planned end date of the Planning & Analysis Phase. This ensures the Test Execution Phase that follows can start on time.

This is one of three key charts for Senior and Project Management as it informs them if this major test phase is on track and so if the project is being kept on track. Key events are flagged by the tags on the chart and the information for these is available from the Test Manager or Analyst.
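The burndown data behind this chart can be produced directly from the daily counts. Below is a minimal sketch in Python, with entirely illustrative figures; the function name and data shape are assumptions, not part of the dashboard workbook.

```python
from datetime import date, timedelta

def burndown_series(total_planned, completed_per_day, start, end):
    """Remaining Test Cases still to be written per day, plus the ideal
    straight-line burndown for comparison on the chart."""
    days = (end - start).days + 1
    remaining, ideal, done = [], [], 0
    for i in range(days):
        # Days with no delivery recorded contribute zero completions
        done += completed_per_day.get(start + timedelta(days=i), 0)
        remaining.append(total_planned - done)
        ideal.append(round(total_planned * (1 - i / (days - 1)), 1))
    return remaining, ideal

# 10 Test Cases planned over a five-day window (illustrative figures)
start, end = date(2024, 3, 4), date(2024, 3, 8)
actual = {date(2024, 3, 4): 2, date(2024, 3, 5): 3, date(2024, 3, 7): 4}
remaining, ideal = burndown_series(10, actual, start, end)
print(remaining)  # [8, 5, 5, 1, 1]
print(ideal)      # [10.0, 7.5, 5.0, 2.5, 0.0]
```

Where the actual remaining line sits above the ideal line, delivery is behind the rate needed to finish by the phase end date.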

2) Number of Test Scenarios Identified v Functional Areas

  • Graph/Chart Type: Pie Chart

Example Pie Chart from the Dashboard

  • Data Acquisition: Daily Update
  • Refresh Frequency: Snapshot
  • Author: Test Manager, Test Analyst
  • Key Stakeholders: Test Manager, Development Manager, Product Manager
  • Example KPIs:

Number of Test Cases per Functional Area per release

Increase in test coverage across function points per area per release

  • Description and Purpose

This chart is the first that can be created after the Test Analysts have completed analysis of testing needs for a release and issued the Test Breakdown documents.

It informs the Test Manager about the testing complexity for the areas of functionality and of the coverage per area. The chart indicates where the greatest effort is most likely to be expended during Test Case authoring and execution.

From this chart the Test Manager is also able to consider other questions, such as: Why does one area of functionality have more Test Cases than others? Do any high risk areas appear not to have enough Test Cases written for them? Can the team manually execute all Test Cases or is support needed? Can those delivering automation of the regression set manage the higher volumes of Test Cases?
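The pie slices for this chart are simple per-area tallies of the Test Breakdown entries. A minimal sketch, with hypothetical scenario identifiers and area names chosen only for illustration:

```python
from collections import Counter

def area_distribution(scenarios):
    """Count Test Scenarios per functional area and express each count
    as a share of the whole, i.e. the slices of the pie chart."""
    counts = Counter(area for _, area in scenarios)
    total = sum(counts.values())
    return {area: (n, round(100 * n / total, 1)) for area, n in counts.items()}

# Hypothetical Test Breakdown entries: (scenario id, functional area)
scenarios = [("TS-01", "Login"), ("TS-02", "Login"), ("TS-03", "Search"),
             ("TS-04", "Checkout"), ("TS-05", "Checkout"), ("TS-06", "Checkout")]
print(area_distribution(scenarios))
# {'Login': (2, 33.3), 'Search': (1, 16.7), 'Checkout': (3, 50.0)}
```

An unusually large or small slice is the prompt to ask the coverage and effort questions above.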

3) Total Time Spent on Preparation v Estimated Time

  • Graph/Chart Type: Line Graph

Example Line Graph from the Dashboard

  • Data Acquisition: Daily Update
  • Refresh Frequency: Snapshot
  • Author: Test Manager, Test Analyst
  • Key Stakeholders: Test Manager, Project Manager
  • Example KPIs:

Variance for test analysis duration between planned and actual per period

  • Description and Purpose

Using this chart the Test Manager and Project Manager can quickly see if the assigned staff are spending the time expected on project tasks.

This chart can be used in conjunction with Chart 1 to help answer why any reduction in delivery may be occurring, beyond factors such as unexpected complexity slowing analysis.

Any variance from planned time will tell the management team that either staff are perhaps being drawn onto other tasks or are not available through absenteeism.

The variance that is recorded can then be used to inform more accurate estimation to support better project planning. The effects of Corrective Actions from Root Cause Analysis work can also be observed and reflected in the KPI.

4) Number of Test Cases Executed v Test Cases Planned

  • Graph/Chart Type: Burndown Chart

Example Burndown Chart from the Dashboard

  • Data Acquisition: Daily Update
  • Refresh Frequency: Snapshot
  • Author: Test Manager, Test Analyst
  • Key Stakeholders: Senior Management, Project Manager, Test Manager
  • Example KPIs:

Number of days over or under projected ‘Test Execution Phase’ end date.

Average rate of execution over time for all Test Case types per Tester.

  • Description and Purpose

This metric provides at-a-glance visibility as to the rate of execution for the Test Cases. More importantly, it informs the Test Manager and Project Manager as to whether execution will be completed by the date required by the project.

This is one of three key charts for Senior and Project Management as it informs them if this major test phase is on track and so if the project is being kept on track. Key events are flagged by the tags on the chart and the information for these is available from the Test Manager or Tester.

5) Number of Test Cases Passed, Failed, Blocked and Not Run

  • Graph/Chart Type: Stacked Column

Example Stacked Column from the Dashboard

  • Data Acquisition: Daily Update or DB polling
  • Refresh Frequency: Snapshot or Real Time
  • Author: Test Manager, Test Analyst, Scripted Data Feed
  • Key Stakeholders: Test Manager, Project Manager, Development Manager
  • Example KPIs:

Number of Blocked or Failed Test Cases as a percent of the total run

  • Description and Purpose

This chart shows the total number of Test Cases to be executed and their status over time during the Test Execution Phase.

The Project Manager will want to know how complete testing is, which means how many Test Cases have passed. Even where metrics 3 and 4 make it appear the project is on track, it could be that excessive numbers of blocked or failed Test Cases will delay project completion.

The Development Manager will want to know how many failed or blocked Test Cases there are, as that reflects the quality of the development team's work and represents issues they will need to spend time investigating and fixing.
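Each column of the stacked chart is a tally of the four statuses on a given day, and the example KPI falls out of the same totals. A minimal sketch, with an invented day's results; the status labels match those in the metric title:

```python
from collections import Counter

STATUSES = ("Passed", "Failed", "Blocked", "Not Run")

def status_totals(results):
    """Tally Test Case results into the four statuses that make up one
    column of the stacked chart."""
    counts = Counter(results)
    return {s: counts.get(s, 0) for s in STATUSES}

def blocked_failed_kpi(totals):
    """Example KPI: Blocked or Failed as a percent of Test Cases run."""
    run = totals["Passed"] + totals["Failed"] + totals["Blocked"]
    return round(100 * (totals["Failed"] + totals["Blocked"]) / run, 1) if run else 0.0

# One day's recorded statuses (illustrative)
day = ["Passed"] * 12 + ["Failed"] * 3 + ["Blocked"] * 1 + ["Not Run"] * 4
totals = status_totals(day)
print(totals)                      # {'Passed': 12, 'Failed': 3, 'Blocked': 1, 'Not Run': 4}
print(blocked_failed_kpi(totals))  # 25.0
```

Note the KPI denominator is Test Cases actually run; Not Run cases are excluded so the figure reflects execution quality rather than progress.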

6) Total Number of Test Cases Passed by Functional Area

  • Graph/Chart Type: Pie Chart

Example Pie Chart from the Dashboard

  • Data Acquisition: Daily Update or DB polling
  • Refresh Frequency: Snapshot or Real Time
  • Author: Test Manager, Test Analyst, Scripted Data Feed
  • Key Stakeholders: Test Manager, Project Manager, Development Manager
  • Example KPIs:

None

  • Description and Purpose

Through metric 5 the project team can see how many Test Cases are passing out of the full set to be run; however, it does not inform them as to which areas of the item under test are providing these passes.

This metric provides the detail about which functional areas are passing and which are not. It allows the Test Manager and Development Manager to know where problems may lie in the software, or indicates where development and testing resources should be assigned.

When reviewing metrics 8 and 11 it is useful to also check this metric and metric 5 as bug totals found and recorded are dependent on Test Cases having been run.

7) Total Time Spent on Execution v Estimated Time

  • Graph/Chart Type: Line Graph

Example Line Graph from the Dashboard

  • Data Acquisition: Daily Update
  • Refresh Frequency: Snapshot
  • Author: Test Manager, Test Analyst
  • Key Stakeholders: Project Manager, Test Manager
  • Example KPIs:

Variance for test execution duration between planned and actual per period

  • Description and Purpose

Using this chart the Test Manager and Project Manager can quickly see if the assigned staff are spending the time expected on project tasks.

This chart can be used in conjunction with Chart 4 to help answer why any reduction in execution may be occurring.

Any variance from planned time will tell the management team that staff are perhaps being drawn onto other tasks preventing them focusing on the tasks being measured or are not available through absenteeism of some form.

The variance that is recorded can then be used to inform more accurate estimation of execution times to support better planning for future projects. The affects of Corrective Actions from Root Cause Analysis work can also be observed and reflected in the KPI.

8) Total Number of Critical and High Bugs Raised and Closed

  • Graph/Chart Type: Line Graph

Example Line Graph from the Dashboard

  • Data Acquisition: Daily Update or DB polling
  • Refresh Frequency: Snapshot or Real Time
  • Author: Test Manager, Test Analyst, Scripted Data Feed
  • Key Stakeholders: Senior Management, Project Manager, Test Manager
  • Example KPIs:

Rate of closure for critical and high severity bugs against the rate of discovery

  • Description and Purpose

This chart focuses on the most important bugs found during test execution and tracks the ongoing total of each over time, including a totals line for ease of calculation.

It is expected that Critical and High severity bugs will always be fixed before a release can happen, and so their continued presence will prevent the project finishing on time.

An insight this chart provides is whether there are more critical and high severity bugs being raised over time than development can fix and/or the test team can retest.

Ideally the Raised and Closed lines should run close to parallel with each other; where the Raised line moves away from the Closed line, more bugs are being found than can be closed.

For Senior Management closure of Critical and High bugs may be a release criterion hence the significance of this chart, especially when used with Chart 5.
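The Raised and Closed lines are cumulative sums of the daily counts, and the gap between them is the outstanding high-severity bug count. A minimal sketch with illustrative daily figures:

```python
from itertools import accumulate

def raised_v_closed(raised_per_day, closed_per_day):
    """Cumulative Raised and Closed lines for Critical and High bugs,
    plus the gap between them (high-severity bugs still open)."""
    raised = list(accumulate(raised_per_day))
    closed = list(accumulate(closed_per_day))
    gap = [r - c for r, c in zip(raised, closed)]
    return raised, closed, gap

# Daily counts of Critical/High bugs raised and closed (illustrative)
raised, closed, gap = raised_v_closed([3, 2, 4, 1, 0], [1, 2, 2, 3, 2])
print(raised)  # [3, 5, 9, 10, 10]
print(closed)  # [1, 3, 5, 8, 10]
print(gap)     # [2, 2, 4, 2, 0]
```

A gap that grows over time is the signal described above: discovery is outpacing fix and retest, and the release criterion is at risk.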

9) Days Elapsed from Bug Recording to Bug Fix Reporting for Critical & High

  • Graph/Chart Type: Stacked Column

Example Stacked Column from the Dashboard

  • Data Acquisition: Daily Update or DB polling
  • Refresh Frequency: Snapshot or Real Time
  • Author: Test Manager, Test Analyst, Scripted Data Feed
  • Key Stakeholders: Development Manager, Project Manager, Test Manager
  • Example KPIs:

Number of Days to deliver a fix to Critical and High bugs after notification

  • Description and Purpose

Chart 8 provides an overview of whether bugs being reported are being closed at a similar rate, but says nothing about whether the elapsed time is due to development or testing.

This chart tells the project team how long the Critical and High severity bugs have been awaiting a fix by the development team after being raised.

For the Development Manager it provides a valuable insight into how long these fixes take to deliver, not just the number that are delivered.

An increase in the number of old bugs as the project progresses suggests there is an issue preventing the team addressing them, putting the project release date at risk.
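The age of each waiting bug is simply the elapsed days between the recording date and the fix-report date, with unfixed bugs continuing to age. A minimal sketch; the bug ids and dates are invented for illustration:

```python
from datetime import date

def fix_wait_ages(bugs, today):
    """Days each Critical/High bug waited between being recorded and a
    fix being reported; bugs still awaiting a fix age up to today."""
    return {bug_id: ((fixed or today) - recorded).days
            for bug_id, recorded, fixed in bugs}

# (bug id, date recorded, date fix reported or None if still waiting)
bugs = [("BUG-101", date(2024, 3, 1), date(2024, 3, 4)),
        ("BUG-102", date(2024, 3, 2), None),
        ("BUG-103", date(2024, 3, 5), date(2024, 3, 6))]
print(fix_wait_ages(bugs, today=date(2024, 3, 8)))
# {'BUG-101': 3, 'BUG-102': 6, 'BUG-103': 1}
```

Bucketing these ages per period gives the columns of the stacked chart, with the oldest buckets the ones to watch.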

10) Total Number of Bugs Closed v Total Number of Bugs Re-Opened

  • Graph/Chart Type: Stacked Area Chart

Example Stacked Area Chart from the Dashboard

  • Data Acquisition: Daily Update or DB polling
  • Refresh Frequency: Snapshot or Real Time
  • Author: Test Manager, Test Analyst, Scripted Data Feed
  • Key Stakeholders: Test Manager, Development Manager
  • Example KPIs:

Total number of bugs flagged as fixed-closed that are later re-opened

  • Description and Purpose

For a number of reasons it is expected that a certain number of bugs that are closed will eventually be reopened, as issues reappear when further testing is performed.

Previous charts such as Chart 8 focus on bugs that are opened and closed with no distinction as to whether they are records for new or previously known issues.

This chart highlights the total number of previously closed bugs being reopened, and the rate at which this happens as the issues are rediscovered through new or regression testing. The Development Manager can then investigate further to understand why this is happening and effect a resolution.
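The KPI can be computed from simplified bug lifecycles, counting bugs that were closed and later re-opened against all closures. A minimal sketch, assuming a hypothetical event-set representation of each bug's history:

```python
def reopen_kpi(history):
    """Example KPI: bugs flagged fixed-closed that were later re-opened,
    as a count and as a percent of all closures."""
    closed = sum(1 for events in history.values() if "closed" in events)
    reopened = sum(1 for events in history.values()
                   if "closed" in events and "reopened" in events)
    return reopened, round(100 * reopened / closed, 1) if closed else 0.0

# Simplified bug lifecycles keyed by bug id (illustrative)
history = {"BUG-1": {"opened", "closed"},
           "BUG-2": {"opened", "closed", "reopened"},
           "BUG-3": {"opened"},
           "BUG-4": {"opened", "closed", "reopened"}}
print(reopen_kpi(history))  # (2, 66.7)
```

A rising reopen percentage is the trigger for the Development Manager's investigation into fix quality or environment issues.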

11) Bug Distribution Totals by Functional Area per Period

  • Graph/Chart Type: Stacked Area Chart

Example Stacked Area Chart from the Dashboard

  • Data Acquisition: Daily Update or DB polling
  • Refresh Frequency: Snapshot or Real Time
  • Author: Test Manager, Test Analyst, Scripted Data Feed
  • Key Stakeholders: Development Manager, Product Manager, Test Manager
  • Example KPIs:

Total volume of bugs found as a percent of the whole per functional area

  • Description and Purpose

This chart shows the total number of bugs found as testing progresses, and whether any functional areas have had either no bugs or a large number raised against them. Read in conjunction with Charts 2 and 6, it shows how many Test Cases are available, have been executed and passed, along with how many bugs that has resulted in.

If there are many bugs and few Test Cases passed, there is a greater risk of further bugs. Where no bugs are shown, testing may be starting late and the Test Manager may choose to shift the focus of effort.

Product Managers will also be keen to see this chart, as late testing may mean their part of the release is not tested thoroughly enough to be released at all. The Development Manager gains insight into possibly more complex areas of functionality to develop, and the Test Manager into the complexity or completeness of testing. This chart provides a way to identify which areas of functionality should be investigated first.

12) Days Elapsed from Bug Fix Reporting to Retest

  • Graph/Chart Type: Stacked Column

Example Stacked Column from the Dashboard

  • Data Acquisition: Daily Update or DB polling
  • Refresh Frequency: Snapshot or Real Time
  • Author: Test Manager, Test Analyst, Scripted Data Feed
  • Key Stakeholders: Test Manager, Project Manager, Development Manager
  • Example KPIs:

Number of days to complete retest of a fix provided for Critical and High bugs