A Question-driven Evaluation Framework

  1. Program/service clarification

To start developing an evaluation framework, a comprehensive description of the proposed or existing program/service needs to be constructed. This will assist in making subsequent decisions about the evaluation. Providing answers to the following questions will enable recording of relevant background information about the program/service:

  • What information is available about the program/service?
  • What are the aims and objectives of the program/service being evaluated?
  • How long has the program/service been established?
  • Where is the program/service placed within the organisation?
  • What are the core activities of the program/service?
  2. Purpose

Evaluations are conducted for different purposes (eg: to gain insight, to determine progress, to change practice, or to assess effect). Documenting the intended purpose of the evaluation enables negotiation of clear boundaries amongst stakeholders, and supports the achievement of the intended evaluation outcomes. Questions to assist this stage include:

  • What is the intended purpose of conducting the evaluation?
  • Who and what has precipitated the evaluation?
  3. Stakeholders

Identifying key stakeholders early in the planning process, and incorporating their needs and opinions, is critical to a successful evaluation. Stakeholders encompass people involved in the delivery of the program/service, those affected by the program/service, and those interested in the evaluation findings. Questions that assist in identifying each stakeholder group and exploring their role in the evaluation include:

  • Who are the stakeholders that need to be engaged in the evaluation?
  • How will stakeholders be involved in the evaluation?
  • What should be the inclusion/exclusion criteria for stakeholder engagement?
  4. Key evaluation questions

The construction of questions is fundamental to every evaluation. Question design will depend on the purpose of the evaluation and be influenced by the stage of the program/service's development. The following are examples of evaluation questions suited to different purposes and useful at different times in the life of a program/service:

Questions to assist in program/service design include:

  • Is there a need for the program/service?
  • What do we know about the problem that the program/service will address?
  • What is recognised as best practice in this area?
  • Is the program/service feasible and viable?
  • What are the intended outcomes of the program/service and how will it achieve them?

Questions to determine implementation progress and/or to change practice when a program/service is in a settled state might be:

  • What program/service components should be measured?
  • Is there consistency with the program/service plan?
  • How could delivery of the program/service change to make it more effective?
  • Are targets set for the program/service being reached?
  • Is the program/service reaching the target population?

Questions to consider the impact of a program/service that is well established and/or near its completion may include:

  • Have the stated objectives of the program/service been achieved?
  • How worthwhile and valuable has the program/service been? (effectiveness)
  • What are the unintended outcomes of the program/service?
  • Were participants satisfied with the program/service?
  • Has the program/service been cost-effective? (efficiency)
  5. Assembly of evidence

Gathering credible evidence to inform evaluation judgements, answer the key evaluation questions and support recommendations is achievable through a variety of methods. Different approaches work in different circumstances, and maintaining rigour in data collection is essential to support utilisation of the findings. Resources, costs and timelines, discussed further on, also influence how much data collection is achievable for the evaluation. Questions to consider include:

  • What qualitative and quantitative data should be collected?
  • When should data be collected (baseline, ongoing, pre-post, different time-intervals)?
  • What is a sufficient number of participants to enable credible evaluation judgement and recommendations?
  • What data collection instruments will be used?
  • Who will collect, analyse and interpret the data?
  • How will the data be analysed to address the key evaluation questions?
  6. Budget and resources

Early identification of human and financial inputs available to conduct the evaluation is also important. This information influences the scope of the evaluation, including the approaches used and the type of evidence collected. Early evaluation planning also enables funding proposals to contain detailed budgets and realistic timelines for conducting comprehensive evaluations. Questions that seek clarity about budget and resources include:

  • What is an anticipated cost for each component/activity of the evaluation (eg: stakeholder consultations, tool development, data collection and analysis, and reporting)?
  • What funding is available within the organisation to support the evaluation?
  • Has funding from external sources been explored to conduct the evaluation?
  • Who will conduct the evaluation (eg: staff within the organisation, external evaluator or a collaborative arrangement)?
  7. Timelines

Establishing timelines for the development of an evaluation framework and each component/activity of the evaluation assists with resource allocation and identification of key milestones. Timelines that take into consideration the planning, implementation, monitoring, data collection, analysis, reporting, and dissemination aspects for each component/activity are more likely to be achievable.

Developing a detailed timeframe enables decision-makers to consider realistic approaches, reach, and outcomes for the evaluation. This is particularly important if financial and reporting imperatives constrain the evaluation. Questions to establish timelines incorporate several of those previously identified in the stakeholders, assembly of evidence, and budget and resources sections of this framework. Others to consider include:

  • When would the evaluation commence?
  • When do the evaluation findings need to be available to be timely and influential?
  8. Limitations

Limitations may relate to resource and timeline constraints or to data collection deficiencies (eg: poor question design or low response rates to surveys), which can influence the size and scale of the evaluation. Early identification of possible barriers or gatekeepers enables alternative approaches or strategies to be put in place so that progress of the evaluation is not impeded.

Acknowledgement of any limitations in the design, implementation, data collection or analysis supports more credible findings and recommendations arising from the evaluation. Consider responses to the following questions:

  • What limitations exist that may affect the evaluation design and findings?
  • Who and what might obstruct the progress of the evaluation?
  • What strategies are in place to minimise the effect of the evaluation limitations?
  9. Ethical consideration

Ethical standards and principles need consideration when developing an evaluation framework to maintain rigour and professionalism in the evaluation. Questions to seek responses to include:

  • Do any ethical issues arise regarding the proposed evaluation approaches?
  • Are there any conflicts of interest?
  • What ethics approval processes exist within the organisation to support the proposed evaluation activities?
  10. Dissemination and utilisation

A final component of a comprehensive evaluation framework should be the dissemination and utilisation of the evaluation findings. The sharing of evaluation processes and findings enhances evaluation capacity across organisations and supports informed decision-making and appropriate actions. Seeking responses to the following types of questions will provide insight into how the evaluation may best assist decision-makers:

  • Who should be informed of the evaluation findings?
  • How will the evaluation findings be disseminated?
  • What decisions, if any, are the evaluation findings expected to influence?
  • How will we know if the evaluation findings and/or recommendations have informed decision-making?
