Managing for Outcomes: Workshops for Agencies October 2002

Impacts and Intervention Logic

Quick Reference Guide

Pulling it together: Intervention Logic

What is it? Intervention logic is a generic term used in managing for outcomes to refer to an evidence-based, systematic and reasoned description of the links between outcomes and outputs.

What’s it used for? It is used to derive the best mix of outputs from your agency’s “vital few” outcomes. It can provide a framework for performance evaluation.

When coupled with robust State Indicators (see Cheat Sheet #1) and Impact Measures, an Intervention Logic drives continuous improvement in many areas of organisational performance, including:

  1. Identifying the interventions that maximise agency outcomes
  2. Prioritising agency outputs and interventions
  3. Monitoring agency performance towards outcome goals
  4. Testing the assumptions and hypotheses behind the connections between interventions and results.

Where to start? Start with the precisely defined ‘vital few’ outcomes (see Cheat Sheet #1). An Intervention Logic should be developed for each of the ‘vital few’ outcomes that are the focus of an agency.

Commonly used approaches to specifying how activity drives results are:

1. Narrative models that describe in words how an agency’s activities enhance its outcome(s).

2. Flow Diagrams or “box and arrow” diagrams that link outcomes to activities.

3. Frameworks or matrices (eg Funnell “Programme Logic”) that systematically challenge management to consider the key barriers and drivers, risks, resource requirements, and performance measures for each outcome within the Intervention Logic “backbone”.

4. Quantitative models based on indicator and impact information derived from the agency’s experience and international best practice.

These forms of Intervention Logic will be enhanced further by testing specific logic statements (IF “x happens”, THEN “y will probably happen”) between immediate, intermediate and end outcomes.
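Each link in such a chain can be written down explicitly so that both the statement and the evidence behind it are open to testing. The sketch below (in Python, using an entirely hypothetical road-safety chain; the outputs, outcomes and evidence named are illustrative assumptions, not any agency’s actual logic) shows one simple way to record a chain in the IF-THEN form:

    from dataclasses import dataclass

    @dataclass
    class LogicLink:
        """One testable IF-THEN statement in an intervention logic chain."""
        if_condition: str   # the intervention or prior outcome ("x happens")
        then_outcome: str   # the expected result ("y will probably happen")
        evidence: str       # why the link is believed to hold

    # Entirely hypothetical road-safety chain: output -> immediate ->
    # intermediate -> end outcome. Names and evidence are illustrative only.
    chain = [
        LogicLink("we deliver more roadside speed enforcement (output)",
                  "average vehicle speeds fall (immediate outcome)",
                  "evaluation of a hypothetical enforcement pilot"),
        LogicLink("average vehicle speeds fall",
                  "crash severity declines (intermediate outcome)",
                  "international road-safety literature"),
        LogicLink("crash severity declines",
                  "road deaths and serious injuries decrease (end outcome)",
                  "national crash statistics"),
    ]

    # Print each link in the testable IF-THEN form.
    for link in chain:
        print(f"IF {link.if_condition},")
        print(f"  THEN {link.then_outcome} (evidence: {link.evidence})")

Recording the evidence alongside each link makes it obvious where the weakest, least-tested assumptions sit.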

Generally, only a summarised version of the Intervention Logic would be relevant to report externally.

There is no firm or compulsory description of the final form of an agency’s Intervention Logic; different models suit different audiences. However, the 2002 SoI round has shown that a flow diagram combined with a succinct narrative description of the Intervention Logic is a very powerful management communication tool for internal and external stakeholders.

The outcomes of individual agencies will contribute to the overall “intervention logic” of a sector. Within these overall sector outcome hierarchies, agencies must focus their attention on their services, outputs and interventions (the things the agency manages).

Remember: an Intervention Logic is not static. Agencies should test and improve their Intervention Logic as their understanding of the effectiveness of their agency’s interventions improves over time.

Impact Measures

Cheat Sheet #1 talked about ‘State Indicators’, which measure the prevailing condition experienced by the community (or a specific group, entity or geographic area) at a given point in time. They are the result of all influences and actions: the impacts and consequences of government action, and of actions and activities beyond the control of government.

Impact Measures, by contrast, aim to quantify the difference in outcome caused by an agency’s actions alone, and are therefore a powerful tool for improving an agency’s intervention mix and expenditure decisions. Impact Measures are required to assess the actual results of the agency’s interventions, ie the change attributable to the agency’s actions.

State Indicators focus on problems; Impact Measures validate solutions.
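To make attribution concrete: one commonly used approach (among several; this guide does not prescribe any particular method) is to compare the change in a State Indicator for the group receiving an intervention with the change for a similar group that did not receive it. The sketch below uses invented figures and assumes the comparison group shows what would have happened anyway; that assumption is the method’s main weakness and should itself be tested:

    # Minimal difference-in-differences sketch with invented figures.
    # Key assumption: the comparison group's change shows what the target
    # group would have experienced without the intervention.

    # State Indicator (eg % of a population experiencing a problem),
    # measured before and after the intervention period.
    target_before, target_after = 20.0, 14.0          # group receiving the intervention
    comparison_before, comparison_after = 19.0, 17.0  # similar group not receiving it

    change_in_target = target_after - target_before              # all influences
    change_in_comparison = comparison_after - comparison_before  # all influences except the intervention

    # Impact Measure: the change attributable to the agency's action alone.
    impact = change_in_target - change_in_comparison

    print(f"Observed change in target group:      {change_in_target:+.1f}")
    print(f"Change explained by other factors:    {change_in_comparison:+.1f}")
    print(f"Estimated impact of the intervention: {impact:+.1f} percentage points")

Here the State Indicator improved by 6.0 points, but only 4.0 of those points are attributable to the intervention; the rest would have happened anyway.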

Producing good Impact Measures is arguably the hardest task for an outcome-focused agency. But evidence of effectiveness is a core component of Intervention Logic (see Pathfinder Building Block 2). If an agency cannot demonstrate impact (whether directly or by citing the literature), it should still document the logic or rationale behind its belief that its chosen interventions are the right ones.

The most difficult decisions in Impact Measurement revolve around choosing the ‘best’ method. For management purposes, good methods must produce accurate, affordable and timely measures of impact that are strongly attributable to core interventions of an agency or partnership. Good methods:

  • Seek wherever possible to generate objective measures that can be validated and replicated by an external observer;
  • Generate performance information in an ethical and culturally sensitive manner;
  • Generate performance information at a cost lower than its value in decision-making;
  • Use the best quantitative or qualitative outcome data appropriate to the Impact Measure; and
  • Allow measurement errors to be factored into decision-making (see the sketch below).
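As a minimal illustration of the last point in that list, the sketch below turns an impact estimate and its measurement error into a decision. The figures, the 95% interval convention and the decision rule are assumptions of the sketch, not requirements of this guide:

    # Minimal sketch: factoring measurement error into a decision.
    # The impact estimate, its standard error, the 95% convention and the
    # decision rule are all illustrative assumptions.
    impact_estimate = -4.0   # estimated impact, in percentage points
    standard_error = 2.5     # uncertainty in that estimate

    # Approximate 95% confidence interval (normal approximation).
    low = impact_estimate - 1.96 * standard_error
    high = impact_estimate + 1.96 * standard_error
    print(f"Estimated impact: {impact_estimate:+.1f} (95% interval {low:+.1f} to {high:+.1f})")

    # Decision rule that respects the error band: treat the intervention as
    # demonstrably effective only if the whole interval shows improvement
    # (here, a fall in the indicator, ie the interval lies below zero).
    if high < 0:
        print("Evidence of improvement is strong enough to act on.")
    else:
        print("Interval includes zero: gather more data before scaling up.")

In this invented case the interval spans zero, so the honest conclusion is “not yet demonstrated”, even though the point estimate looks favourable.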

Beware of “Indicator Hell”. Give priority to measuring the impact of outputs that:

  • Represent a higher performance risk to the agency and/or its clients;
  • Consume significant resources (in total or per intervention);
  • Are expected to have a big impact at modest cost; and
  • Are being piloted and have significant capacity for growth.

Remember, for agencies measuring impacts for the first time: don’t expect to make quick decisions. Delays of multiple years between designing an intervention and first producing Impact Measures are common.

Remember: “weighing a sheep does not make it fatter”. Impact measures, like state indicators and intervention logics, should be used to help with management decisions, not just for the sake of measurement. They must be used to change what we do, and enhance outcomes over time.

Questions about intervention logic, impact measures and risk that you could ask when working with your department

Developing/refining an Intervention Logic

Possible questions to ask / What to look for in their response
Have you identified intervention logics for all of your ‘vital few’ outcomes? / The department has a clear focus on its vital few outcomes and these cover a significant proportion of its dominant outputs.
What outputs could you deliver to contribute to these outcomes? / The department has genuinely considered alternatives to the current outputs.
How will each output you have identified achieve progress towards one or more of the vital few outcomes for your department? / Look for clarity that the links in the intervention logic have the form “If we do this, then we expect this to happen”, and not the form “First we do this and then we do that”, which is a process description rather than an intervention logic.
Which of these possible outputs will you actually deliver? / The department has identified the most cost-effective outputs on the basis of robust evidence. This could include cost/utility analysis, ie looking at what is achievable given resource/capability constraints rather than simply what is desirable.
The department has been systematic in identifying the capability and resource implications of its preferred intervention approach.
Its Minister supports the planned outputs.
What else affects the outcomes that you are seeking to contribute to? / The department has identified factors outside its control that affect the desired outcomes, and can identify (where possible) the entities that control those factors. This may be an opportunity to prevent “reinventing the wheel” by investigating how those other agencies define the outcome, define state indicators, and measure their impact on the outcome.
What uncertainties exist in your outcomes and output planning? / The department has identified the assumptions and gaps in its intervention logic that require further testing.
What unintended outcomes could or do occur from your outputs? / Evidence that the department understands the complexities of its business area. Remember that unintended outcomes can enhance the desired outcome or detract from it. As well as identifying unintended consequences, check that the agency has considered possible incentive effects. A ‘no surprises’ approach is the best one.
How will you improve your intervention logic over the next year or more? / The department has a clear plan to test and refine its logic. There is evidence of inquiry and reflection in the intervention logic process, and the department is open to new data and information from external sources as well.

Developing impact measures

How will you know that you’ve made a difference to the outcomes you’ve identified? / The department has identified how it intends to measure the impact of its interventions separate from the effect of all other factors.
Is the impact measure well-defined? Have key terms been sufficiently tightly defined? / The department is clear about what it is intending to measure and for whom (or for what group/entity). Ambiguity will reduce the usefulness of the data and the confidence that the department can have in making decisions based upon the impact measure.
Are you able to get this information? If you don’t currently have the information you need, what could you do so that you have it in the future? / Your department has checked the availability and cost of data.
Your department recognises information gaps, and has plans for obtaining the data it needs but doesn’t yet have.
How will you use this information in your decision-making? / Your department is not simply intending to measure for measurement’s sake but is clear about how it will use this information to make decisions.
Your department recognises any data and measurement limitations in demonstrating effectiveness.
What is the cost of collecting the information? What will be the benefits? / The department has considered the costs of developing impact measures and has carefully weighed these against the benefits it may gain.
How robust is your information? What development is needed to improve it? / Your department has clear data collection, verification and reporting systems and is producing statistically robust information.
Who else might be collecting information that you need? / Your department has thoroughly investigated other sources (international and local) before considering new data requirements.
Who else might need the information that you are collecting? / Your department has considered who else might want/need the same information (e.g. population agencies) and has consulted them to ensure information can meet all needs (e.g. be disaggregated).
