Stage-Based Measures of Implementation Components
October 2010
National Implementation Research Network
Dean Fixsen, Karen Blase, Sandra Naoom, & Melissa Van Dyke
Frank Porter Graham Child Development Institute
University of North Carolina at Chapel Hill
With the identification of theoretical frameworks resulting from a synthesis of the implementation evaluation literature, there has been a need for measures of the implementation components to assess implementation progress and to test the hypothesized relationships among the components. Research has not yet produced reliable indicators of the implementation components identified in the synthesis.
Since the beginnings of the field, the difficulties inherent in implementation have "discouraged detailed study of the process of implementation. The problems of implementation are overwhelmingly complex and scholars have frequently been deterred by methodological considerations. ... a comprehensive analysis of implementation requires that attention be given to multiple actions over an extended period of time" (Van Meter & Van Horn, 1975, pp. 450-451; see a similar discussion nearly three decades later by Greenhalgh, Robert, MacFarlane, Bate, & Kyriakidou, 2004). Adding to this complexity is the need to simultaneously and practically measure a variety of variables over time, especially when the variables under consideration are not well researched. Recent reviews of the field (Ellis, Robinson, Ciliska, Armour, Raina, Brouwers, et al., 2003; Greenhalgh et al., 2004) have concluded that the wide variation in methodology, measures, and use of terminology across studies limits interpretation and prevents meta-analyses of dissemination, diffusion, and implementation studies.
Recent attempts to analyze components of implementation have used 1) very general measures (e.g. Landenberger & Lipsey, 2005; Mihalic & Irwin, 2003) that do not specifically address core implementation components, 2) measures specific to a given innovation (e.g. Olds, Hill, O'Brien, Racine, & Moritz, 2003; Schoenwald, Sheidow, & Letourneau, 2004) that may lack generality across programs, or 3) measures that only indirectly assess the influences of some of the core implementation components (e.g. Klein, Conn, Smith, Speer, & Sorra, 2001; Panzano et al., 2004).
The following assessments are specific to “best practices” extracted from: 1) the literature, 2) interactions with purveyors who are successfully implementing evidence-based programs on a national scale, 3) in-depth interviews with 64 evidence-based program developers, 4) meta-analyses of the literature on leadership, and 5) analyses of leadership in education (Blase, Fixsen, Naoom, & Wallace, 2005; Blase, Naoom, Wallace, & Fixsen, in preparation; Fixsen, Naoom, Blase, Friedman, & Wallace, 2005; Heifetz & Laurie, 1997; Kaiser, Hogan, & Craig, 2008; Rhim, Kowal, Hassel, & Hassel, 2007).
For more information on the frameworks for Implementation Drivers and Implementation Stages developed by the National Implementation Research Network, see the NIRN website. The synthesis of the implementation evaluation literature can also be downloaded from the NIRN website.
You have our permission to use these measures in any non-commercial way to advance the science and practice of implementation, organization change, and system transformation. Please let us know how you are using the measures and what you find so we can all learn together. As you use these measures, we encourage you to conduct cognitive interviews with key informants to refine the wording of the items and help ensure each item taps the desired aspect of each implementation component.
We ask that you let us know how you use these items so we can use your experience and data to improve and expand the survey. Please respond to Dean Fixsen (contact information below). Thank you.
Dean L. Fixsen, Ph.D.
Senior Scientist
FPG Child Development Institute
CB 8040
University of North Carolina at Chapel Hill
Chapel Hill, NC 27599-8040
Cell # 727-409-1931
Reception 919-962-2001
Fax 919-966-7463
References
Blase, K. A., Fixsen, D. L., Naoom, S. F., & Wallace, F. (2005). Operationalizing implementation: Strategies and methods. Tampa, FL: University of South Florida, Louis de la Parte Florida Mental Health Institute.
Ellis, P., Robinson, P., Ciliska, D., Armour, T., Raina, P., Brouwers, M., et al. (2003). Diffusion and dissemination of evidence-based cancer control interventions. Evidence Report/Technology Assessment No. 79 (Prepared by Oregon Health and Science University under Contract No. 290-97-0017). AHRQ Publication No. 03-E033. Rockville, MD: Agency for Healthcare Research and Quality.
Fixsen, D. L., Naoom, S. F., Blase, K. A., Friedman, R. M., & Wallace, F. (2005). Implementation Research: A synthesis of the literature. Tampa, FL: University of South Florida, Louis de la Parte Florida Mental Health Institute, The National Implementation Research Network (FMHI Publication #231).
Greenhalgh, T., Robert, G., MacFarlane, F., Bate, P., & Kyriakidou, O. (2004). Diffusion of innovations in service organizations: Systematic review and recommendations. The Milbank Quarterly, 82(4), 581-629.
Heifetz, R. A., & Laurie, D. L. (1997). The work of leadership. Harvard Business Review, 75(1), 124-134.
Kaiser, R. B., Hogan, R., & Craig, S. B. (2008). Leadership and the fate of organizations. American Psychologist, 63(2), 96-110.
Klein, K. J., & Sorra, J. S. (1996). The challenge of innovation implementation. Academy of Management Review, 21(4), 1055-1080.
Klein, K. J., Conn, B., Smith, A., Speer, D. B., & Sorra, J. (2001). Implementing computerized technology: An organizational analysis. Journal of Applied Psychology, 86(5), 811-824.
Landenberger, N. A., & Lipsey, M. W. (2005). The positive effects of cognitive-behavioral programs for offenders: A meta-analysis of factors associated with effective treatment. Journal of Experimental Criminology, 1(4), 451-476.
Mihalic, S., & Irwin, K. (2003). Blueprints for violence prevention: From research to real-world settings - factors influencing the successful replication of model programs. Youth Violence and Juvenile Justice, 1(4), 307-329.
Olds, D. L., Hill, P. L., O'Brien, R., Racine, D., & Moritz, P. (2003). Taking preventive intervention to scale: The nurse-family partnership. Cognitive and Behavioral Practice, 10, 278-290.
Panzano, P. C., & Roth, D. (2006). The decision to adopt evidence-based and other innovative mental health practices: Risky business? Psychiatric Services, 57(8), 1153-1161.
Panzano, P. C., Seffrin, B., Chaney-Jones, S., Roth, D., Crane-Ross, D., Massatti, R., et al. (2004). The innovation diffusion and adoption research project (IDARP). In D. Roth & W. Lutz (Eds.), New research in mental health (Vol. 16). Columbus, OH: The Ohio Department of Mental Health Office of Program Evaluation and Research.
Rhim, L. M., Kowal, J. M., Hassel, B. C., & Hassel, E. A. (2007). School turnarounds: A review of the cross-sector evidence on dramatic organizational improvement. Lincoln, IL: Public Impact, Academic Development Institute.
Schoenwald, S. K., Sheidow, A. J., & Letourneau, E. J. (2004). Toward effective quality assurance in evidence-based practice: Links between expert consultation, therapist fidelity, and child outcomes. Journal of Clinical Child and Adolescent Psychology, 33(1), 94-104.
Van Meter, D. S., & Van Horn, C. E. (1975). The policy implementation process: A conceptual framework. Administration & Society, 6, 445-488.
Tracking Implementation Progress
Many organizations and funders have come to understand that Full Implementation can be reached in 2 – 4 years with support from a competent Implementation Team (skillful implementation efforts focused on multiple evidence-based programs or other innovations) or Purveyor (skillful implementation efforts focused on one evidence-based program). These organizations and funders also realize that implementation is an active process with simultaneous work going on at many levels to help assure full and effective uses of the evidence-based program. The following “Implementation Quotient” was developed to track implementation progress over many years and to track the return on investing in implementation capacity.
[NOTE: The use of “met fidelity criteria” in the following assumes that a good measure of performance/ fidelity assessment has been developed for the evidence-based program. That is, performance/ fidelity assessment scores are highly correlated with intended longer-term client/ consumer outcomes.]
Implementation Quotient
- Determine the number of practitioner positions allocated to the innovation in the organization (Allocated Position N = ___).
- Assign a score to each allocated practitioner position:
  - Practitioner position vacant = 0
  - Practitioner in position, untrained = 1
  - Practitioner completed initial training = 2
  - Practitioner trained + receives weekly coaching = 3
  - Practitioner met fidelity criteria this month = 4
  - Practitioner met fidelity criteria 10 of past 12 months = 5
- Sum the scores for all practitioner positions (Practitioner Position Sum = ___).
- Divide the Practitioner Position Sum by the Allocated Position N.
- The resulting ratio is the “Implementation Quotient” for that innovation in that organization.
- Plot this ratio and post it to track implementation progress.
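For evaluators who track these scores electronically, the calculation is easy to automate. The following is a minimal sketch in Python (an illustration only, not part of the original instrument); the example scores are hypothetical.

```python
# Minimal illustrative sketch (not part of the original instrument) for
# computing the Implementation Quotient from per-position scores (0-5).

def implementation_quotient(position_scores):
    """Return the Implementation Quotient for one innovation in one organization.

    position_scores: one score (0-5) for every allocated practitioner position,
    including vacant positions (which score 0).
    """
    if not position_scores:
        raise ValueError("At least one allocated position is required.")
    return sum(position_scores) / len(position_scores)

# Example: 6 allocated positions -- one vacant (0), one untrained (1),
# two trained and receiving weekly coaching (3), two meeting fidelity this month (4).
scores = [0, 1, 3, 3, 4, 4]
print(round(implementation_quotient(scores), 2))  # 2.5
```

Plotting this value at regular intervals (e.g. every month or every six months, as in the figure described below) makes implementation progress visible over time.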
For one large provider organization, the figure below shows Implementation Quotient data over 10 years (20 six-month blocks of time = 120 months). The graph starts with the commencement of the Initial Implementation Stage. In this organization, Full Implementation (50% of the practitioners meeting fidelity criteria) was reached for the first time in month 54 (block 9).
Stage-based Assessments of Implementation
As the original (2008) set of items was used, it became apparent that respondents new to innovations were unable to answer the questions with confidence. These experiences led to the following stage-based assessments of implementation.
Stages of Implementation
The research and practice reviews conducted by the National Implementation Research Network have identified four Stages of Implementation:
- Exploration Stage
- Installation Stage
- Initial Implementation Stage
- Full Implementation Stage
In principle, the Stages are fairly straightforward. In practice, they are not linear, and that complexity adds to the measurement challenges: the Stages are fluid, with provider organizations and human service systems moving into and out of a Stage over and over. Even skilled Implementation Teams and Purveyor groups find that it takes about 3 or 4 years to get an organization to the point of Full Implementation (as defined here), and even with a lot of help, not all agencies that attempt implementation of an evidence-based program or other innovation ever meet the criteria for Full Implementation.
Note that the proposed criteria for Stages apply to a single, specified evidence-based program or innovation. A single organization might be in the Full Implementation Stage with one innovation and in the Exploration Stage with another. Stages and assessments of implementation are specific to each innovation and should not be viewed as characteristics of organizations.
It may require interviews with several key informants to find the information needed to determine the Stage of Implementation for the identified evidence-based program or other innovation within an organization or human service system.
Exploration:
An organization/ group is 1) ACTIVELY CONSIDERING the use of an EBP or other innovation but has not yet decided to actually begin using one. The group may be assessing needs, getting buy-in, finding champions, contacting potential purveyors, or doing any number of things -- but 2) THEY HAVE NOT DECIDED to proceed. Once the decision is reached to use a particular innovation, the Exploration Stage ends (of course, in reality, it is not always such a neat and tidy conclusion; the Exploration Stage typically is re-visited repeatedly in the first year or so).
Installation:
An organization/ group 1) HAS DECIDED to use a particular innovation and 2) IS ACTIVELY WORKING to get things set up to use it. The group may be writing new recruiting ads and job descriptions, setting up new pay scales, re-organizing a unit to do the new work, contracting with a purveyor, working on referral sources, working on funding sources, purchasing equipment, finding space, hiring trainers and coaches, or doing any of a number of things -- but 3) THE FIRST PRACTITIONER HAS NOT begun working with the first client/consumer using the new EBP/ innovation.
It seems that many agencies get into this Stage and find they do not have the resources/ desire to continue. These agencies might go back to the Exploration Stage (provided the 2 criteria for that Stage are being met) or may abandon the process altogether.
Initial Implementation:
The timer on this Stage BEGINS the day 1) the first NEWLY TRAINED PRACTITIONER 2) attempts to USE THE NEW EBP/ INNOVATION 3) WITH A REAL CLIENT/ CONSUMER. There may be only one practitioner and only one client/ consumer, but that is enough to say the Initial Implementation Stage has begun and is in process. If at a later time an organization has no trained practitioner working with a client/consumer, the organization might be said to be back in the Installation Stage (provided the 3 criteria for that Stage are being met at that time).
It seems that many agencies get into the Initial Implementation Stage and limp along for a while without establishing/ building in/ improving their capacity to do the implementation work associated with the evidence-based program (e.g. full use of the Implementation Drivers including practitioner selection, training, coaching, and performance/ fidelity assessments; facilitative administration, decision support data systems, and systems interventions; leadership). Consequently, most of these efforts are not very successful in producing consumer outcomes and rarely are sustained (they come and go as champions and interested staff come and go).
Full Implementation:
Determine the number of POSITIONS the organization has allocated as practitioners for the innovation. Determine the number of practitioners that CURRENTLY meet all of the performance/ fidelity assessment criteria. FULL IMPLEMENTATION OCCURS WHEN AT LEAST 50% OF THE ALLOCATED POSITIONS ARE FILLED WITH PRACTITIONERS WHO CURRENTLY MEET THE FIDELITY CRITERIA.
If there is no fidelity assessment in place, Full Implementation cannot be reached. If there are 10 allocated positions and 6 vacancies, Full Implementation cannot be reached. Note that Full Implementation lasts only as long as the 50% criterion is met -- this may be only for one day initially since meeting fidelity once is no guarantee that it will ever occur again for a given practitioner. In addition, practitioners come and go. When a high fidelity practitioner is replaced with a newly hired/ trained practitioner it takes a while and a lot of coaching to generate high fidelity performance with the new practitioner. When there is a very demanding innovation (e.g. residential treatment; intensive home based treatment) and fairly high practitioner turnover (e.g. average practitioner tenure less than 3 years), an organization may take a long time to reach Full Implementation.
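The decision rules in the four Stage descriptions above can be summarized as a simple classification. The sketch below is an illustration only; the argument names are hypothetical, and in practice the answers come from interviews with several key informants rather than from automated data.

```python
# Illustrative sketch (not part of the original instrument) of the Stage
# criteria described above, for a single innovation in a single organization.

def stage_of_implementation(actively_considering, decided_to_use,
                            practitioner_serving_clients,
                            allocated_positions, practitioners_meeting_fidelity,
                            fidelity_assessment_in_place=True):
    """Classify the Stage of Implementation for one innovation in one organization."""
    # Full Implementation: at least 50% of allocated positions are filled with
    # practitioners who currently meet fidelity criteria (requires a fidelity measure).
    if (fidelity_assessment_in_place and allocated_positions > 0
            and practitioners_meeting_fidelity >= 0.5 * allocated_positions):
        return "Full Implementation"
    # Initial Implementation: at least one trained practitioner is using the
    # EBP/innovation with a real client/consumer.
    if practitioner_serving_clients:
        return "Initial Implementation"
    # Installation: decided to use the innovation and actively preparing,
    # but no practitioner has begun working with clients yet.
    if decided_to_use:
        return "Installation"
    # Exploration: actively considering, but no decision to proceed yet.
    if actively_considering:
        return "Exploration"
    return "Not yet in any Stage for this innovation"

# Example: 10 allocated positions, 5 practitioners currently meeting fidelity.
print(stage_of_implementation(True, True, True, 10, 5))  # Full Implementation
```

Note that the order of the checks matters: an organization is classified by the most advanced Stage whose criteria it currently meets, and it can move back down the list when criteria stop being met (e.g. when no trained practitioner is currently serving a client).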
Exploration and Installation Stage Assessments
To use the stage-based assessments of implementation, the evaluator must first determine the stage of implementation for the innovation in an organization. There are no fixed rules to follow, so evaluators must use their good judgment.
We have divided the implementation assessments into two groups. One group generally is more appropriate for an organization that is considering the use of, or attempting for the first time to use, an evidence-based program or other innovation (Exploration and Installation Stage Assessments).
Early on, implementation capacity does not exist. Thus, asking questions and encouraging planning about some key components is within respondents' means and is less daunting for them (gosh, look at all this stuff we have to do!).
Initial and Full Implementation Stage Assessments
The second group is more appropriate for an organization that has initiated the use of an evidence-based program or other innovation and is attempting to improve the quality and expand the use of the innovation in the organization (Initial and Full Implementation Stage Assessments).
Later on, practitioners, supervisors, and managers are more familiar with implementation methods and can rate not only the presence but the strength of the components. Thus, Likert scales that include more of the implementation best practices are included in those assessments.
Of course, if an organization already has implemented other evidence-based programs successfully and now is doing Exploration and Installation Stage-related work with another, the Initial and Full Implementation Stage Assessments might be appropriate. Use good judgment. There are no fixed rules to guide evaluators.
Exploration and Installation Stage Assessments of Implementation
As organizations or community groups begin considering evidence-based programs or other innovations, they also need to begin considering implementation supports for those innovations. Implementation supports often are ignored early on, simply because their importance is not common knowledge. Yet full, effective, and sustained uses of innovations on a socially significant scale depend upon taking the right first steps for implementation and for the innovation.
The templates that follow provide prompts for thinking about and planning for implementation supports for a given evidence-based program or other innovation.
Competency Implementation Drivers Analysis and Discussion Template
Selected Staff Cohort (e.g. Practitioners, Staff, Coaches, Leadership Teams, Administrators, Implementation Team):
Competency Implementation Drivers (complete each column of the template for each Driver):
- Staff Selection
- Staff Training
- Staff Coaching
- Staff Performance Evaluation (Fidelity)

For each Competency Driver and staff cohort, record:
- How important is this Driver in promoting fidelity and/or positive outcomes? (A = High, B = Medium, C = Low)
- Are resources and materials available from others for this Driver for this staff cohort? (yes/no/DK)
- Who is/will be responsible for ensuring functional use of the Driver (e.g. timely use, quality, sustainability, integration)?
- Is there a measure of Driver effectiveness? (yes/no/DK)
- Given the current state of development of this Driver, how much work would be required to significantly improve it?* (A = High, B = Medium, C = Low)
- In looking across the Competency Drivers, how well integrated is this Driver? (The greater the number of responsible entities, the greater the integration challenge and the greater the threat to compensatory benefits.) (A = Well integrated, B = Moderate, C = Poor)
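Where completed templates are collated across sites or staff cohorts, each Driver row can be captured as a structured record for later analysis. The following is a minimal sketch under that assumption; the class and field names are hypothetical and simply mirror the columns above.

```python
# Illustrative sketch for recording one row of the template as structured data.
# Field names and rating codes mirror the template above; the class itself is
# an assumption, not part of the original instrument.
from dataclasses import dataclass

@dataclass
class DriverRow:
    staff_cohort: str           # e.g. "Practitioners", "Coaches", "Implementation Team"
    driver: str                 # e.g. "Staff Selection", "Staff Coaching"
    importance: str             # A = High, B = Medium, C = Low
    resources_available: str    # yes / no / DK
    responsible_party: str      # who ensures functional use of the Driver
    effectiveness_measure: str  # yes / no / DK
    work_to_improve: str        # A = High, B = Medium, C = Low
    integration: str            # A = Well integrated, B = Moderate, C = Poor

# Example row (hypothetical values):
row = DriverRow("Practitioners", "Staff Coaching", "A", "DK",
                "Clinical supervisor", "no", "B", "B")
```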
Organizational Implementation Drivers Analysis and Discussion Template