Implementation Science Framework

National Implementation Resource Network

http://nirn.fpg.unc.edu/learn-implementation/implementation-drivers/training

Implementation Drivers are the engine of change (Fixsen et al., 2005). As with the Stages, Drivers are dynamic and interact with one another to produce consistent use of innovations and reliable outcomes for students and others.

Three Types of Drivers

Implementation Drivers have been categorized as Competency, Organization, and Leadership supports. Effective innovations are, by definition, new ways of work. To develop competency, these new ways of work must be taught and learned through training and coaching with practitioners (teachers, district staff, implementation team members) who have been selected to be the first to use the innovation (mutual selection of individuals parallels the Exploration Stage for organizations). As coaches support practitioners in learning the innovation, and as performance (fidelity) assessments are used to monitor the progress of teaching and learning, organization and system facilitators and barriers are identified.

Organization supports are developed by facilitative administrators (superintendents, principals, non-teaching staff) who change organization practices and support systems interventions to establish a hospitable environment for the use of effective innovations and of effective implementation supports for practitioners. A decision support data system is an essential component for guiding the processes of establishing the innovation, the implementation supports for practitioners, and the assessments of immediate outcomes. For example, data may show low fidelity performance for a group of teachers coached by a given individual while other teachers readily meet performance criteria. These data might support a decision to focus on the quality of coaching rather than the quality of teaching.

Finally, implementation requires leadership that can help resolve adaptive issues (convening groups to identify problems, arriving at consensus on how to approach a solution, detecting progress toward resolution) and technical problems (setting goals, managing time and effort, solving problems of known dimensions) that arise in the course of initiating changes in the ways of work and managing change in organizations and systems.
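The fidelity-by-coach comparison described above can be sketched as a simple analysis. This is a hypothetical illustration only: the coach names, scores, and performance criterion below are invented, and a real decision support data system would draw on an organization's own fidelity measures.

```python
# Hypothetical sketch: flag coaches whose teachers' fidelity scores
# lag behind the performance criterion, so the discussion can focus
# on the quality of coaching rather than the quality of teaching.
# All names, scores, and the criterion are invented for illustration.
from collections import defaultdict
from statistics import mean

# (teacher, coach, fidelity score on a 0-100 scale) -- invented data
observations = [
    ("T1", "Coach A", 88), ("T2", "Coach A", 91), ("T3", "Coach A", 86),
    ("T4", "Coach B", 62), ("T5", "Coach B", 58), ("T6", "Coach B", 65),
]

CRITERION = 80  # assumed performance criterion

by_coach = defaultdict(list)
for _teacher, coach, score in observations:
    by_coach[coach].append(score)

for coach, scores in sorted(by_coach.items()):
    avg = mean(scores)
    status = "meets criterion" if avg >= CRITERION else "review coaching"
    print(f"{coach}: mean fidelity {avg:.1f} -> {status}")
```

A sketch like this simply groups fidelity scores by coach and compares each group's mean to the criterion; any real use would need defensible measures and larger samples before informing a decision.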

These interactive processes are integrated to maximize their influence on staff behavior and the organizational culture. The interactive implementation drivers also compensate for one another so that a weakness in one component can be overcome by strengths in other components. These core implementation components (implementation drivers) are shown in the figure below.

Performance Assessment

Performance assessment helps to assure successful and sustained implementation of programs and practices. Performance Assessment is designed to assess the use of the skills that are taught in training and reinforced and expanded in coaching processes. These are sometimes called practitioner fidelity assessments or practice profiles. Such assessments measure the vital new behaviors, often new teaching/interaction practices with students.

The most effective intervention will not produce positive effects if it is not implemented. Thus, assessments of performance are a critical component of implementation.

The results of performance assessments have many practical uses. Coaches can use the information to sharpen their professional development agendas with practitioners. Administrators can use the information to assess the quality of training and coaching. Implementation Teams can use the information as a guide for implementation at the practice and program development levels. Researchers can use the information as an outcome measure in some studies and as an independent variable in others.
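As a concrete, hypothetical illustration of such an assessment: a practice-profile fidelity score is often just the proportion of checklist items observed during a session. The checklist items, observations, and scoring rule in this sketch are invented and are not from any specific instrument.

```python
# Hypothetical sketch of scoring a practitioner fidelity assessment:
# each item on an invented practice-profile checklist is marked as
# observed (True) or not observed (False), and the fidelity score is
# the percentage of items observed.
checklist = {
    "states learning objective": True,
    "models the skill": True,
    "provides guided practice": False,
    "gives specific feedback": True,
    "checks for understanding": False,
}

def fidelity_score(items: dict) -> float:
    """Percentage of checklist items observed during the session."""
    return 100.0 * sum(items.values()) / len(items)

score = fidelity_score(checklist)
print(f"Fidelity: {score:.0f}%")  # 3 of 5 items observed -> 60%
```

Scores like this are only as good as the checklist behind them; real practice profiles define each item with enough behavioral specificity that two observers would score the same session the same way.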

Selection

Staff selection is an implementation driver, although it is not often discussed and is rarely evaluated in human service programs. Nevertheless, selection is a key ingredient of implementation at every level:

·  selection of practitioners,

·  selection of organization staff (trainers, coaches, evaluators, administrators), and

·  selection of staff for Implementation Teams.

Selection of staff is important to having effective practitioners, excellent trainers, effective coaches, skilled evaluators, facilitative administrators, or effective purveyors. Not everyone is suited to each role. People who are outgoing and decisive may make good practitioners or Implementation Team members. People who are methodical and comfortable making judgments based on specified criteria may make better evaluators. People who are more comfortable with public speaking and “performing” might make better trainers. With respect to given evidence-based practices or programs, the extent of knowledge and direct experience in the specific program or practice might be more critical for some positions than others.

Staff selection also represents the intersection with a variety of larger system variables. General workforce development issues, the overall economy, organizational financing, the demands of the evidence-based program in terms of time and skill, and so on impact the availability of staff for human service programs.

Training

The effective use of an innovation requires behavior change at the practitioner, supervisory, and administrative support levels. Training and coaching are the principal ways in which behavior change is brought about for carefully selected staff in the beginning stages of implementation and throughout the life of evidence-based practices and programs. Most skills needed by successful practitioners can be introduced in training but really are learned on the job with the help of a consultant/coach (e.g., craft information, engagement, treatment planning, teaching to concepts, clinical judgment).

The content of training will vary considerably depending upon the evidence-based practice or program, clinical practice guideline, or management strategy that is being implemented.

The methods of training seem to be less variable. There seem to be common approaches to imparting knowledge, skills, and abilities in programs to train practitioners (e.g., Bedlington, Booth, Fixsen, & Leavitt, 1996; Joyce & Showers, 2002; Schoenwald et al., 2000), trainers (e.g., Braukmann & Blase, 1979; Ogden et al., in press), coaches (e.g., Smart, Blase, et al., 1979; Joyce & Showers, 2003), fidelity evaluators (Davis, Warfel, Maloney, Blase, & Fixsen, 1979; Wineman et al., 1979), and administrators (Baron, Watson, Coughlin, Fixsen, & Phillips, 1979; Atherton, Mbekem, & Nyalusi, 1999).

The common approaches to training include providing information about the history, theory, philosophy, and rationales for program components and practices, conveyed in lecture and discussion formats. Lecture and discussion can produce knowledge acquisition and understanding. Skills and abilities related to carrying out the program components and practices can be demonstrated (live or on tape) and then practiced through behavior rehearsal, with feedback on the practice (Blase et al., 1984; Joyce & Showers, 2002; Kealey, Peterson, Gaul, & Dinh, 2000). Practice to criterion means that Day 1 training is over when Day 1 skills have been learned by each participant (Rob Horner, personal communication).

Coaching

Most skills needed by successful practitioners can be introduced in training but really are learned on the job with the help of a coach. Coaches not only expand the knowledge and skills taught in training, they also impart craft knowledge (e.g., engagement, ethics, managing work flow, clinical judgment).

Coaching needs to be work based, opportunistic, readily available, and reflective (e.g., debriefing discussions). Spouse (2001) described four main roles of a coach:

·  Supervision

·  Teaching while engaged in practice activities

·  Assessment and feedback

·  Provision of emotional support

After a few decades of research on training teachers, Joyce & Showers (2002) began to think of training and coaching as one continuous set of operations designed to produce actual changes in the classroom behavior of teachers. One without the other is insufficient. Behavior change is difficult for most people (for example, some people hire personal coaches to help them exercise more or change their eating behavior or stop smoking). With newly learned behavior there are several simultaneous problems that must be faced:

·  Newly-learned behavior is crude compared to performance by a master practitioner.

·  Newly-learned behavior is fragile and needs to be supported in the face of reactions from consumers and others in the service setting.

·  Newly-learned behavior is incomplete and will need to be shaped to be most functional in a service setting.

In addition to helping to establish new behavior in the clinical environment, emotional and personal support is another role for a coach (Spouse, 2001). In human services, practitioners are the intervention. Evidence-based practices and programs inform when and how practitioners interact with students or stakeholders, but it is the educator who delivers the intervention through his or her words and actions. In the transactional interplay between practitioner and consumer, each affects the other in complex ways.

Decision Support Data Systems

Decision Support Data Systems are sources of information used to help make good decisions internal to an organization. Effective organizations make use of a variety of measures to:

·  assess key aspects of the overall performance of the organization,

·  provide data to support decision making, and

·  assure continuing implementation of the evidence-based intervention and benefits to consumers over time.

All modern organizations have a financial data collection and reporting system that is regularly monitored internally and externally (e.g., through employment of professional financial managers and clerks in the organization, careful attention from the governing board, and annual audits by external experts). Effective organizations also have data collection and reporting systems for their treatment and management processes and outcomes. Decision support data systems are an important part of continuous quality improvement for interventions, implementation supports, and organization functioning (e.g., used as the "study" part of the never-ending plan-do-study-act cycle). Implementation Teams help organizations establish and evolve their data systems so information is immediately accessible and useful to practitioners, trainers, coaches, and managers for short-term and long-term planning and improvement at clinical and organizational levels.

Human service organizations and systems are dynamic, so there is ebb and flow to the relative contribution of each component to the overall outcomes (e.g., Panzano, Seffrin, Chaney-Jones, Roth, Crane-Ross, Massatti, et al., 2004). The decision support data system feedback loops appear to be critical to keeping an evidence-based program "on track" in the midst of a sea of change. If the feedback loops (staff performance evaluations and decision support data systems) indicate needed changes, the organization adjusts the integrated system to improve effectiveness and efficiency.
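The "study" step of the plan-do-study-act cycle mentioned above can be sketched as a simple comparison of fidelity data across cycles. The target, decision rules, and data in this sketch are invented; a real decision support data system would set these locally.

```python
# Hypothetical sketch of the "study" step in a plan-do-study-act cycle:
# compare this cycle's mean fidelity against the last cycle's and an
# assumed target, then recommend the next "act". Thresholds and data
# are invented for illustration.
from statistics import mean

TARGET = 80.0  # assumed fidelity target (0-100 scale)

def study(previous: list, current: list) -> str:
    """Recommend an action based on two cycles of fidelity scores."""
    prev_avg, cur_avg = mean(previous), mean(current)
    if cur_avg >= TARGET:
        return "act: standardize current supports"
    if cur_avg > prev_avg:
        return "act: continue adjustments, re-study next cycle"
    return "act: revise the plan (training/coaching supports)"

print(study([55, 60, 58], [68, 72, 70]))
```

The point of the sketch is the feedback loop itself: data from each cycle feed a decision rule, and the decision adjusts the integrated system before the next cycle, as the paragraph above describes.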

Facilitative Administration

The research and evaluation literature rarely addresses the impact of facilitative administrative supports on successful outcomes for consumers. However, the “craft” of program implementation (as described in national meetings of purveyors and implementers) makes clear the importance of administrative decisions and supports in the implementation process. What is meant by facilitative administrative supports? Facilitative administrative support is proactive, vigorous and enthusiastic attention by the administration to reduce implementation barriers and create an administratively hospitable environment for practitioners. In an organization that ‘hosts’ an evidence-based program or practice, facilitative administration includes internal policy analyses and decisions, procedural changes, funding allocations and a culture that is focused on what it takes to implement with fidelity and good outcomes.

One survey contrasted the views of practitioners who were successful or unsuccessful in implementing evidence-based practices and programs in organizations. Neither group felt the administration facilitated their use of the new practices. The successful group felt that the administration eliminated some barriers related to paperwork and uses of time, while the unsuccessful group felt "worn down" by an unsupportive administration. Thus, a facilitative administration regularly asks for feedback from all levels of the organization (360-degree feedback) with particular attention to the satisfaction of practitioners with the administration's actions and advocacy related to implementation of the evidence-based program or practice. Barriers are reduced to facilitate the direct service to teachers and students and to ensure that paperwork is both minimized and functional. In addition, resources and supports are created to ensure that the implementation drivers (e.g., selection, training, coaching) are fully developed, used, and improved over time. Such issues as practitioner workload, safety, remuneration, communication, and feedback are proactively addressed by the administration to the satisfaction of the practitioners and ultimately to the benefit of the teachers, students, and families receiving services.

Systems Intervention

Implementation takes place in a shifting ecology of agency, community, state and federal social, economic, cultural, political, and policy environments. Even the most effective evidence-based efforts can be overwhelmed by funding, policy and regulatory environments that are at odds with the service delivery expectations and requirements of the program or practice.

Fidelity, intervention outcomes, and sustainability are heavily influenced by the degree to which agency, community, state, and federal systems are supportively aligned and enabling with respect to implementation. Champions and persons with influence work together to build and sustain the culture, policies, regulatory practices, and funding mechanisms necessary for both the implementation drivers and the intervention practices to thrive. Systems intervention requires attending to multi-level alignment, maintaining leadership and focus, creating and staying connected to champions, intervening to change policies and funding contingencies, and remaining vigilant at local, state, and federal levels for both windows of opportunity and threats to fidelity and sustainability. Leadership and responsibility for this systems alignment function must be clearly articulated at each level, with an overall structure to support the communication within and among these levels.

Adaptive Leadership

Competent leaders are needed throughout an infrastructure for implementation. It is rare to find a description of change that does not point to leadership as an important contributor to success or failure. For decades, good leaders were known by their good results, but the critical skills were not well understood. How leadership contributes to success is now better understood, thanks to theoretical orientations based on complexity theory (Morgan & Ramirez, 1983; Stacey, 2002), frameworks for describing the salient features of leadership (Hall & Hord, 1987, 2011; Heifetz et al., 1997, 2009), and meta-analyses and syntheses of the literature (Kaiser, Hogan, & Craig, 2008; Rhim, Kowal, Hassel, & Hassel, 2007; Waters, Marzano, & McNulty, 2005).

Technical leadership might be thought of as good management. The leader is engaged, quick to recognize and respond to issues that arise, organizes groups to solve problems, and regularly produces desired results. In terms of complexity theory (Stacey, 2002), technical leaders work in the zone where there is substantial agreement about what needs to be done and reasonable certainty about how to do it. Adaptive leadership is required in the zone of complexity where there is little agreement and less certainty. The concept of adaptive leadership resonates with leaders who recognize the layers of complexity involved in any large-scale systems reform.