sowh-072015audio
Cyber Seminar Transcript
Date: 07/20/15
Series: Spotlight on Women’s Health
Session: Implementation Science 101
Presenter: Alison Hamilton
This is an unedited transcript of this session. As such, it may contain omissions or errors due to sound quality or misinterpretation. For clarification or verification of any points in the transcript, please refer to the audio version posted at
Molly: At this time, we are at the top of the hour so I’m very pleased to introduce our speaker today. We have Dr. Alison Hamilton with us. She is a Research Health Scientist and Director of the Qualitative Methods Group at the VA HSR&D Center for the Study of Healthcare Innovation, Implementation and Policy in VA Greater Los Angeles. At this time, Dr. Hamilton, are you ready to share your screen?
Dr. Hamilton: Yes, I’m ready.
Molly: Excellent.
Dr. Hamilton: Can you see it?
Molly: Yep.
Dr. Hamilton: Great…thanks. Let me know if there are any problems. Hi everyone. Good morning Pacific Coast time, good afternoon everyone else. Thank you so much for joining me today for Implementation Science 101.
I really can’t even start this without thanking some extremely critical people. Several of the names that you see here…many of these folks have presented pieces of what you’re about to see or have informed this extensively, so my thanks go out to all of them for content and/or inspiration and also to the Women’s Health Research Network for supporting this Cyberseminar Series.
We are going to start, as most of you know…we’re going to start the cyberseminar with a poll question. If you could just start by selecting the option that fits your situation best with regard to what you need the most help with at this point in your work in implementation science.
Molly: Thank you so much. As our attendees can see on your screen, we do have the first poll question. The question is: I need the most help with…and the answer options are selecting a study design, defining my implementation strategies, figuring out my model, designing my evaluation, and understanding what implementation science is all about. It looks like about 80% of our audience has voted thus far so that’s great…very helpful to know where to gear the talk. We’ve capped off at just about 80% with some pretty clear trends so I’m going to go ahead and close the poll and share those results. Alison, do you want to talk through those real quick or would you like me to?
Dr. Hamilton: I’m happy to do that since I can see them. Thank you, Molly. The clear winner here is understanding what implementation science is all about, and that is my goal today, so hopefully that number will go down by the end of the cyberseminar. Then we’ve got an even split around evaluation, study design and implementation strategies, with fewer needing more help with figuring out your model. That is extremely helpful for me. I’m going to be covering all those things today and we’ll come back to a similar poll at the end. Can you see my screen again, Molly?
Molly: We’re good to go…thank you.
Dr. Hamilton: Thanks everyone. My objectives today are really to provide a brief overview of implementation science. I’ll provide you with some definitions, with the rationale for this field and the goals, and then I’m going to talk briefly through key components of implementation research including study design options, implementation strategies, theories and frameworks, and mixed methods evaluations. As you see, we have several things to cover in the time that we have, so this is really oriented towards the overview…just to give you that flavor for what implementation science is about and what you need to do in this field to have the potential for success. There are references at the end and I usually leave a good amount of time at the end to answer your questions.
This is a relatively new field…implementation science…and as with any field that is getting up to speed, there are many definitions available. I’ve put two here on the screen that have stood the test of time during the development of this field; one from two early pioneers in the field…Martin Eccles and Brian Mittman from their journal Implementation Science, and one from Geoff Curran, another pioneer in the field.
The first definition, and these are very compatible…the first definition is that “Implementation science is the study of ways to promote the systematic uptake of research findings and other evidence-based practices into routine practice. This includes the study of influences on healthcare professional and organizational behavior.” Dr. Curran and colleagues’ definition is “An effort specifically designed to get best practice findings and related products into routine and sustained use via appropriate uptake interventions.” They emphasize that implementation science is an active approach focused on stimulating change.
I think the key things to home in on here…at the heart of implementation science is evidence-based practice, but related to that very centrally are the ways in which these evidence-based practices make their way into routine care. At the heart again, of implementation science, is change. You’re interested in changing something. Now, in some cases the change is oriented towards putting something into place that wasn’t there and it should be there, or improving something that is in place but is not being utilized to the extent that it should be, and then there’s been a more recent focus on de-implementation, which is taking things out of practice that should not be there. There are a lot of really interesting efforts going on in this field, but pretty much no matter what the effort is, the emphasis is on change. It might even be that something is in place that your group didn’t put there but it’s sort of a naturally occurring change, or a change that’s perhaps beyond your control, but that you want to study nonetheless because it’s changing the nature of care. For example, in the VA we might think of primary care transformation towards the medical home. That wasn’t necessarily something that many of us on the phone had something to do with directly but nonetheless, it was focused on changing primary care and continues to be, and there are many people around the country studying that change and figuring out how best to implement those changes.
Now, NIH…the National Institutes of Health…is extremely active in implementation research as well. In their R01 dedicated to Dissemination and Implementation Research in Health, they ask for research that will identify, develop and refine effective and efficient methods, systems, infrastructures and strategies to disseminate and implement a variety of things such as evidence-based health behavior change interventions, a number of improvement interventions related to prevention, early detection, treatment etc., as well as clinical guidelines, policies and data monitoring and surveillance tools. These are the types of research that NIH is looking for and similarly, VA requests for proposals have looked for research along those lines. This just helps to show the types of areas in which you might do implementation science, coming directly from one of the key funding mechanisms in this area.
Some of you who have participated in implementation science trainings before may have seen various versions of what we call a pipeline. I didn’t put a pipeline in this presentation. I just wanted to give you a slightly different way of looking at the connections between different types of research in health services. This is from an article by Ross Brownson and colleagues that shows the pipeline in a slightly different way, where you see not only the steps or the often typical progression in health services research toward implementation studies but also the nature of the feedback loops amongst these different types of studies, with efficacy and effectiveness often being necessary after implementation research has been done. As I’ll describe more later, dissemination and implementation studies typically take the type of phased approach that you see in the upper right bar, and you may have studies that fit in only one or two of those boxes or you may have a study that covers the breadth of those steps from exploration to sustainment. More and more there has been a strong interest in sustainment. In other words, not only how do we get evidence-based practices into routine care but how do we actually keep them there. That has been a really lively area of research in the field.
Now, why do we need implementation science? There are extensive articles written about this. Again, I have references at the end, but I thought it might be helpful just to draw again from that R01 description, which is not incompatible with many of the VA efforts in this area. Really, what we’ve seen in terms of the rise of implementation science is a response to the fact that we have an incredibly extensive multi-billion dollar investment in research nationally and internationally, but relatively little spent on how to ensure that the research results actually inform and improve healthcare quality, the delivery of services and the sustainability of evidence-based tools. Nonetheless, we know from what has come to be a pretty vast body of literature and work that the different stakeholders in the health services…providers, patients, families, caregivers etc.…need to have empirically supported strategies in order to integrate scientific knowledge and effective interventions into everyday use. Again, there is this real emphasis on quality and the ways in which research findings can improve the quality of the health services that we’re all involved in trying to promote.
There are several goals in implementation science. These are three goals…this is not an exhaustive list, but these are some of the goals you’ll see a lot in the field, with one of the main goals being to develop reliable strategies for improving health-related processes and outcomes and then to facilitate widespread adoption of those strategies. I’m going to talk a little bit later about implementation strategies so that’s coming up. There is also an interest in producing insights and generalizable knowledge regarding implementation processes, barriers, facilitators and strategies. Really, with barriers and facilitators, this has been an area of tremendous activity to the point where for many healthcare services in the VA, we know an extensive amount about barriers and facilitators and we’ve really moved forward into more advanced approaches in implementation science with the knowledge that we have of barriers and facilitators.
Finally, again because this field is growing and expanding, there is a strong interest and a goal around refining implementation theories and hypotheses, methods and measures. As a science, it’s also growing and one of the goals certainly is to do work that supports the growth of this field.
In terms of key components of implementation research, there are some central things that you need to consider. Of course, as with any health services research there is the issue of study design. There are some complexities to study design in implementation research, which I’ll talk about in a minute…implementation strategies, and we’re going to talk about what they are and what you do with them; conceptual models and theoretical frameworks…why do you need one and what do you do with it; and mixed methods evaluations…what is important to measure or assess and how do you do it? Those are the four components that I’m going to talk about for our time today.
In terms of study design, you’re typically going to start with a question and there’s a typical progression. Of course, things don’t always happen in this way but it’s helpful as sort of a rubric that Dr. Curran and his colleagues put forth in their 2012 article, basically starting with the question of what should be done to address a certain problem. We are pretty problem focused. There is a gap in quality, there is a practice that should be in place that’s not…some of the things I mentioned before where you’re really looking to change something and you need to think through what can we do, what knowledge do we have, what strategies do we have, what expertise do we have to be able to address that problem.
One thing you would look at is what is available in terms of the evidence base. Do we need more evidence before we go further along in our implementation research, or do we have enough evidence but the evidence isn’t being used? Then, in terms of what’s going on with the evidence-based practice, you might be looking at what factors influence its use or potential to be used. That sometimes takes the shape of barriers and facilitators research, but a lot of this is often preparatory to larger scale studies where you really need to define the scope of the problem and then how you’re going to address that problem. Oftentimes, what represents the next step along the pathway is understanding and even potentially testing what needs to be done to facilitate use of the evidence-based practice. Ostensibly, you could stop there, but in implementation science a critical part…as I’ll talk about a little bit later…is evaluation. I would argue (I believe this is the case) that you really can’t have an implementation research project without some degree of evaluation, and there are different gradations of that, but in any case, you need some type of evaluation to know that what you’ve done is effective and to know what next steps to take.
In terms of study designs, all research designs are possible in implementation research. You’ll see randomized controlled trials, comparative effectiveness studies, quasi-experimental studies, pilot studies etc. What might differ for you in doing implementation research is that you’re going to be looking not only at maybe your traditional outcomes but also at what has come to be known in the field as implementation outcomes.
You see here two lists, which are very similar in some ways. Enola Proctor and colleagues wrote a central article about implementation outcomes, listing out this handful of outcomes as some of the key outcomes that are examined in implementation research projects. Many years before this, Russ Glasgow and colleagues put forth the RE-AIM framework, which may be familiar to many of you, which stands for reach, effectiveness, adoption, implementation and maintenance, with the idea that we need to assess these different components of our implementation studies in order to know whether they’ve been effective, so you’re not limiting your study to clinical outcomes but also incorporating these implementation outcomes. There are others besides those listed here, but these are sort of the core implementation outcomes that you can call upon to help think through what you might want to measure or assess in your study.
Another area of difference in implementation research is that your units of analysis might be different. For example, your units of randomization might be much larger, so you’re looking not so much at individuals but at sites or clinics, and that might bring to mind…wow, what would my sample size be if the unit of analysis is the site or the clinic? Then your N might be 10, it might be 20. That brings up a lot of statistical challenges and there is excellent work going on in that area of the field. You might be measuring units that are beyond individual patients, so for example, looking at organizational climate measures, looking at performance of clinics, so the lens that you’re using in your research might be different than what you’re used to in other types of research.
Another thing that I’ll come back to later, but that is kind of different about implementation research, is that there is a lot of back and forth between the units of analysis. You may have different people on your team looking at different aspects, maybe at the same points in time or at different points in time, in order to get a full and thorough picture of what’s happening in implementation.
I am not going to get into this extensively because we have excellent resources available on hybrid study designs, but I do just want to mention one slide about hybrid studies, which you may have heard about or may be familiar with. I would recommend reading Geoff Curran and colleagues’ paper on this topic and also going to the cyberseminar archives to search for hybrid; there are several talks that explain hybrid study designs in detail. Just for one quick minute, I’ll say that hybrid study designs were really, I think…kind of a revolution in implementation research, where a lot of thought was put into this idea that we could blend implementation and effectiveness research…that if we try to do one and then the other, it’s going to take too long to get to outcomes. Of course, one of our goals in implementation research is to really speed up the time from research findings to implementation in usual care.
You have three different types of hybrid study designs…I, II and III…and I personally like thinking through these types by asking a series of questions and seeing where the primary question falls. You’ll see that in a hybrid type I, the primary question is still an effectiveness question; in a type II you have a balance between your effectiveness and implementation research questions; and in a type III you’re veering more towards the implementation side and less towards the effectiveness side. Again, there are such fantastic resources available that I’m not going to take the time to spell this out in today’s cyberseminar.