Adaptive Interventions and Personalized Medicine

with Susan Murphy

November 4, 2014

In our latest podcast, Amanda Applegate interviews Susan Murphy, Methodology Center principal investigator, Herbert E. Robbins Distinguished University Professor of Statistics, research professor at the Institute for Social Research, and professor of psychiatry at the University of Michigan. The discussion focuses on two topics: the sequential, multiple assignment, randomized trial (SMART), which allows scientists to develop adaptive interventions, and the just-in-time, adaptive intervention (JITAI), which uses real-time data to deliver interventions as needed via mobile devices. Susan's MacArthur Fellowship is also discussed; the podcast was recorded before she was elected to the Institute of Medicine of the National Academies.

Podcast Timeline:

00:00 – Introduction
01:01 – MacArthur Foundation "Genius Grant"
03:31 – Two SMARTs in the field (one to develop an adaptive intervention for alcohol abuse and one to develop an adaptive intervention for helping clinics implement an intervention effectively)
12:44 – The just-in-time, adaptive intervention (JITAI)
18:34 – The future of SMART and JITAI

Speaker 1: The Methodology Center Perspective podcast is brought to you by The Methodology Center at Penn State, your source for cutting-edge research methodology in the social, behavioral, and health sciences.

Amanda: Hello and welcome to Methodology Minutes. I'm your host, Amanda Applegate, and our guest today is Susan Murphy. She's a Methodology Center principal investigator, Herbert E. Robbins Distinguished University Professor of Statistics, research professor at the Institute for Social Research, and professor of psychiatry at the University of Michigan.

Susan was named a MacArthur Fellow in 2013 for her work on sequential multiple assignment randomized trial. We've wanted to record a podcast with her since The Methodology Minutes podcast series began. Susan, thank you so much for being here and welcome to Methodology Minutes.

Susan: It's great to be here.

Amanda: Congratulations on your MacArthur award. I'd love to start by hearing a little bit about what that has been like. How did hearing the news impact you and what has changed in the months since you learned about the award?

Susan: The best thing was the confidence that it gave a variety of clinical and behavioral scientists to go forward with these more unusual designs. They're having to leave tradition behind, and it is risky for them. This award gives them a lot of confidence that they can pull it off, that it's going to work out for them. That's the main thing that's happened: my collaborators have more confidence, and many more potential collaborators have been contacting me about these types of designs.

Amanda: Fantastic. The MacArthur Foundation recognized you for your work on the sequential multiple assignment randomized trial, or SMART. Of course, it is very new and can be daunting for new researchers. Can you talk a little bit about this trial design and what inspired you to develop it?

Susan: I was working with researchers in substance use and we now understand that for many people substance use tends to have a very chronic or a waxing and waning course.

Also, many of our interventions will only work for maybe half or 60% of the people, and even if the treatment or intervention does work for the person, often over time it may stop working.

Amanda: Right.

Susan: In those kinds of settings, clinicians and behavioral scientists have to think about what sequence of treatments to provide to this person, because the chance of the first treatment curing that person is not all that high. They have to have a whole plan for how they're going to proceed: How long are they going to try this first treatment? If the treatment doesn't work, what treatment should they try next?

Or if the treatment is working, many of these treatments are burdensome, so often they'll want to think what kind of maintenance therapy might I provide? The big questions that motivate this type of trial are questions concerning sequences of treatments. What treatment do I try first? What treatment do I try second if the response is insufficient? How do I use how the patient is responding to the treatment to make those decisions?

Amanda: Okay, so you're able to customize it to how that person is doing.

Susan: You want to collect data that will allow you to do that customization.
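[Editor's note: concretely, an adaptive intervention built from such data amounts to a set of decision rules keyed to how the patient is doing. Below is a minimal sketch; the treatment names, tailoring variables, and thresholds are purely hypothetical illustrations, not from any actual trial.]

```python
def second_stage_treatment(responding, burden_is_high):
    """Illustrative stage-2 decision rule for an adaptive intervention.

    `responding` and `burden_is_high` stand in for whatever tailoring
    variables the trial data suggest are useful for customization.
    """
    if not responding:
        return "switch to medication B"    # insufficient response: try the next treatment
    if burden_is_high:
        return "step down to maintenance"  # working but burdensome: lighter maintenance therapy
    return "continue medication A"         # working and tolerable: stay the course

# A non-responder is switched; a burdened responder is stepped down.
print(second_stage_treatment(responding=False, burden_is_high=False))
print(second_stage_treatment(responding=True, burden_is_high=True))
```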

Amanda: Okay. In recent years there have been a large number of SMART studies to address a broad array of health problems. Can you give us a few examples?

Susan: Yes, I'd like to talk about two examples in particular. One example is the very first one I was involved with. This study was run by Dave Oslin at UPenn, and he had a lot of guts to do this because he was brand new.

The setting itself is quite interesting. At that time, naltrexone, which is an opioid receptor antagonist, was often provided to people who had alcohol dependency. There are a lot of problems with naltrexone, because people would suffer side effects and stop taking the medication, or the medication wouldn't work and thus they would stop taking it.

He wanted to try to better understand how to build a strategy, an algorithm, around using naltrexone to help people. The questions that arose in trying to build that algorithm were: how long do you try naltrexone before you say, "Look, it's not working. We've got to try something different"? In this case, not working could mean the person didn't take the drug.

The first question, or the first randomization, was how long you try the drug before you change course. Subjects were randomized to a non-response criterion of either five or two heavy drinking days. If someone was randomized to five heavy drinking days, they were allowed a lot more time to try to respond to naltrexone than in the other case.

In either case, as soon as a person met their trigger of heavy drinking days, they would be immediately re-randomized to either a switch in treatment, that is, just give up on naltrexone, or to combining naltrexone with a behavioral therapy designed to improve adherence to naltrexone.

Amanda: Oh, okay.

Susan: One of the reasons why they might not be responding is that they weren't adhering to the drug. The behavioral intervention might help them adhere to the drug. This study is SMART one, the very first that I was involved with from beginning to end. The two big questions were how long to try the initial treatment and what treatment to try next, whether there was response or non-response.

The other study is a study that just got funded within the last month, and it's really exciting. Whereas Oslin's study was for alcohol-dependent subjects, and all the treatments in the study concerned treatments for patients who were alcohol dependent, this other study that was just funded is going into the field. Amy Kilbourne at the University of Michigan is the PI, and Danny Almirall, one of my collaborators, was the primary person involved in the design.
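[Editor's note: the two sequential randomizations in a trial like Oslin's can be sketched in a few lines of code. This is a simulation under stated assumptions, not the actual protocol: the 50/50 randomizations, the two non-response criteria, and the simulated drinking counts are all illustrative placeholders.]

```python
import random

def run_smart_subject(rng):
    """Sketch of one subject's path through a two-stage SMART."""
    # Stage 1: randomize the non-response criterion -- how many heavy
    # drinking days are allowed before naltrexone is judged not to work.
    criterion = rng.choice([2, 5])

    heavy_days = rng.randint(0, 7)  # placeholder for observed drinking
    if heavy_days < criterion:
        return criterion, "responder: continue naltrexone"

    # Stage 2: as soon as the trigger is met, the non-responder is
    # immediately re-randomized between the two rescue options.
    rescue = rng.choice([
        "switch away from naltrexone",
        "add behavioral therapy to improve adherence",
    ])
    return criterion, rescue

rng = random.Random(0)
print(run_smart_subject(rng))
```

Subjects randomized to the five-day criterion meet the trigger less often, which is exactly the "allowed a lot more time" contrast between the two first-stage arms.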

This design has to do with a treatment or an intervention that's for the clinic as opposed to the patient. This intervention is called Replicating Effective Programs. It's an intervention that's provided to the clinic to help the clinic then provide another, evidence-based intervention to patients.

This is called an implementation intervention because the intervention is not on the patients; it's on the clinicians in the clinic. In this particular case, the evidence-based intervention at the patient level that they're trying to ensure is implemented is called Life Goals Collaborative Care. It's a psychosocial intervention for people who have mental illness.

Let's go back to our implementation intervention. That is the intervention that's acting on the clinic level, on the clinicians. There is this effective program called Replicating Effective Programs and it works pretty well, but around 50% of the time a clinic site doesn't respond well to it.

This is because there are a lot of obstacles to changing the provision of treatment to a more evidence-based program. Because you have this amount of heterogeneity, only around half of the sites may respond, or maybe even only 25% of the sites will respond; that is, they'll start implementing the evidence-based program. You have to think: what can I do for this site? What intervention, what other thing besides Replicating Effective Programs, can I provide so that this site will actually implement the evidence-based Life Goals Collaborative Care?

What Amy came up with, developed, are external facilitators and internal facilitators. External facilitators are people who sit at a central location; people at the site can access and talk to the external facilitators and get advice on how to make sure Life Goals Collaborative Care is really implemented.

Amanda: Okay, troubleshooting.

Susan: Yeah, troubleshooting, that sort of thing. This is certainly more expensive than Replicating Effective Programs, the original program, but it's not all that expensive, because there's one person and many sites can access the help of that one person.

That's one option. Another option is called internal facilitators. This is very expensive. This is a person who has protected time at the clinic, whose sole purpose during that protected time is to improve the implementation of the evidence-based practice.

What Amy Kilbourne, the PI of this study, did in working with Danny was come together and decide: what are the big, important questions that would motivate this SMART trial? The big questions were, which treatment to provide to sites that are insufficient responders to standard Replicating Effective Programs? Should you provide the really expensive one or the less expensive one?

Then, if a site continued to show non-response, what should you do? The whole goal of these implementation interventions, that is, these different internal versus external facilitators, is that at the end of the day, people who have mood disorders should see their life improve.

In particular, the primary outcome is a quality-of-life measure for mental health at the patient level. Even though the implementation intervention is acting on the clinic, on the clinicians, the goal is to improve the patient's life.

What's going to happen is 100 clinics are going to start with Replicating Effective Programs. They're going to be followed for six months and then they'll be assessed as to whether or not they've effectively implemented Life Goals Collaborative Care.

If they haven't, and they're expecting around 75 of the 100 programs not to implement it, then each clinic will be randomized to either add an external facilitator or add both the external and the internal facilitator. Very expensive option. Then the clinics will be followed for another six months.

Again, they'll be assessed to see if they're adequately implementing Life Goals Collaborative Care. The clinics that are continuing not to respond sufficiently, that is, not to implement Life Goals Collaborative Care, will then be re-randomized. In particular, it's the clinics that started off with just the external facilitator.

They'll be re-randomized to either continue with the external facilitator, on the thinking that maybe if we just try it for another six months they'll finally be able to get it implemented, or to add the internal facilitator, the most expensive option.

At the end of the day, one would like to end up with a sequence of implementation interventions. A sequence of treatments that is not terribly expensive, so you don't have to at the very beginning start with both external and internal facilitators. You only use these internal facilitators if all else fails.
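[Editor's note: the clinic-level flow described above can be sketched as a small simulation. The response probabilities here are illustrative assumptions; the transcript only says that roughly 75 of the 100 clinics are expected not to implement the program at first.]

```python
import random

def run_clinic_smart(n_clinics=100, seed=0):
    """Sketch of the clinic-level SMART: REP first, then stepped-up facilitation."""
    rng = random.Random(seed)
    assignments = {}
    for clinic in range(n_clinics):
        # Phase 0: six months of standard Replicating Effective Programs (REP).
        if rng.random() < 0.25:  # assumed rate of implementing on REP alone
            assignments[clinic] = ["REP only"]
            continue
        # First randomization: non-responding clinics add an external
        # facilitator, or both external and internal (the expensive option).
        arm = rng.choice(["external", "external+internal"])
        assignments[clinic] = [arm]
        # After another six months, only the external-only clinics that are
        # still not implementing are re-randomized.
        if arm == "external" and rng.random() < 0.5:  # assumed non-response
            assignments[clinic].append(
                rng.choice(["continue external", "add internal"]))
    return assignments

paths = run_clinic_smart()
print(sum(p == ["REP only"] for p in paths.values()), "clinics responded to REP alone")
```

Note that the expensive internal facilitator only ever appears after a cheaper step has failed, which is the "only if all else fails" sequencing Susan describes.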

Amanda: When absolutely needed. Right.

Susan: When they're absolutely needed. That's the idea here is that you're going to have to have a sequence of treatments for the clinics.

Amanda: Right. There's kind of a path that they'll follow.

Susan: Right, that they'll follow. If the clinics are not adhering well then you'll have to change their intervention. You'll have to step up their intervention.

When you contrast these two SMARTs, one is at the patient level, and it's talking about treatments for patients. The other is talking about treatments for clinics, to ensure that the clinics implement best practices. Either way, the end point is always to improve patient health.

Amanda: Wow. That's really interesting.

Susan: In both cases. But in one case you're acting at the clinic level, on the clinicians, and in the other case you're directly acting at the patient level. Very interesting.

Amanda: Wow. That is really interesting. Listeners who want to learn more about SMART can listen to podcast 12 with Susan's former protégé and current colleague, Methodology Center investigator Danny Almirall, or visit the Adaptive Interventions research page on the Methodology Center website.

As excited as the MacArthur Foundation, the National Institutes of Health, and The Methodology Center are about SMART, you've also branched out into an even newer field of research. Can you tell us about your latest work on just-in-time, adaptive interventions, or JITAIs?

Susan: The interventions that are developed with SMARTs are called adaptive interventions, and they're aimed at helping clinicians help you. How do clinicians help you if you're alcohol dependent? How do we get clinicians to help you if you have mood disorders? This is a different thing.

This is about helping you help yourself. The adaptive interventions that we've been talking about are about big decisions: what treatment or medication to try first, how long to try that medication, or what behavioral treatment to try and how long to try it.