Transcript:
------
The broadcast is now starting. All attendees are in listen-only mode.
Hello! Welcome to the presentation entitled, “Good Ideas Are Not Enough: Making Evidence-Based Practices Work for Your Campus.” I'm Dolores Cimini. I'm a licensed psychologist at the University at Albany, and I'm very pleased to provide this presentation to you today. [Next slide]
Today, I have several learning objectives that I'd like to cover with you. First of all, I'd like to define what evidence-based practices are. Secondly, I would like to talk about how evidence-based practices can be implemented on different campuses. Thirdly, I'd like to discuss some of the challenges and barriers that exist when implementing evidence-based practices on different types of campuses. And lastly, I'd like to talk with you about some of the resources that are available to help you select evidence-based practices that might work for your campus. [Next slide]
Where do we begin? Science has done a lot for us over the past decades in helping us understand what works to address drinking and drug use on college campuses. There is now a very strong base of evidence-based practices that we can draw from, so the good news is we don't have to reinvent the wheel. We can look to a variety of sources: journal publications, databases, and the like, to help us move forward. [Next slide]
[3:08] So, what are evidence-based practices? The term evidence-based practice was first developed in the mid-1990s in medicine. Evidence-based practices are defined as practices that have been tested through research, that have been examined with particular populations so that those populations are well understood, and that have been adopted and modified to meet the needs of the particular population. So, if we apply standardized practices to our work that are consistent across time, if we have a good understanding of the population we're working with, and if we can apply those things and test what is actually working, or not working, on our campus, we have an evidence-based practice. [Next slide]
[4:14] What helps us determine evidence-based practices and whether they are working? There are a number of characteristics, and they have a lot to do with research. First of all, we need a control group or a comparison group so that we can compare what we are trying to test with other conditions. A control group is a group that does not receive the intervention we're testing, and a comparison group is a similar type of group. The difference is that a control group is randomized: if you have two students, one is randomly placed into your intervention group and the other is placed into your control group. A comparison group does not involve randomization; instead, we use naturally occurring groups. We also use surveys and assessments before we do the intervention and after, so we do a pre-test or baseline assessment and a post-test assessment, and we compare the two. We also look at behavioral indicators of change and try to answer some specific questions. We look at specific things and how they are changing: drinking frequency and quantity, negative consequences of drinking or drug misuse, and protective behaviors, or things that might reduce risk for college students. [Next slide]
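[Editor's note] To make the pre/post comparison concrete, here is a minimal sketch, not from the presentation, of how a campus team might compare change scores between a randomized intervention group and a control group. The column names, the sample values, and the choice of a simple independent-samples t-test are all illustrative assumptions.

```python
# Minimal sketch: comparing pre/post change between a randomized
# intervention group and a control group. All values are hypothetical.
import pandas as pd
from scipy import stats

df = pd.DataFrame({
    "group":                ["intervention"] * 4 + ["control"] * 4,
    "pre_drinks_per_week":  [14, 18, 10, 16, 15, 17, 11, 13],
    "post_drinks_per_week": [ 9, 12,  8, 11, 14, 16, 12, 13],
})

# Change score: negative values mean drinking went down after the intervention.
df["change"] = df["post_drinks_per_week"] - df["pre_drinks_per_week"]

intervention = df.loc[df["group"] == "intervention", "change"]
control      = df.loc[df["group"] == "control", "change"]

# Independent-samples t-test on the change scores.
t_stat, p_value = stats.ttest_ind(intervention, control)
print(f"Mean change (intervention): {intervention.mean():.2f}")
print(f"Mean change (control):      {control.mean():.2f}")
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```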
[6:07] There are some possible barriers to implementation of evidence-based practices. Barriers can exist in dissemination, that is, when we get information out about our evidence-based practices to other campuses, other schools, other departments. We may also have barriers in terms of adoption; we may have trouble getting resources to adopt our strategies. Then there is implementation, that is, implementing evidence-based practices the same way they were implemented in the laboratory or in the original setting in which they were tested. And lastly, maintenance: we may do well with implementing an evidence-based practice, and then over time we may have some difficulties continuing to implement it in the same way, and that may affect the outcomes of our evidence-based practice. [Next slide]
[7:05] So, let's talk about barriers a bit. To avoid experiencing barriers, we want to engage in proper training of the staff who are doing the intervention. We want to train them all together, if possible, we want to train them in the same way, and we want to train them using clear protocols. We sometimes have a tendency to change interventions as we go along, to reinvent the wheel, or perhaps to become more dramatic in the interventions we do, because if we're more dramatic we might find some better results; but it's very important to stick to intervention protocols. Straying from them is called preventionist drift: people drift from the initial period of time when they learned an intervention and practiced it. So, we need to engage in ongoing training and ongoing supervision of interventions in order to keep them consistent and true to the tested laboratory setting. [Next slide]
[8:28] Keep in mind that we can learn as much from interventions that don't work as we can from those that do. The key is to follow the methodology that we propose; that is, to make sure that we're doing the intervention the way it's defined as being done, and that we test what we're doing through a survey and other assessment measures.
There are several considerations to think about when administering surveys. First of all, we want to select measures that most accurately assess the variables we're interested in. If we're interested in examining the frequency and quantity of alcohol use, for example, we want to select a measure, such as the Daily Drinking Questionnaire, that actually looks at the frequency and quantity we want to measure. Secondly, we want to be aware of the time frame of survey administration so that we're not inadvertently assessing drinking just after high-risk periods such as Halloween, Thanksgiving, New Year's, Spring Break, and graduation. If we were doing a campus-wide survey in October around Halloween, we might actually see higher rates of alcohol use than if we administered the survey in early February. Likewise, if we ask a question about drinking in the last 30 days and we're asking five days after Spring Break on our campus, we're going to get higher rates of drinking because of Spring Break, and that may actually mask the results of our intervention. So we want to be sure that the timing is appropriate. Next, we want to consider a range of variables to measure, not just one thing but many things. And finally, we want to look at the mode of administration. Perhaps on your campus it might be easier to do a web-based survey, and you might get a high response rate there. Many campuses do have challenges with web-based surveys, however, and in that case you may want to consider using paper-and-pencil surveys; focus groups are another way of measuring change. Those are just some survey administration considerations. [Next slide]
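[Editor's note] As one illustration of the timing point above, here is a minimal sketch, with hypothetical campus dates, that flags when a survey's 30-day recall window would overlap a high-drinking period such as Spring Break. The specific dates and the helper name are assumptions for illustration only.

```python
# Minimal sketch: flag a survey date whose 30-day recall window overlaps
# a known high-drinking period. Dates are hypothetical; adjust per campus.
from datetime import date, timedelta

HIGH_RISK_DATES = {
    "Halloween":    date(2024, 10, 31),
    "Thanksgiving": date(2024, 11, 28),
    "New Year's":   date(2025, 1, 1),
    "Spring Break": date(2025, 3, 14),
    "Graduation":   date(2025, 5, 17),
}

def recall_window_conflicts(survey_date: date, recall_days: int = 30) -> list[str]:
    """Return the high-risk events that fall inside the survey's recall window."""
    window_start = survey_date - timedelta(days=recall_days)
    return [name for name, day in HIGH_RISK_DATES.items()
            if window_start <= day <= survey_date]

# Example: a survey given five days after Spring Break still "sees" it.
print(recall_window_conflicts(date(2025, 3, 19)))  # ['Spring Break']
```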
[12:41] Now, some data collection considerations. It's important when collecting data to keep periods of data collection constant. If we are collecting data in February of one year, we don't want to collect follow-up data in October of the following year. That relates to the topic I discussed on the last slide: there are peaks and valleys in the rates of drinking and drug use across different campuses. Selecting an interval that works for your campus, in advance, will really help you assess change in the most accurate way. [Next slide]
[13:45] Here's a little cartoon. It reminds me that data can sometimes feel intimidating to some folks, but in fact data can be our best friend. [Next slide]
[14:09] When we conduct data analysis, there are a number of things we can do to help make our findings more accurate. First, look for outliers. We typically have a data set that we work with, and outliers are values that are really above and beyond what we would expect from a college student. So, if we ask a college student on a survey, “How many drinks did you consume in the past week?” and a respondent says 100 drinks, that's really infeasible. That's an outlier. What statisticians do is remove that case from the data set so that the data set is not distorted. Second, think critically about your findings. Think about different scenarios that might be occurring. Think of the context. Think of your data, and consider your data with a healthy skepticism. Analyze your data in a way that takes in the campus environment you work in and is sensitive to changes in that environment. Lastly, beware of making categorical statements. I'll describe that a bit on the next slide. [Next slide]
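[Editor's note] Here is a minimal sketch of the outlier screening just described; the 100-drinks response from the example is removed before analysis. The plausibility cap of 60 is an illustrative assumption, not a figure from the talk.

```python
# Minimal sketch: drop implausible responses before analysis.
# The 100-drinks-per-week response would be removed here.
import pandas as pd

responses = pd.DataFrame({
    "student_id":       [1, 2, 3, 4, 5],
    "drinks_past_week": [4, 0, 12, 100, 7],   # 100 is infeasible
})

# A simple plausibility cap; a team could instead use a rule such as
# removing values more than 3 standard deviations above the mean.
MAX_PLAUSIBLE = 60
cleaned = responses[responses["drinks_past_week"] <= MAX_PLAUSIBLE]

print(f"Removed {len(responses) - len(cleaned)} outlier(s)")
print(cleaned)
```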
[15:45] Let's take an example of the drawbacks of making categorical statements. Consider what you see on this slide: 85% of students drink zero, one, two, or three drinks, and 15% drink more than that. Take a student in that top 15%, for example, somebody who drinks 20 drinks a week. If they reduce their drinking from 20 drinks a week to 15 drinks a week, they're still in that top 15% and still considered high risk. However, they have made a significant, noteworthy reduction in their drinking, and they have enhanced their health. What we need to do is not categorize solely by high and low risk but look within the categories and see if there are reductions within the high-risk group and within the low-risk group. That refines our data analysis and helps us understand that even though, at follow-up, some students are still engaging in high-risk behaviors, they did reduce over time, and that's progress. [Next slide]
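[Editor's note] To illustrate looking within categories rather than relying on the high/low label alone, here is a minimal sketch with made-up numbers. It computes the mean change within each risk group, which captures the 20-to-15 reduction that the categorical view misses.

```python
# Minimal sketch: a categorical "top 15% = high risk" label hides reductions;
# comparing within-group means does not. All numbers are hypothetical.
import pandas as pd

df = pd.DataFrame({
    "risk_group": ["high", "high", "high", "low", "low"],
    "pre":        [20, 24, 18, 3, 5],
    "post":       [15, 20, 16, 3, 4],
})

# These students stay "high risk" categorically, yet their drinking fell.
by_group = df.groupby("risk_group")[["pre", "post"]].mean()
by_group["mean_change"] = by_group["post"] - by_group["pre"]
print(by_group)
```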
[17:23] Before declaring success or failure, remember that anything we do, any one thing we do, is part of a bigger puzzle. It is part of a more comprehensive set of interventions and programs that we deliver at our institutions of higher education. We need to examine moderators. Those are characteristics within our students, such as gender, sex, age, fraternity and sorority membership, and class year, that may make interventions work less well or better than other interventions. We also need to look at what we call mediators, that is, things in the environment that may help an intervention work better or may be associated with an intervention working less well. For example, if we were doing an evidence-based intervention and we experienced an alcohol-related death during the time of the intervention, that may influence the results we see. We always have to look at things in context and continue to collect our data. Finally, we want to pay attention at all times to implementing our interventions with fidelity: implement them the way they were originally designed to be implemented when they were tested in the laboratory or in the original setting. [Next slide]
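[Editor's note] As an example of examining a moderator, here is a minimal sketch that fits a regression with an interaction term; a reliable treatment-by-membership interaction would suggest the intervention works differently for fraternity/sorority members. The variable names, the data, and the use of statsmodels are my illustrative assumptions, not tools named in the presentation.

```python
# Minimal sketch: testing a moderator (fraternity/sorority membership)
# via an interaction term in OLS regression. Data are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "post_drinks": [9, 12, 8, 11, 14, 16, 12, 13, 10, 15],
    "treated":     [1, 1, 1, 1, 1, 0, 0, 0, 0, 0],   # 1 = got intervention
    "greek":       [0, 1, 0, 1, 0, 1, 0, 1, 0, 1],   # 1 = Greek member
})

# "treated * greek" expands to treated + greek + treated:greek;
# the treated:greek coefficient is the moderation effect of interest.
model = smf.ols("post_drinks ~ treated * greek", data=df).fit()
print(model.summary())
```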
[19:07] When evidence-based practices work or don't work, where do we look? How do we understand that? Well, there are several things we might want to look at when we are examining why things are working and why they are not: collaboration, networking, program promotion, building intervention capacity, evaluating our interventions, establishing diversified funding and resource bases, and engaging from the very beginning in the sustainability of interventions. We'll talk for a few minutes about each of these.
When implementing evidence-based practices, collaboration is key. We need partners to deliver very strong evidence-based practices. It takes a village to implement an evidence-based practice; that's not really a cliché, it's actually a fact. No one person can deliver an evidence-based practice at the scale a college campus needs. We want partners who are interested in the same things we are and who are willing to put time, resources, and personal strengths into the delivery of the intervention. Resources don't necessarily mean money; they may mean somebody with particular strengths, such as interacting with students or getting students to an intervention that's needed. In addition, we need to strategically position collaborators so that they are exhibiting their strengths when engaging in the intervention. The other question we want to ask is whether we have engaged our partners in diversified roles. Again, this plays to people's strengths. Have we really maximized our collaborators and engaged them fully, so that they feel they are contributing and that they are really a part of the intervention? That's very critical. [Next slide]
[22:22] Networking and program promotion are also very important. Everyone who is involved in any aspect of an evidence-based practice is engaged in that practice, whether developing it, implementing it, evaluating it, referring students to participate in the intervention, and the like. It's important to keep these stakeholders engaged and excited about what we do through networking: letting people know how the intervention is going, keeping them updated with periodic data reports, and keeping them engaged and informed about the evidence-based intervention. It goes without saying, but a thank-you can go a very long way with our partners. When we promote our programs and share our successes with our partners, with our campus, and with our media offices (which is something I would highly recommend), it is critical to acknowledge all of the partners who have made the evidence-based practice and its implementation and evaluation a reality. [Next slide]
[23:52] Building intervention capacity is central to evidence-based practices from the start. Many evidence-based practices either succeed or fail based on whether or not they have sufficient resources to be implemented. What I would recommend is that you look at the questions on this slide and determine what is necessary to build capacity to develop, deliver, and evaluate your evidence-based intervention. Is it dollars? Is it staffing? Is it the knowledge of a researcher? Is it getting students involved in referring other students to participate in the intervention? All of these things really help strengthen capacity for an evidence-based intervention. [Next slide]
[25:10] Evaluation of interventions is really the linchpin and the central component of evidence-based practices, because evidence-based practices are built on facts; they rest on the fact that the interventions we are testing and delivering actually work. We need to be sure that we have the capacity to evaluate our interventions. We need to be sure that we have someone on our team who can work with statistics, who understands methodology, and who can let us know through data reports whether our intervention is working before we get too far into it and use too many resources for something that may not work. We need to look at who our partners are. One thing that has worked really well for many campuses is to develop partnerships with faculty members and academic departments on campus. Many faculty members are very interested in alcohol and other drug misuse research and in getting access to the student populations that we work with on a daily basis. It's really a win-win situation: we get the expertise of researchers in data analysis and evaluation, and the faculty members benefit by getting the opportunity to collect data, to access our population, and to publish (which is very often something they're required to do to keep their positions as faculty members). Those are just some suggestions, and if you look at the questions on this slide, they will also help you navigate the area of evaluation capacity for your evidence-based intervention. [Next slide]