TEAC 800
Quantitative Research: Experimental & Quasi-Experimental Overview
Begin with a question and/or a problem of practice.
Create a hypothesis about this problem/question.
Based on your hypothesis, choose a method of educational research.
If your hypothesis includes an independent variable (something you can change/manipulate) and a dependent variable (something you hope will change as a result of the change/manipulation you make), then you should lean toward experimental or quasi-experimental research.
If you are trying to establish a cause-and-effect relationship, you need to set up an experiment.
An experiment has three features:
1. You set up an experimental condition (the independent variable) contrasted either with a second condition or a control group.
2. You use random assignment of subjects (not necessarily individuals, but possibly teachers, classes, or schools) to the experimental conditions (see the sketch after this list).
3. You measure the changes accurately and with precision.
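Here is a minimal sketch of what the random assignment in feature 2 might look like in practice, assuming a hypothetical pool of ten class sections split evenly between an experimental and a control condition (the section names and the even split are made up for illustration):

```python
import random

# Hypothetical pool of class sections to be assigned to conditions.
sections = [f"Section {i}" for i in range(1, 11)]

random.seed(42)           # fixed seed only so the example is reproducible
random.shuffle(sections)  # a random ordering removes any systematic pattern

experimental_group = sections[:5]  # first half receives the new intervention
control_group = sections[5:]       # second half continues with usual instruction

print("Experimental:", experimental_group)
print("Control:     ", control_group)
```

The same idea applies if the units being assigned are individual students, teachers, or whole schools.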
A quasi-experiment has two main features:
1. You set up an experimental condition (the independent variable) contrasted either with a second condition or a control group.
2. You measure the changes accurately.
A major difference between experimental and quasi-experimental research is that quasi-experiments do not use random assignment of subjects to conditions. Researchers conduct quasi-experiments for a variety of reasons, often because it is impossible or unethical to randomly assign subjects to the experimental conditions.
If you want to establish cause, there are three criteria to meet:
1. Temporal Antecedence: your independent variable must clearly happen before the dependent variable changes. That is, the change you measure has to occur after your intervention, not before it.
2. Correlation: the independent and dependent variables must be related.**
3. Lack of plausible alternate explanations: you must make the case that there is nothing else besides your independent variable that could be causing the dependent variable to change. In education, this is a very difficult criterion to establish.
**Be very cautious and always remember that correlation does not automatically mean causation. Two things can be correlated, but one may not necessarily be causing the other. This is where the idea of a “third variable” comes in: some third thing related to both of the other variables that may be the true link between them.
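As a rough illustration of the third-variable idea, here is a small Python sketch with made-up numbers (family income, books at home, and test scores are hypothetical labels, not real data): income drives both of the other variables, and they end up correlated even though neither causes the other.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Hypothetical third variable: family income (in thousands of dollars).
income = rng.normal(50, 10, n)

# Both of these are driven by income; neither causes the other.
books_at_home = 2.0 * income + rng.normal(0, 5, n)
test_score = 0.5 * income + rng.normal(0, 5, n)

# The two downstream variables are correlated anyway.
r = np.corrcoef(books_at_home, test_score)[0, 1]
print(f"Correlation between books at home and test score: {r:.2f}")
```

In this fabricated data, adding books to a home would do nothing to test scores; the correlation exists only because both variables are tied to income.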
In quantitative research, there are three important considerations. You need to set up the conditions of your study to maximize:
1. Internal validity: you establish internal validity by setting up your experiment (or quasi-experiment) in such a way that your independent variable is the only possible explanation for the change you measure in your dependent variable (see criterion 3 above, lack of plausible alternate explanations). The biggest threat to internal validity is confounding variables: variables that are tangled up with your independent variable and also affect your dependent variable. For instance, you can't separate student ability from instructional methods; the kids who are smarter may benefit more or less than the kids who struggle when you test out a new instructional method. The solution to confounding variables is random assignment. This works with "large n" studies: when you have lots of subjects, randomly assigning them to the experimental conditions should equalize the natural variation in people (ability, SES, gender, ethnicity, etc.). Random assignment does not necessarily help with small numbers of subjects, because even randomly assigning 10 people to two groups may not produce two equivalent groups (a sketch after this list illustrates this). The minimum number for "large" depends in part on what you are measuring; typically, many statistical measures are designed for n > 30. Many researchers are critical of pre/post test experiments because too many confounding variables can enter your study in the time that elapses between pre and post.
2. Precision: you need to be able to accurately measure even small changes in your dependent variable. Statistical significance is a technical term that refers to the degree of certainty that your results are not just what would be expected from normal variation among people. The larger your group of subjects, the more likely it is that even small differences will meet the criteria for statistical significance (also illustrated in a sketch after this list). The question you then have to ask is: how important are these differences? With precision, you sometimes run into the "ceiling effect". For instance, if you are using an achievement test to measure student achievement after some type of instructional intervention, students who were already at the 95th percentile may only move up to the 97th percentile. Students who are already near the maximum score on a test limit your precision when you try to measure their gains. Researchers address this by choosing measures that the subjects have not already maxed out.
3. Generalizability: there are several dimensions to consider:
- Generalized population: can your study apply to other groups of people? You can help your study generalize in this way if you use random selection, rather than volunteers or convenience groups. Note that random selection is different from random assignment. Random assignment is about assigning your subjects to different experimental conditions; random selection is about how you get your subjects in the first place. Getting teachers to volunteer to pilot a new textbook is not random selection; neither is researching the teacher next door because that is the most convenient location. Obviously, random selection can be difficult to achieve: will the people you randomly select agree to cooperate with you?
- Generalized environment: can your study apply to groups in different environments? A way to solve this is to conduct your experiment multiple times in multiple environments. Again, this is not easy or quick, but can be necessary to establish that what you find applies to other situations.
- Generalized outcomes: were the changes you measured just a result of the type of test you used? You can address this by measuring your dependent variable in different ways and/or at different times.
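Below is a minimal sketch of the random assignment point made under internal validity (item 1 above). The ability scores are simulated with made-up numbers; the only point is that random assignment balances groups far better when n is large.

```python
import random
import statistics

random.seed(1)

def average_ability_gap(n_subjects, trials=1000):
    """Repeatedly split n_subjects (with simulated ability scores) into two
    random groups and return the average gap between the group means."""
    gaps = []
    for _ in range(trials):
        ability = [random.gauss(100, 15) for _ in range(n_subjects)]
        half = n_subjects // 2
        group_a, group_b = ability[:half], ability[half:]
        gaps.append(abs(statistics.mean(group_a) - statistics.mean(group_b)))
    return statistics.mean(gaps)

# With 10 subjects, the two randomly assigned groups often differ noticeably
# in average ability; with 500 subjects, the typical gap is much smaller.
print("Typical ability gap, n = 10: ", round(average_ability_gap(10), 1))
print("Typical ability gap, n = 500:", round(average_ability_gap(500), 1))
```

Chance alone evens out the natural variation in people only when there are enough of them, which is why "large n" matters.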
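Below is a minimal sketch of the statistical significance point made under precision (item 2 above). It assumes NumPy and SciPy are available; the two-point "true" difference and the SD-15 scale are made-up numbers.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

def pilot_p_value(n_per_group):
    """Simulate a tiny true effect (2 points on an SD-15 scale) and
    return the p-value from a two-sample t test."""
    control = rng.normal(100, 15, n_per_group)
    treated = rng.normal(102, 15, n_per_group)  # true difference of 2 points
    return stats.ttest_ind(treated, control).pvalue

# The same small true difference is easy to miss with 20 subjects per group,
# but will almost certainly reach statistical significance with 5,000 per group.
print(f"p-value, n = 20 per group:   {pilot_p_value(20):.3f}")
print(f"p-value, n = 5000 per group: {pilot_p_value(5000):.2e}")
```

Statistical significance here only says the two-point gap is unlikely to be chance; whether a two-point gain matters educationally is a separate judgment, which is the "how important are these differences?" question above.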
When not to use experiments or quasi-experiments:
· If you are not interested in proving cause.
· If you have more than a few variables: you can't determine cause when many independent variables are all having an effect on one dependent variable.
· Sometimes, in a longitudinal study, too many confounding variables creep in over time for you to rule out plausible alternate explanations.
· It is not always possible to set up experimental conditions to test a hypothesis.
· (experiments only) It is not always possible to randomly assign subjects.
· (quasi-experiments only) If you cannot establish a baseline set of data when conducting a time-series experiment, especially with nonstationary data.