Quantitative Research Design

I.  Characteristics

II.  Purpose

III.  Concept of controlling variance

IV.  Procedures for controlling variance

A.  Randomization

B.  Elimination or holding “constant”

C.  Inclusion

D.  Statistical control

V.  Characteristics of good research design

A.  Freedom from bias

B.  Freedom from confounding

C.  Control of extraneous variables

D.  Statistical precision

Experimental Research

I.  Characteristics

II.  Criteria for well-designed experiments

A.  Adequate control

B.  Lack of artificiality

C.  Basis for comparison

D.  Adequate information from data

E.  Uncontaminated data

F.  No confounding variables

G.  Representativeness

H.  Parsimony

III.  Experimental validity

IV.  Experimental designs

A.  Posttest-Only Control Group

B.  Pretest-Posttest Control Group

C.  Solomon Four-Group

D.  Factorial

1) factors

2) interaction effects

E.  Repeated Measures

F.  Extended Time Designs


Quantitative Research Design

Characteristics

Strong association with positivism, i.e., a positive evaluation of science and the scientific method. Sequenced, structured, prescriptive, with outcomes expressed as numbers. Comparison and partitioning of those numbers is the basis of interpretation.

Purpose

To provide answers to research questions and to control variance!!! That is, to explain variance, or differences in the distribution of scores or measurements.

Concept of controlling variance

Controlling variance is a main focus of quantitative research design.

Controlling variance means explaining and accounting for the variance in the variables studied. Variance is partitioned according to the variables in the study.

(The statistical tests built on this partitioning are referred to as ANOVA, or analysis of variance.)

EXAMPLE: circle diagram showing the sources of variance
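
A minimal sketch of the partitioning idea in Python (the scores are hypothetical and numpy is assumed):

import numpy as np

# Hypothetical posttest scores for three treatment groups
groups = [np.array([72.0, 75, 78, 80]),
          np.array([65.0, 68, 70, 73]),
          np.array([80.0, 83, 85, 88])]

all_scores = np.concatenate(groups)
grand_mean = all_scores.mean()

# Between-group variance: group means deviating from the grand mean
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)

# Within-group ("error") variance: scores deviating from their own group mean
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)

# The total variance splits exactly into the two sources
ss_total = ((all_scores - grand_mean) ** 2).sum()
print(ss_between + ss_within, ss_total)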

Procedures used to control variance

Randomization- attempting to “equate” groups prior to treatment by distributing initial differences equally across groups. The variance that exists within the groups to begin with is the “within-group” variance, also called “error” variance, meaning it is unaccounted for. Randomization can control variance due to confounding and organismic variables. Variance produced by the treatments is referred to as the “between-group” variance.
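
A minimal sketch of random assignment (participant IDs are hypothetical; only the standard library is used):

import random

participants = list(range(40))       # hypothetical participant IDs
random.shuffle(participants)         # chance alone orders the list

treatment_group = participants[:20]  # first half assigned to treatment
control_group = participants[20:]    # second half assigned to control

# Over repeated assignments like this, pre-existing (organismic) differences
# are expected to be spread equally across the groups.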

Elimination- identify and isolate extraneous variables that may confound effects. Elimination of a variable is accomplished by converting it to a “constant,” or holding it “constant”- holding conditions equal or the same with respect to a given variable in a given study (all males, all LD, all 4th graders, etc.).

Inclusion- building conditions or factors into the design as independent variables. The variable is included in the design so that its potential effects on the dependent variable can be studied.

EXAMPLE FROM PEDHAZUR p. 214 (exclusion vs. inclusion of sex as an extraneous variable). Controlling through elimination or inclusion has implications: it affects the external validity of the findings.

Classroom teacher example: chemistry scores by method, with ability included as an additional independent variable.

Statistical control- adjusting the dependent variable scores to remove the effect of the control variable. When planning for statistical control, look for a variable that is correlated with the dependent variable. We are then able to (statistically) examine differences in scores independent of the control variable. Obtain the scores on the control variable prior to the treatment, because the treatment may affect the control variable. (The control variable is often referred to as the “covariate”; the statistical test is called ANCOVA, or analysis of covariance.)
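
A sketch of statistical control, assuming hypothetical data and the pandas and statsmodels packages (column names are made up for illustration):

import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Hypothetical data: posttest score, treatment group, and a pre-treatment
# covariate (e.g., an aptitude measure) correlated with the posttest.
df = pd.DataFrame({
    "score":    [70, 74, 79, 83, 66, 69, 75, 77],
    "group":    ["T", "T", "T", "T", "C", "C", "C", "C"],
    "aptitude": [48, 52, 60, 65, 47, 50, 58, 61],
})

# ANCOVA: the covariate enters the model, so the group difference is tested
# on scores adjusted for aptitude.
model = smf.ols("score ~ aptitude + C(group)", data=df).fit()
print(anova_lm(model, typ=2))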

Remember!!! The basic idea is one of comparing sources of variance, not just measuring the total amount of variance in the scores.

Characteristics of Good Research Design

Freedom from bias- no systematic variation at work; variation is due only to random fluctuation, so differences can be attributed to the independent variables. Randomization helps control bias.

Freedom from confounding and

Control of extraneous variables- controlled by elimination or inclusion of the variable. When confounding is present, the effects of the variables cannot be separated.

Statistical precision- the design can increase the power to detect treatment effects by reducing the amount of random or error variance. (Remember, we are comparing sources of variance, not the total amount of variance.)

So if we can account for more of the random (error) variance by attributing it to identified variables other than the treatment, the treatment effect is compared against a smaller error term, and a larger share of the remaining variance can be attributed to our treatment variable.
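
A small sketch of that precision argument (scores are hypothetical; numpy is assumed). The error term is whatever variance is left unexplained, so attributing part of it to a known grouping variable, here ability level, leaves less noise for the treatment effect to compete with:

import numpy as np

# Hypothetical posttest scores from one treatment group:
# half high-ability students, half low-ability students.
high = np.array([88.0, 90, 91, 93])
low = np.array([70.0, 72, 73, 75])
scores = np.concatenate([high, low])

# Error variance if ability is ignored: all deviation from one overall mean.
err_ignoring_ability = ((scores - scores.mean()) ** 2).sum()

# Error variance if ability is included: deviation from each ability-group mean.
err_with_ability = ((high - high.mean()) ** 2).sum() + ((low - low.mean()) ** 2).sum()

print(err_ignoring_ability, err_with_ability)  # the second is much smaller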

Experimental Research

Characteristics

Research in which at least one independent variable is deliberately manipulated by the researcher. It may or may not include a control group, but it will always include random assignment to treatment groups or random selection.

Experimental variable- the independent variable that is manipulated by the researcher. It has multiple levels of experimental treatment (levels of the experimental variable).

Participants- those being tested or treated in the experiment.

Criteria for a Well-designed Experiment- the general guidelines for good quantitative design apply, but also:

Adequate control- aids in interpretation.

Lack of artificiality- conducted such that the findings can be applied to the real world.

Basis for comparison- provided by a control group, multiple levels of treatment, or some external criterion.

Adequate information from the data- the data must be appropriate for the statistical test used.

Uncontaminated data- good procedures help ensure this.

No confounding variables.

Representativeness- helps build the case for generalizability.

Parsimony- simpler is better if it answers the questions you are interested in.

Experimental validity

Internal- how accurately the data can be interpreted.

Threats- see notes from previous discussion on validity.

External- how generalizable the findings are.

Threats- interaction effects and others:

a) Testing and treatment- the pretest primes participants for the posttest.

b) Selection biases and experimental treatment- findings with one group may not generalize to another.

c) Reactive effects- differences due simply to participating (the “Hawthorne effect”).

d) Multiple-treatment effects- carryover or residual effects.

Experimental Designs

Symbols:

R = Randomization

X = Treatment

M = Method

G = Group

O = Observation (some sort of data collected)

S = Subject

N = Total number of participants

n = Number of participants in a treatment group

Posttest-Only Control Group

R G1 X O1

R G2 -- O2
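
A sketch of how data from this design might be analyzed (posttest scores are hypothetical; scipy is assumed). With random assignment, a two-sample t test comparing O1 and O2 is a common choice:

from scipy import stats

o1_treatment = [82, 85, 88, 79, 91, 84]  # G1, received X
o2_control = [75, 78, 80, 74, 83, 77]    # G2, no treatment

result = stats.ttest_ind(o1_treatment, o2_control)
print(result.statistic, result.pvalue)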

Pretest-Posttest Control Group

R G1 O1 X O3

R G2 O2 -- O4

Solomon Four-Group Design

Combination of posttest-only control group and pretest-posttest control group designs.

What advantages does this design have over either of the other two above?

R G1 O1 X O2

R G2 O3 -- O4

R G3 -- X O5

R G4 -- -- O6
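
One way to see the advantage: because the design crosses pretesting (G1, G2) with no pretesting (G3, G4), the posttests let the researcher check whether taking the pretest itself changed how participants responded to the treatment, something neither of the two simpler designs can reveal. A sketch with hypothetical posttest means (numpy assumed):

import numpy as np

o2 = np.mean([84, 86, 88, 83])  # G1: pretested, treated
o4 = np.mean([76, 78, 80, 77])  # G2: pretested, untreated
o5 = np.mean([85, 87, 83, 86])  # G3: not pretested, treated
o6 = np.mean([74, 77, 79, 75])  # G4: not pretested, untreated

effect_with_pretest = o2 - o4
effect_without_pretest = o5 - o6

# If these two effects differ noticeably, the pretest is interacting with
# the treatment (a testing threat the simpler designs cannot detect).
print(effect_with_pretest, effect_without_pretest)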

Factorial Designs- a minimum of two independent variables (called factors), each with at least two levels. Very common in educational research.

Example layout- rows are the organismic (grouping) independent variable, columns are the experimental (manipulated) independent variable (method or treatment), and the cells contain scores on the dependent variable (achievement, etc.):

Gender            X1 or M1                  X2 or M2                  Posttest
Female (n = 40)   20 female participants    20 female participants    O1
Male (n = 40)     20 male participants      20 male participants      O2
N = 80            O3                        O4

Interaction- an effect on the dependent variable such that the effect of one independent variable differs significantly across the levels of another independent variable.
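
A sketch of how the gender-by-method table above might be analyzed, with made-up scores; pandas and statsmodels are assumed:

import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Hypothetical 2 (gender) x 2 (method) factorial, three scores per cell.
df = pd.DataFrame({
    "score":  [78, 82, 80, 70, 73, 71, 68, 71, 69, 81, 84, 83],
    "gender": ["F"] * 6 + ["M"] * 6,
    "method": ["M1"] * 3 + ["M2"] * 3 + ["M1"] * 3 + ["M2"] * 3,
})

# The C(gender):C(method) row of the output tests the interaction: whether
# the effect of method differs for females and males.
model = smf.ols("score ~ C(gender) * C(method)", data=df).fit()
print(anova_lm(model, typ=2))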

Repeated Measures Designs- all participants receive all treatments, and all participants are measured repeatedly on the dependent variable. May also include a pretest.

S1 X1 O1 ------X2 O2 ------X3 O3

S2 X1 O1 ------X2 O2 ------X3 O3

. . . .

. . . .

. . . .

Sn X1 O1 ------X2 O2------X3 O3
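
A sketch of a repeated-measures analysis, assuming hypothetical data in long format and statsmodels' AnovaRM:

import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Hypothetical: 5 subjects, each measured under all three treatments.
df = pd.DataFrame({
    "subject":   [s for s in range(1, 6) for _ in range(3)],
    "treatment": ["X1", "X2", "X3"] * 5,
    "score":     [10, 14, 18, 11, 13, 17, 9, 15, 19, 12, 14, 20, 10, 16, 18],
})

# Each subject serves as their own control, so subject-to-subject
# differences are removed from the error term.
result = AnovaRM(df, depvar="score", subject="subject", within=["treatment"]).fit()
print(result)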

Extended Time Designs (Repeated Posttest Control Group design)- can also include a pretest.

R G1 X1 O1---O2---O3

R G2 -- O4---O5---O6