Outline

Introduction to the t-distribution

A. What have we done so far?

B. t-test for one sample mean

C. Sampling distribution of t

D. Step-by-step example of t-test

E. Additional considerations

F. t-test for two independent sample means

G. Step-by-Step example of 2 sample t-test

Introduction to the t-distribution

A. What we have done so far:

  • Compared one sample mean to a known population μ
  • Assumed σ² (and σ) were known exactly
  • In reality, σ² is rarely known!
  • Must use s², our unbiased estimate of σ²
  • Can’t use the z-statistic; need to use the t-statistic

Research Problem:

Does time pass more quickly when we’re having fun?

Perform a pleasant task for 12 minutes.

Judge how much time has passed.

Will judgments differ significantly from 12 minutes, on average?
B. t-test for One Sample Mean

t = (M − μ) / sM

where: sM = s / √n  or  sM = √(s² / n)

sM is the replacement for σM (the true standard error)

  • t is a substitute for z whenever σ is unknown
  • s² is the estimate for σ² in the formula for the standard error
  • s serves as the estimate of σ
  • df = n − 1
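A minimal sketch of this formula in Python (NumPy assumed available; the scores and the function name one_sample_t are hypothetical, for illustration only):

```python
import numpy as np

def one_sample_t(scores, mu):
    """t = (M - mu) / sM, with sM = s / sqrt(n) and s the unbiased SD (df = n - 1)."""
    x = np.asarray(scores, dtype=float)
    n = len(x)
    M = x.mean()
    s = x.std(ddof=1)                 # s, the unbiased estimate of sigma
    sM = s / np.sqrt(n)               # estimated standard error of the mean
    return (M - mu) / sM, n - 1       # observed t and its df

# Hypothetical scores tested against a hypothesized mu of 12:
t_obs, df = one_sample_t([10, 11, 9, 12, 10.5, 11.5, 10, 9.5, 12, 10.5], 12)
print(round(t_obs, 2), df)
```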
C. Sampling Distribution of t

  • a family of distributions, one for every sample size (n)
  • Not actually n, but degrees of freedom

df = n − 1

larger df → t gets closer to the normal curve

larger df → s² is a better estimate of σ²

  • A sampling distribution of the t statistic
  • Based on all possible random samples of size n
  • symmetric, unimodal, bell-shaped
  • μ = 0
  • Major difference between t and z:

tails of t are plumper

the center is flatter
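A quick way to see the plumper tails, using SciPy (assumed available; an illustrative check, not part of the lecture slides):

```python
from scipy.stats import norm, t

# Two-tailed probability of landing beyond +/-2 under each distribution:
for df in (3, 30, 120):
    print("t, df =", df, round(2 * t.sf(2, df), 4))
print("normal      ", round(2 * norm.sf(2), 4))
# Small df puts noticeably more probability in the tails than the normal curve does.
```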

Table of the t-distribution:

Table B.2

  • Values represent critical values (CV)
  • Values mark proportions in the tails
  • t is symmetric, so negative values are not tabled

Need three things:

(1) one-tailed or two-tailed hypothesis

(2) alpha (α) level of significance

(3) degrees of freedom (df)

  • Caution: not all values of df are in the table

If your df is not there, use the CV for the next smaller df

example: df = 43, use the CV for df = 40

df = 49, use the CV for df = 40
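With software, the exact critical value is available for any df; the sketch below (SciPy assumed) shows why falling back to the smaller tabled df is the conservative choice:

```python
from scipy.stats import t

alpha = 0.05                              # two-tailed
for df in (40, 43, 49):
    cv = t.ppf(1 - alpha / 2, df)         # positive critical value
    print(df, round(cv, 3))
# The CV for df = 40 is slightly larger than for df = 43 or 49,
# so using the smaller tabled df makes it a bit harder to reject H0.
```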


D. Step-by-Step Example

(1) Does a pleasant task affect judgments of time passed?

Hypothesized: μhypoth = 12

Sample data: M = 10.5, s² = 2.25, n = 10

(2) Statistical Hypotheses:

assume two-tailed

H0: μ = 12

H1: μ ≠ 12

(3) Decision Rule:

α = .05

df = n − 1 = 9

Critical value from Table B.2 = ±2.262

(4) Compute the observed t for one sample mean:

t = (M − μ) / sM, where sM = √(s²/n)

sM = √(2.25/10) ≈ 0.474        t = (10.5 − 12) / 0.474 ≈ −3.16

(5) Make a decision: |−3.16| > 2.262, so reject H0

(6) Interpret the results:
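The same steps can be checked in a few lines of Python (SciPy assumed); because only summary statistics are given, the sketch plugs them straight into the formula:

```python
import math
from scipy.stats import t

M, mu, s2, n = 10.5, 12, 2.25, 10
df = n - 1
sM = math.sqrt(s2 / n)                    # estimated standard error
t_obs = (M - mu) / sM                     # observed t
cv = t.ppf(0.975, df)                     # two-tailed critical value at alpha = .05
p = 2 * t.sf(abs(t_obs), df)              # two-tailed p-value
print(round(t_obs, 2), round(cv, 3), round(p, 4))   # -3.16, 2.262, p < .05 -> reject H0
```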

E. Additional Considerations

Bottom Line:

  • Use t whenever σ is unknown
  • Use the one-sample t when you have one sample mean compared against a hypothesized population μ
  • Your μhyp can be any reasonable value to test against
  • Hypothesis testing steps with t are the same as with z
  • Differences:
  • use s² to estimate σ²
  • need df (n − 1) to determine the critical value

What Does All this Mean in Practical Terms?

  • Critical values for t will be larger than for z

example: α = .05, two-tailed

zcrit = ±1.96

tcrit = ±3.18 when df = 3

tcrit = ±2.04 when df = 30

tcrit = ±1.98 when df = 120

  • Larger n, less discrepancy between t and z
  • Larger n, smaller critical value
  • Larger n, more power! Easier to reject H0
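The critical values quoted above can be reproduced with a short loop (SciPy assumed), confirming that tcrit shrinks toward zcrit = ±1.96 as df grows:

```python
from scipy.stats import norm, t

alpha = 0.05
print("z   ", round(norm.ppf(1 - alpha / 2), 2))             # 1.96
for df in (3, 30, 120):
    print("df =", df, round(t.ppf(1 - alpha / 2, df), 2))    # 3.18, 2.04, 1.98
```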

Reporting t in the literature:

t(9) = −3.16, p < .05

Written scientific report:

“When distracted by a pleasant task, participants significantly underestimated how many minutes had passed (M = 10.5 minutes, SD = 1.5) during the 12-minute task period, t(9) = -3.16, p < .05, two-tailed.”

  • Reject H0 p < .05 “statistically significant”

Retain H0 p > .05 “not statistically significant”

More about variance:

  • For inferential statistics, variance is bad
  • Big differences between scores make it hard to see trends or patterns
  • High variance = noise, confusion in the data
  • High variance = larger standard error
  • When variance is large, it is more difficult to obtain a statistically significant outcome, and effect size estimates will be smaller as well.
  • What can we do?

Decrease experimental error/noise

Increase sample size to reduce standard error
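A one-line illustration of the second remedy: the estimated standard error s/√n shrinks as n grows (s = 1.5 here is just a hypothetical value):

```python
import math

s = 1.5                                     # hypothetical sample standard deviation
for n in (10, 40, 160):
    print(n, round(s / math.sqrt(n), 3))    # standard error: 0.474, 0.237, 0.119
```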

More about p-values:

t(9) = −3.16, p < .05

  • p = probability of making a Type I error

p = probability in the tail(s) beyond the observed t

compare it to your alpha (α) level: is p > or < α?

  • Reject H0p

Retain H0p

  • p < observed t falls in critial region (in the tails)
  • p > observed t does not fall in critial region

F. t-test for Difference Between Two Independent Sample Means

Typically have two sample means:

example:

Does a new drug reduce depression?

Placebo: M1 = 30

New Drug: M2 = 25

Compare two sample means…

are they from the same population….

are the differences simply due to chance?

Research Problem:

Are people less willing to help when the person in

need is responsible for his/her own misfortune?

“Please take a moment to imagine that you're sitting in class one day and the guy sitting next to you mentions that he skipped class last week to go surfing (or because he had a terrible case of the flu). He then asks if he can borrow your lecture notes for the week. How likely are you to lend him your notes?”

1------2------3------4------5------6------7

1 = I definitely would NOT lend him my notes

7 = I definitely WOULD lend him my notes

High responsibility “went surfing”

Low responsibility “had a terrible case of the flu”

UCSB class data:

High responsibility: M1 = 4.65, s1² = 2.99

Low responsibility: M2 = 5.34, s2² = 2.06

Hypothesis Testing with the Two-Sample t

A. Statistical Hypotheses (two-tailed):

H0: μ1 = μ2

H1: μ1 ≠ μ2

Alternative form: H0: μ1 − μ2 = 0

H1: μ1 − μ2 ≠ 0

One-Tailed Hypotheses

Upper tail critical:

H0: μ1 ≤ μ2

H1: μ1 > μ2

Lower tail critical:

H0: μ1 ≥ μ2

H1: μ1 < μ2

Logic of the new t-test:

t = [(M1 − M2) − (μ1 − μ2)] / s(M1−M2)

M1 approximates μ1 with some error

M2 approximates μ2 with some error

Error for one sample mean: sM = √(s²/n)

Error for two sample means: s(M1−M2) = √(s1²/n1 + s2²/n2)

Note the use of equivalent symbols:

s(M1−M2) = sdiff = sM1−M2 (all name the estimated standard error of the difference)

B. Computing the independent-measures t-statistic:

t = sample mean difference / estimated standard error

observed t statistic:

t = (M1 − M2) / s(M1−M2)

degrees of freedom:

df = (n1 − 1) + (n2 − 1)

estimated standard error (when n1 = n2):

s(M1−M2) = √(s1²/n1 + s2²/n2)

What if sample sizes are unequal?

Take weighted average of the two variances
(a pooled variance estimate):

Step 1: pooled variance estimate:

sp² = (SS1 + SS2) / (df1 + df2)  or  sp² = (df1·s1² + df2·s2²) / (df1 + df2)

Step 2: standard error:

s(M1−M2) = √(sp²/n1 + sp²/n2)
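A minimal sketch of both steps in Python, working from summary statistics only (the function name pooled_se is illustrative):

```python
import math

def pooled_se(s2_1, n1, s2_2, n2):
    """Pooled variance, then the estimated standard error of (M1 - M2)."""
    df1, df2 = n1 - 1, n2 - 1
    sp2 = (df1 * s2_1 + df2 * s2_2) / (df1 + df2)   # weighted average of the two variances
    return math.sqrt(sp2 / n1 + sp2 / n2)

# UCSB class data from the research problem above:
print(round(pooled_se(2.99, 74, 2.06, 73), 3))      # ~0.262
```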

G. Step-by-Step Example: t-Test for Two Independent Sample Means

sample data:

High Resp (Surfing): M1 = 4.65, n1 = 74, s1² = 2.99

Low Resp (Flu): M2 = 5.34, n2 = 73, s2² = 2.06

(1) Research Question/Hypothesis:

IV = responsibility for the misfortune (high vs. low)

DV = willingness to lend notes (1–7 rating)

(2) State the Statistical Hypotheses:

H0: μ1 = μ2

H1: μ1 ≠ μ2

(3) Create a Decision Rule:

(a) α = .05

(b) two-tailed test

(c) df = (n1 − 1) + (n2 − 1) = 73 + 72 = 145

Closest tabled df = 120, so the critical value is ±1.98

(4) Compute the Observed t:

(a) First compute the pooled variance:

sp² = (df1·s1² + df2·s2²) / (df1 + df2) = (73 × 2.99 + 72 × 2.06) / 145 ≈ 2.53

(b) Compute the standard error:

sdiff = √(sp²/n1 + sp²/n2) = √(2.53/74 + 2.53/73) ≈ 0.262

(c) Compute t: t = (M1 − M2) / sdiff = (4.65 − 5.34) / 0.262 ≈ −2.63

(5) Decision: |−2.63| > 1.98, so reject H0

(6) Interpretation:
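SciPy can reproduce the whole computation from the summary statistics alone (standard deviations are the square roots of the variances above; equal_var=True requests the pooled-variance test):

```python
import math
from scipy.stats import ttest_ind_from_stats

res = ttest_ind_from_stats(mean1=4.65, std1=math.sqrt(2.99), nobs1=74,
                           mean2=5.34, std2=math.sqrt(2.06), nobs2=73,
                           equal_var=True)
print(round(res.statistic, 2), round(res.pvalue, 4))   # t ~ -2.63, p < .05 -> reject H0
```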

Homework Problems

Chapter 9: 7, 8, 12

Chapter 10: 8 (not section e), 10, 13