Statistics 550 Notes 20

Reading: Section 4.1-4.2

I. Hypothesis Testing Basic Setup (Chapter 4.1)

Review

Motivating Example: A graphologist claims to be able to identify the writing of a schizophrenic person from a nonschizophrenic person. The graphologist is given a set of 10 folders, each containing handwriting samples of two persons, one nonschizophrenic and the other schizophrenic. In an experiment, the graphologist made 6 correct identifications. Is there strong evidence that the graphologist is able to identify the writing of schizophrenics better than a person who was randomly guessing would?

Probability model: Let $p$ be the probability that the graphologist successfully identifies the writing of a randomly chosen schizophrenic vs. nonschizophrenic person. A reasonable model is that the 10 trials are iid Bernoulli with probability of success $p$.

Hypotheses: $H_0: p = 0.5$ (the graphologist does no better than random guessing) versus $H_1: p > 0.5$.

The alternative/research hypothesis ($H_1$ here) should be the hypothesis we're trying to establish.

Test Statistic: A statistic $W(X)$ that is used to decide whether to accept or reject $H_0$.

In the motivating example, a natural test statistic is $W$, the number of successful identifications of schizophrenics the graphologist makes. The observed value of this test statistic is 6.

Critical region: Region of values of the test statistic for which we reject the null hypothesis, e.g., $\{W \geq c\}$ for some cutoff $c$.

Errors in Hypothesis Testing:

True State of Nature
Decision / $H_0$ is true / $H_1$ is true
Reject $H_0$ / Type I error / Correct decision
Accept (retain) $H_0$ / Correct decision / Type II error

The best critical region would make the probability of a Type I error small when $H_0$ is true and the probability of a Type II error small when $H_1$ is true. But in general there is a tradeoff between these two types of errors.

Size of a test: We say that a test with test statistic $W$ and critical region $C$ is of size $\alpha$ if

$\alpha = \max_{\theta \in \Theta_0} P_{\theta}(W \in C)$.

The size of the test is the maximum probability (where the maximum is taken over all $\theta$ that are part of the null hypothesis; for the motivating example, the null hypothesis only has one value of $p$ in it) of making a Type I error when the null hypothesis is true. A test with size at most $\alpha$ is said to be a test of (significance) level $\alpha$.

Suppose we use the critical region $\{W \geq 6\}$ with the test statistic $W$ for the motivating example. The size of the test is $P(Y \geq 6) = 0.377$, where Y has a binomial distribution with n=10 and probability p=0.5.

Power: The power of a test at an alternative $\theta_1$ is the probability of making a correct decision when $\theta_1$ is the true parameter (i.e., the probability of not making a Type II error when $\theta_1$ is the true parameter).

The power of the test with test statistic $W$ and critical region $\{W \geq 6\}$ at p=0.6 is $P(Y \geq 6) = 0.633$ and at p=0.7 is $P(Y \geq 6) = 0.850$, where Y has a binomial distribution with n=10 and probability p. The power depends on the specific parameter in the alternative hypothesis that is being considered.

Power function: $\beta(\theta) = P_{\theta}(W \in C)$, the probability of rejecting $H_0$ viewed as a function of the parameter $\theta$.
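As a concrete check, the size and power of the test that rejects when $W \geq 6$ can be reproduced with a short computation (a sketch using only the Python standard library; the helper name `binom_tail` is my own, not from the notes):

```python
from math import comb

def binom_tail(k, n, p):
    """P(Y >= k) for Y ~ Binomial(n, p)."""
    return sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k, n + 1))

# Size: probability of rejecting when the null hypothesis (p = 0.5) is true
size = binom_tail(6, 10, 0.5)

# Power at two alternatives in H1: p > 0.5
power_06 = binom_tail(6, 10, 0.6)
power_07 = binom_tail(6, 10, 0.7)

print(round(size, 3), round(power_06, 3), round(power_07, 3))  # 0.377 0.633 0.85
```

Note how the power grows as the alternative moves farther from the null value p = 0.5.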

Neyman-Pearson paradigm: Set the size of the test to be at most some small level, typically 0.10, 0.05 or 0.01 (most commonly 0.05) in order to protect against Type I errors. Then among tests that have this size, choose the one that has the “best” power function. In Chapter 4.2, we will define more precisely what we mean by “best” power function and derive optimal tests for certain situations.

For the test statistic $W$, the critical region $\{W \geq 6\}$ has a size of 0.377; this gives too high a probability of Type I error. The critical region $\{W \geq 8\}$ has a size of 0.0547, which makes the probability of a Type I error reasonably small. Using $\{W \geq 8\}$, we retain the null hypothesis for the actual experiment, for which W was equal to 6.

P-value: For a test statistic $W$, consider a family of critical regions $\{C_i\}$, each with a different size. For the observed value $w$ of the test statistic from the sample, consider the subset $\mathcal{C}$ of critical regions for which we would reject the null hypothesis, $\mathcal{C} = \{C_i : w \in C_i\}$. The p-value is the minimum size of the tests in the subset $\mathcal{C}$,

p-value = $\min \{\text{size of } C_i : w \in C_i\}$.

The p-value is the minimum significance level for which we would still reject the null hypothesis; in this sense, it is a measure of how much evidence there is against the null hypothesis.

Consider the family of critical regions $\{W \geq i\}$, $i = 0, 1, \ldots, 10$, for the motivating example. Since the graphologist made 6 correct identifications, we reject the null hypothesis for the critical regions $\{W \geq i\}$ with $i \leq 6$. The minimum size among these critical regions is attained at i=6 and equals 0.377. The p-value is thus 0.377.
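The p-value calculation just described can be sketched in Python (standard library only; `binom_tail` is a helper name of my own choosing):

```python
from math import comb

def binom_tail(k, n, p):
    """P(W >= k) for W ~ Binomial(n, p)."""
    return sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k, n + 1))

observed = 6  # the graphologist's number of correct identifications

# Sizes of the critical regions {W >= i} that contain the observed value
# (i <= observed); the p-value is the smallest of these, attained at i = observed.
p_value = min(binom_tail(i, 10, 0.5) for i in range(observed + 1))
print(round(p_value, 3))  # 0.377
```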

Scale of evidence

p-value / Evidence
<0.01 / very strong evidence against the null hypothesis
0.01-0.05 / strong evidence against the null hypothesis
0.05-0.10 / weak evidence against the null hypothesis
>0.10 / little or no evidence against the null hypothesis

Warnings:

(1) A large p-value is not strong evidence in favor of $H_0$. A large p-value can occur for two reasons: (i) $H_0$ is true or (ii) $H_0$ is false but the test has low power at the true alternative.

(2) Do not confuse the p-value with $P(H_0 \mid \text{data})$. The p-value is not the probability that the null hypothesis is true.

II. Testing simple versus simple hypotheses: Bayes procedures

Consider testing a simple null hypothesis $H_0: \theta = \theta_0$ versus a simple alternative hypothesis $H_1: \theta = \theta_1$, i.e., $X \sim p(x, \theta_0)$ under the null hypothesis and $X \sim p(x, \theta_1)$ under the alternative hypothesis.

Example 1: $X_1, \ldots, X_n$ iid $N(\theta, 1)$. $H_0: \theta = 0$, $H_1: \theta = 1$.

Example 2: $X$ has one of the following two distributions (rows give $P(X = x)$):

x / 0 / 1 / 2 / 3 / 4
$p(x, \theta_0)$ / 0.1 / 0.1 / 0.1 / 0.2 / 0.5
$p(x, \theta_1)$ / 0.3 / 0.3 / 0.2 / 0.1 / 0.1

Bayes procedures: Consider 0-1 loss, i.e., the loss is 1 if we choose the incorrect hypothesis and 0 if we choose the correct hypothesis. Let the prior probabilities be $\pi_0$ on $H_0$ and $\pi_1 = 1 - \pi_0$ on $H_1$. The posterior probability for $H_0$ is

$P(H_0 \mid X = x) = \dfrac{\pi_0 \, p(x, \theta_0)}{\pi_0 \, p(x, \theta_0) + \pi_1 \, p(x, \theta_1)}$.

The posterior risk for 0-1 loss is minimized by choosing the hypothesis with higher posterior probability.

Thus, the Bayes rule is to choose $H_1$ (equivalently, reject $H_0$) if

$\pi_1 \, p(x, \theta_1) > \pi_0 \, p(x, \theta_0)$, i.e., $\dfrac{p(x, \theta_1)}{p(x, \theta_0)} > \dfrac{\pi_0}{\pi_1}$,

and choose $H_0$ (equivalently, accept $H_0$) otherwise.

For $\pi_0 = \pi_1 = 0.5$, the Bayes rule is to choose $H_1$ if

$p(x, \theta_1) > p(x, \theta_0)$

and choose $H_0$ otherwise.

Note that the Bayes risk for the prior $\pi_0 = \pi_1 = 0.5$ of a test is 0.5*P(Type I error) + 0.5*P(Type II error). Thus, the Bayes procedure for the prior $\pi_0 = \pi_1 = 0.5$ minimizes the sum of the probability of a Type I error and the probability of a Type II error.

Example 1 continued: Suppose $n = 5$, $(X_1, \ldots, X_5) = (1.1064, 1.1568, -0.1602, 1.0343, -0.1079)$, and $\pi_0 = \pi_1 = 0.5$. Then we choose $H_1$ if $p(x, \theta_1) > p(x, \theta_0)$, or equivalently

$\sum_{i=1}^n X_i > \dfrac{n}{2} + \log \dfrac{\pi_0}{\pi_1}$,

which for this data is $3.0294 > 2.5$, so we choose $H_1$. Writing $\pi_0 = \pi$, we have that we choose $H_1$ for priors with $\pi/(1-\pi) < \exp(3.0294 - 2.5) \approx 1.698$, i.e., $\pi < 0.629$, and $H_0$ for priors with $\pi > 0.629$.
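The numbers in this example can be checked with a short computation (a sketch; it assumes the Gaussian setup $H_0: \theta = 0$ vs. $H_1: \theta = 1$ with unit variance, and the variable names are mine):

```python
from math import exp

# Data from Example 1 (continued)
x = [1.1064, 1.1568, -0.1602, 1.0343, -0.1079]
n, s = len(x), sum(x)

# Likelihood ratio L(x) = exp(sum(x_i) - n/2); the Bayes rule with prior pi
# on H0 chooses H1 iff L(x) > pi / (1 - pi)
L = exp(s - n / 2)

# The cutoff prior solves pi / (1 - pi) = L, i.e. pi = L / (1 + L):
# H1 is chosen for smaller pi, H0 for larger pi
pi_cutoff = L / (1 + L)
print(round(s, 4), round(pi_cutoff, 3))  # 3.0294 0.629
```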

III. Neyman-Pearson Lemma (Section 4.2)

In the Neyman-Pearson paradigm, the hypotheses are not treated symmetrically: Fix $\alpha$. Among tests having level (Type I error probability at most) $\alpha$, find the one that has the “best” power function.

For simple vs. simple hypotheses, the best power function means the best power at $\theta_1$. Such a test is called the most powerful (MP) level $\alpha$ test.

The Neyman-Pearson lemma provides us with a most powerful level $\alpha$ test for simple vs. simple hypotheses.

Analogy: To fill up a bookshelf with books at the least cost, we should start by picking the book with the largest width per dollar and continue in decreasing order of width per dollar. Similarly, to find a most powerful level $\alpha$ test, we should start by including in the critical region those sample points that are most likely under the alternative relative to the null hypothesis, and continue until the probability of the critical region under the null hypothesis reaches $\alpha$.

Define the likelihood ratio statistic by

$L(x, \theta_0, \theta_1) = \dfrac{p(x, \theta_1)}{p(x, \theta_0)}$,

where $p(x, \theta)$ is the probability mass function or probability density function of the data X.

The statistic L takes on the value $\infty$ when $p(x, \theta_1) > 0$ and $p(x, \theta_0) = 0$, and by convention equals 0 when both $p(x, \theta_0) = p(x, \theta_1) = 0$.

We can describe a test by a test function $\varphi(x)$, the probability of rejecting $H_0$ when $X = x$. When $L(x, \theta_0, \theta_1) > k$, we always reject $H_0$. When $L(x, \theta_0, \theta_1) = k$, we conduct a Bernoulli trial and reject $H_0$ with probability $\gamma$ (thus we allow for randomized tests). When $L(x, \theta_0, \theta_1) < k$, we always accept $H_0$.

We call $\varphi_k$ a likelihood ratio test if

$\varphi_k(x) = 1$ if $L(x, \theta_0, \theta_1) > k$; $\varphi_k(x) = \gamma$ if $L(x, \theta_0, \theta_1) = k$; $\varphi_k(x) = 0$ if $L(x, \theta_0, \theta_1) < k$,

for some $0 \leq k \leq \infty$ and $0 \leq \gamma \leq 1$.

Theorem 4.2.1 (Neyman-Pearson Lemma): Consider testing $H_0: \theta = \theta_0$ vs. $H_1: \theta = \theta_1$.

(a) If $\alpha > 0$ and $\varphi_k$ is a size $\alpha$ likelihood ratio test, then $\varphi_k$ is a most powerful level $\alpha$ test.

(b) For each $0 < \alpha \leq 1$, there exists a most powerful size $\alpha$ likelihood ratio test.

(c) If $\varphi$ is a most powerful level $\alpha$ test, then it must be a level $\alpha$ likelihood ratio test except perhaps on a set A satisfying $P_{\theta_0}(X \in A) = P_{\theta_1}(X \in A) = 0$.

Example 1: $X_1, \ldots, X_n$ iid $N(\theta, 1)$. $H_0: \theta = 0$, $H_1: \theta = 1$.

The likelihood ratio statistic is

$L(x, 0, 1) = \dfrac{\prod_{i=1}^n \exp\{-(x_i - 1)^2/2\}}{\prod_{i=1}^n \exp\{-x_i^2/2\}} = \exp\left( \sum_{i=1}^n x_i - \dfrac{n}{2} \right).$

Rejecting the null hypothesis for large values of $L(x, 0, 1)$ is equivalent to rejecting the null hypothesis for large values of $\sum_{i=1}^n X_i$.

What should the cutoff be? The distribution of $\sum_{i=1}^n X_i$ under the null hypothesis is $N(0, n)$, so the most powerful level $\alpha$ test rejects for

$\sum_{i=1}^n X_i \geq \sqrt{n} \, \Phi^{-1}(1 - \alpha)$,

where $\Phi$ is the CDF of a standard normal.
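The cutoff can be computed with the standard library's `statistics.NormalDist` (a sketch; the values n = 5 and $\alpha = 0.05$ are illustrative choices of mine, not from the notes):

```python
from math import sqrt
from statistics import NormalDist

n, alpha = 5, 0.05  # illustrative values

# MP level-alpha test: reject when sum(X_i) >= sqrt(n) * Phi^{-1}(1 - alpha),
# since sum(X_i) ~ N(0, n) under H0: theta = 0
cutoff = sqrt(n) * NormalDist().inv_cdf(1 - alpha)
print(round(cutoff, 3))  # 3.678
```

For the data of Example 1 (continued), $\sum x_i = 3.0294$ falls below this cutoff, so the MP level-0.05 test retains $H_0$ even though the Bayes rule with equal priors chose $H_1$.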

Example 2:

x / 0 / 1 / 2 / 3 / 4
$p(x, \theta_0)$ / 0.1 / 0.1 / 0.1 / 0.2 / 0.5
$p(x, \theta_1)$ / 0.3 / 0.3 / 0.2 / 0.1 / 0.1
$L(x, \theta_0, \theta_1)$ / 3 / 3 / 2 / 0.5 / 0.2

The most powerful level 0.2 test rejects $H_0$ if and only if X=0 or 1; its size is $P_{\theta_0}(X \in \{0, 1\}) = 0.1 + 0.1 = 0.2$.

There are multiple most powerful level 0.1 tests, e.g., 1) reject the null hypothesis if and only if X=0; 2) reject the null hypothesis if and only if X=1; 3) flip a coin to decide whether to reject the null hypothesis when X=0 or X=1. Each has size 0.1 and power 0.3.
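The bookshelf-style greedy construction applied to Example 2 can be sketched as follows (the function `mp_region` is my own; it handles only the case where the level is hit exactly by whole sample points, without randomization):

```python
# Distributions of X under the null (p0) and alternative (p1) in Example 2
p0 = {0: 0.1, 1: 0.1, 2: 0.1, 3: 0.2, 4: 0.5}
p1 = {0: 0.3, 1: 0.3, 2: 0.2, 3: 0.1, 4: 0.1}

# Sort sample points by likelihood ratio p1/p0, largest first
by_lr = sorted(p0, key=lambda x: p1[x] / p0[x], reverse=True)

def mp_region(alpha):
    """Greedily add the highest-likelihood-ratio points until alpha is used up."""
    region, size = [], 0.0
    for x in by_lr:
        if size + p0[x] > alpha + 1e-12:
            break
        region.append(x)
        size += p0[x]
    return region, size

print(mp_region(0.2))  # reject iff X = 0 or 1, size 0.2
print(mp_region(0.1))  # one of the MP level-0.1 tests: reject iff X = 0
```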

Proof of Neyman-Pearson Lemma:

(a) We prove the result here for continuous random variables X. The proof for discrete random variables follows by replacing integrals with sums.

Let $\varphi$ be the test function of any other level $\alpha$ test besides $\varphi_k$. Because $\varphi$ is level $\alpha$, $\int \varphi(x) \, p(x, \theta_0) \, dx \leq \alpha$. We want to show that $\int \varphi_k(x) \, p(x, \theta_1) \, dx \geq \int \varphi(x) \, p(x, \theta_1) \, dx$.

We examine $\int [\varphi_k(x) - \varphi(x)][p(x, \theta_1) - k \, p(x, \theta_0)] \, dx$ and show that it is $\geq 0$. From this, we conclude that

$\int [\varphi_k(x) - \varphi(x)] \, p(x, \theta_1) \, dx \geq k \int [\varphi_k(x) - \varphi(x)] \, p(x, \theta_0) \, dx.$

The latter integral is $\geq 0$ because

$\int \varphi_k(x) \, p(x, \theta_0) \, dx = \alpha \geq \int \varphi(x) \, p(x, \theta_0) \, dx.$

Hence, we conclude that $\int \varphi_k(x) \, p(x, \theta_1) \, dx \geq \int \varphi(x) \, p(x, \theta_1) \, dx$ as desired.

To show that $[\varphi_k(x) - \varphi(x)][p(x, \theta_1) - k \, p(x, \theta_0)] \geq 0$ for all x, let x be any sample point.

Suppose $\varphi_k(x) > \varphi(x)$. This implies $\varphi_k(x) > 0$, which implies that $p(x, \theta_1) \geq k \, p(x, \theta_0)$. Thus,

$[\varphi_k(x) - \varphi(x)][p(x, \theta_1) - k \, p(x, \theta_0)] \geq 0$.

Also, similarly, if $\varphi_k(x) < \varphi(x)$, then $\varphi_k(x) < 1$, which implies $p(x, \theta_1) \leq k \, p(x, \theta_0)$, and

$[\varphi_k(x) - \varphi(x)][p(x, \theta_1) - k \, p(x, \theta_0)] \geq 0$

(since both factors are $\leq 0$).

Thus, in every case

$[\varphi_k(x) - \varphi(x)][p(x, \theta_1) - k \, p(x, \theta_0)] \geq 0,$

and this shows that

$\int [\varphi_k(x) - \varphi(x)][p(x, \theta_1) - k \, p(x, \theta_0)] \, dx \geq 0$

as argued above.

(b) Let $\alpha(c) = P_{\theta_0}(L(X, \theta_0, \theta_1) > c) = 1 - F_{\theta_0}(c)$, where $F_{\theta_0}$ is the cdf of $L(X, \theta_0, \theta_1)$ under $H_0$. By the properties of CDFs, $\alpha(c)$ is nonincreasing in c and right continuous.

By the right continuity of $F_{\theta_0}$, there exists $k$ such that

$P_{\theta_0}(L > k) \leq \alpha \leq P_{\theta_0}(L \geq k)$. So define

$\gamma = \dfrac{\alpha - P_{\theta_0}(L > k)}{P_{\theta_0}(L = k)}$ if $P_{\theta_0}(L = k) > 0$, and $\gamma = 0$ otherwise.

Then,

$E_{\theta_0}[\varphi_k(X)] = P_{\theta_0}(L > k) + \gamma \, P_{\theta_0}(L = k) = \alpha.$

So we can take the most powerful size $\alpha$ test to be the likelihood ratio test with this $k$ and $\gamma$.
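The construction in part (b) can be illustrated on the binomial example from Section I, where the likelihood ratio is increasing in W, so thresholding L is the same as thresholding W (a sketch; the target size 0.05 is an illustrative choice of mine):

```python
from math import comb

n, p, alpha = 10, 0.5, 0.05  # under H0, W ~ Binomial(10, 0.5)

def pmf(k):
    return comb(n, k) * p**k * (1 - p)**(n - k)

def tail(k):
    """P(W >= k) under H0."""
    return sum(pmf(j) for j in range(k, n + 1))

# Smallest cutoff c with P(W > c) <= alpha; then randomize on {W = c}
c = next(k for k in range(n + 1) if tail(k + 1) <= alpha)
gamma = (alpha - tail(c + 1)) / pmf(c)

size = tail(c + 1) + gamma * pmf(c)  # equals alpha by construction
print(c, round(gamma, 3))  # reject if W > 8; if W = 8, reject with prob ~0.893
```

No non-randomized region $\{W \geq c\}$ has size exactly 0.05 here (0.0107 and 0.0547 bracket it), which is why the randomization probability $\gamma$ is needed.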

(c) Let $\varphi$ be the test function for any most powerful level $\alpha$ test. By parts (a) and (b), a likelihood ratio test $\varphi_k$ with size $\alpha$ can be found that is most powerful. Since $\varphi$ and $\varphi_k$ are both most powerful, it follows that

$\int \varphi(x) \, p(x, \theta_1) \, dx = \int \varphi_k(x) \, p(x, \theta_1) \, dx.$ (1.1)

Following the proof in part (a), (1.1) implies that

$\int [\varphi_k(x) - \varphi(x)][p(x, \theta_1) - k \, p(x, \theta_0)] \, dx = 0,$

which can be the case if and only if $\varphi(x) = \varphi_k(x) = 1$ when $L(x, \theta_0, \theta_1) > k$ (i.e., $p(x, \theta_1) > k \, p(x, \theta_0)$) and $\varphi(x) = \varphi_k(x) = 0$ when $L(x, \theta_0, \theta_1) < k$ (i.e., $p(x, \theta_1) < k \, p(x, \theta_0)$), except on a set A satisfying $P_{\theta_0}(X \in A) = P_{\theta_1}(X \in A) = 0$.
