Statistics 550 Notes 1

Reading: Section 1.1.

I. Basic definitions and examples of models (Section 1.1.1)

Goal of statistics: Draw useful information from data.

Model based approach to statistics: Treat data as the outcome of a random experiment that we model mathematically. We draw useful information from the data by drawing inferences about parameters that describe the random experiment.

Random Experiment: Any procedure that (1) can be repeated, theoretically, over and over; and (2) has a well-defined set of possible outcomes (the sample space).

The outcome of the experiment is the data $X$.

Examples of experiments and data:

  • Experiment: Randomly select 1000 people without replacement from the U.S. adult population and ask them whether they are employed. Data: $X_1, \ldots, X_{1000}$, where $X_i = 1$ if person $i$ in the sample is employed and $X_i = 0$ if person $i$ in the sample is not employed.
  • Experiment: Randomly sample 500 handwritten ZIP codes on envelopes from U.S. postal mail. Data: $X_1, \ldots, X_{500}$, where $X_i$ is a $216 \times 72$ matrix whose elements are numbers from 0 to 255 that represent the intensity of writing in each part of the image.

The probability distribution of the data over repeated experiments is $P$.

Frequentist concept of probability:

$P(X \in E)$ = proportion of times in repeated experiments that the data falls in the set $E$.
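The frequentist concept can be illustrated by simulation. A minimal sketch (the coin-toss experiment and the event $E$ are hypothetical examples of my own, not from the notes): repeat the experiment many times and record the proportion of repetitions in which the data falls in $E$.

```python
import math
import random

random.seed(0)

# Hypothetical experiment: toss a fair coin 10 times; the data X is the
# number of heads.  Event E: X >= 7.
n_repeats = 200_000
count_in_E = 0
for _ in range(n_repeats):
    x = sum(random.random() < 0.5 for _ in range(10))
    if x >= 7:
        count_in_E += 1

# Relative frequency of E over repeated experiments
freq = count_in_E / n_repeats

# Exact probability P(X in E) under the Binomial(10, 1/2) distribution,
# for comparison with the long-run frequency
exact = sum(math.comb(10, k) for k in range(7, 11)) / 2**10  # = 176/1024
```

As `n_repeats` grows, `freq` converges to `exact`, which is the frequentist meaning of $P(X \in E)$.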

(Statistical) Model: Family of possible $P$'s:

$\mathcal{P} = \{P_\theta : \theta \in \Theta\}$. The $\theta$'s label the $P$'s, and $\Theta$ is a space of labels called the parameter space.

Goal of statistical inference: On the basis of the data, make inferences about the true $P$ that generated the data.

We will study three types of inferences:

(1) Point estimation – best estimate of $\theta$.

(2) Hypothesis testing – decide whether $\theta$ is in a specified subset $\Theta_0$ of $\Theta$.

(3) Interval (set) estimation – estimate a set that $\theta$ lies in.

Goal of this course: Study how to make “good” inferences.

Example of a statistical model:

Example 1: Yao Ming’s free throw shooting.

What is the probability that Yao Ming will make a free throw the next time he attempts one in an NBA game?

Data: In the 2007-2008 season, Yao made 345 out of the 406 free throws he attempted (85.0%). Let $X_1, X_2, \ldots, X_{406}$ denote whether or not Yao made his 1st, 2nd, …, 406th free throw of the season (1 denotes made, 0 denotes missed).

Model 1: IID Bernoulli model. $X_1, \ldots, X_{406}$ are independent and identically distributed (iid) random variables with a Bernoulli($\theta$) distribution:

$P_\theta(X_i = 1) = \theta$, $P_\theta(X_i = 0) = 1 - \theta$, $\theta \in \Theta = [0, 1]$.
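As a quick numerical sketch of Model 1 (using nothing beyond the counts given above), the natural estimate of $\theta$ is the sample proportion, which also maximizes the Bernoulli log-likelihood:

```python
import math

n_made, n_attempts = 345, 406  # Yao's 2007-2008 free throws

# Sample proportion: the natural estimate of theta under the iid Bernoulli model
theta_hat = n_made / n_attempts  # about 0.850

def log_likelihood(theta):
    """Log-likelihood of theta under the iid Bernoulli model."""
    return n_made * math.log(theta) + (n_attempts - n_made) * math.log(1 - theta)

# theta_hat beats nearby values of theta, consistent with it being the
# maximum likelihood estimate
assert log_likelihood(theta_hat) > log_likelihood(theta_hat - 0.01)
assert log_likelihood(theta_hat) > log_likelihood(theta_hat + 0.01)
```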

Model 2: Markov chain model. Let $X_1, \ldots, X_{406}$ follow a Markov chain with transition matrix

Last Shot
This Shot / Made / Missed
Make / $a$ / $b$
Miss / $c = 1 - a$ / $d = 1 - b$

where $a = P(\text{make this shot} \mid \text{made last shot})$ and $b = P(\text{make this shot} \mid \text{missed last shot})$. The stationary probability of making a free throw, $\pi$, is defined by $\pi = a\pi + b(1 - \pi)$, i.e., $\pi = b/(1 - a + b)$. We furthermore assume that the initial free throw is drawn from the stationary distribution, i.e., $P(X_1 = 1) = \pi$. In this model, each of the free throws has a marginal Bernoulli($\pi$) distribution, but the free throws can be dependent. The model can be specified as

$\theta = (a, b)$, $\Theta = [0, 1] \times [0, 1]$.
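To make Model 2 concrete, here is a small sketch (the two transition probabilities are hypothetical numbers, not estimates from Yao's data) that computes the stationary probability of a make and verifies it against the transition matrix:

```python
import numpy as np

# Hypothetical transition probabilities:
a = 0.87  # P(make this shot | made last shot)
b = 0.78  # P(make this shot | missed last shot)

# Transition matrix over the states (made, missed)
T = np.array([[a, 1 - a],
              [b, 1 - b]])

# Stationary probability of a make: pi solves pi = a*pi + b*(1 - pi),
# so pi = b / (1 - a + b)
pi = b / (1 - a + b)

# Check: the distribution (pi, 1 - pi) is invariant under T
dist = np.array([pi, 1 - pi])
assert np.allclose(dist @ T, dist)
```

When $a = b$ the chain reduces to the iid Bernoulli model, so Model 1 is nested inside Model 2.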

Other modeling issues:

(1) Should we use just the 2007-2008 data or also use previous seasons' data? (Yao's free throw shooting might be changing over time.)

(2) Even if we knew the true $\theta$ in the above models, is this the right $\theta$ to use to predict Yao's first free throw of the 2008-2009 season? (Again, Yao's free throw shooting might change over time.)

Choosing models:

Consultation with subject matter experts and knowledge about how the data are collected are important for selecting a reasonable model.

George Box (1979): “Models, of course, are never true but fortunately it is only necessary that they be useful.”

We will focus mostly on making inferences about the true $P$ conditional on the model's validity, i.e., assuming that the true $P$ belongs to the family of distributions specified by the model, $\mathcal{P}$. Another important step in data analysis is to investigate the model's validity through diagnostics (techniques for doing this will be discussed in Chapter 4).

II. Parameterization and Parameters (Section 1.1.2)

Model: $\mathcal{P} = \{P_\theta : \theta \in \Theta\}$.

Parameterization: A way of labeling the distributions in the model. Formally, an onto map $\theta \mapsto P_\theta$ from a parameter space $\Theta$ to the model $\mathcal{P}$ is called a parameterization of $\mathcal{P}$.

Example 1 continued: In Model 1 for Yao's free throw shooting, a parameterization is $\theta \mapsto P_\theta$, $\theta \in [0, 1]$, where under $P_\theta$, $X_1, \ldots, X_{406}$ are iid Bernoulli($\theta$).

The parameterization is not unique. In Model 1 for Yao's free throw shooting, we could also label the distributions in the model by the odds of making a free throw, $\eta = \theta/(1 - \theta)$, instead of by $\theta$ itself.

We try to choose a parameterization in which the components of the parameterization are interpretable in terms of the phenomenon we are trying to measure.

Example 2: The level of phosphate in the blood of kidney dialysis patients is of concern because kidney failures and dialysis can lead to nutritional problems. Phosphate levels tend to vary normally over time. Doctors are interested in the mean level of phosphate over a period of time. A blood test was performed on a dialysis patient on six consecutive clinic visits. The data is $X_1, \ldots, X_6$, the milligrams of phosphate per deciliter at the six visits. The model is

$X_1, \ldots, X_6$ iid $N(\mu, \sigma^2)$ (where $\mu$ and $\sigma^2$ are the mean and variance of the normal distribution, respectively).

Three possible parameterizations are $(\mu, \sigma^2)$, $(\mu, \sigma)$ and $(\mu/\sigma, \sigma)$. The first two parameterizations are more interpretable because they contain one parameter, $\mu$, that corresponds exactly to what we are interested in, the patient's mean phosphate level.

Parametric vs. nonparametric models: Models in which $\Theta$ is a nice subset of a finite-dimensional Euclidean space are called "parametric" models; e.g., the model in Example 2 is parametric. Models in which $\Theta$ is infinite-dimensional are called "nonparametric." For example, if in Example 2 we considered $X_1, \ldots, X_6$ iid from any distribution with a density, the model would be nonparametric.

Identifiability: The parameterization $\theta \mapsto P_\theta$ is identifiable if the map is one-to-one, i.e., if $\theta_1 \neq \theta_2$ implies $P_{\theta_1} \neq P_{\theta_2}$.

The parameterization is unidentifiable if there exist $\theta_1 \neq \theta_2$ such that $P_{\theta_1} = P_{\theta_2}$.

When the parameterization is unidentifiable, parts of $\theta$ remain unknowable even with "infinite amounts of data," i.e., even if we knew the true $P$.

Example 3: Suppose $X_1, \ldots, X_n$ are iid exponential with mean $1/\lambda$, i.e.,

$p(x; \lambda) = \lambda e^{-\lambda x}$, $x \geq 0$.

The parameterization $\lambda \mapsto P_\lambda$, $\lambda \in (0, \infty)$, is identifiable. The parameterization $(\lambda_1, \lambda_2) \mapsto P_{\lambda_1 + \lambda_2}$, $(\lambda_1, \lambda_2) \in (0, \infty)^2$, is unidentifiable because, for example, $(\lambda_1, \lambda_2) = (1, 2)$ and $(\lambda_1, \lambda_2) = (2, 1)$ are distinct labels for the same distribution.
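A tiny numerical illustration of unidentifiability: consider a hypothetical parameterization of the exponential distribution by a pair $(\lambda_1, \lambda_2)$ in which the rate is $\lambda_1 + \lambda_2$. Distinct labels then give the identical density, so no amount of data can distinguish them.

```python
import math

def density(x, lam1, lam2):
    """Exponential density under the hypothetical (lam1, lam2) labeling,
    where the rate is lam1 + lam2."""
    rate = lam1 + lam2
    return rate * math.exp(-rate * x)

# (1, 2) and (2, 1) are different labels for the identical distribution:
xs = [0.1 * k for k in range(50)]
assert all(math.isclose(density(x, 1.0, 2.0), density(x, 2.0, 1.0)) for x in xs)
```

The sum $\lambda_1 + \lambda_2$ is identified (it is a function of the distribution), but the individual components $\lambda_1$ and $\lambda_2$ are not.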

Note on notation: We will use $p(x; \theta)$ to denote the probability mass function of $P_\theta$ if the distribution is discrete or the probability density function if the distribution is continuous.

Parameter: A parameter is a feature $\nu(P)$ of $P$, i.e., a map from $\mathcal{P}$ to another space $\mathcal{N}$.

e.g., for Example 2, $X_1, \ldots, X_6$ iid $N(\mu, \sigma^2)$:

$\mu$, the mean of each $X_i$, is a parameter.

$\sigma^2$, the variance of each $X_i$, is a parameter.

$\mu/\sigma$ is a parameter.

Some parameters are of interest and others are nuisance parameters that are not of central interest.

In Example 2, for the parameterization $(\mu, \sigma^2)$, the parameter $\mu$ is the parameter of interest and the parameter $\sigma^2$ is a nuisance parameter. The doctors' primary interest is in the mean phosphate level.

A parameter is by definition identified, meaning that if we knew the true $P$, we would know the parameter $\nu(P)$.

Proposition: For a given parameterization $\theta \mapsto P_\theta$, $\theta$ is a parameter if and only if the parameterization is identifiable.

Proof: If the parameterization is identifiable, then $\theta$ is equal to the inverse of the parameterization, the map that takes $P_\theta$ to $\theta$. If the parameterization is not identifiable, then for some $\theta_1 \neq \theta_2$ we have $P_{\theta_1} = P_{\theta_2}$, and consequently we can't write $\theta = \nu(P_\theta)$ for any function $\nu$.

Remark: Even if the parameterization is unidentifiable, components of the parameterization may be identified (i.e., parameters).

Why would we ever want to consider an unidentifiable parameterization?

Components of the parameterization may capture the scientific features of interest. We may then be interested in whether those components are identified.

Example 5: Survey nonresponse. A major problem in survey sampling is nonresponse.

Example: On Sunday, Sept. 11, 1988, the San Francisco Examiner ran a story headlined:

3 IN 10 BIOLOGY TEACHERS BACK BIBLICAL CREATIONISM

Arlington, Texas. Thirty percent of high school biology teachers polled believe in the biblical creation and 19 percent incorrectly think that humans and dinosaurs lived at the same time, according to a nationwide survey published Saturday…

The poll was conducted by choosing 400 teachers at random from the National Science Teachers Association’s list of 20,000 teachers and sending these 400 teachers questionnaires. 200 of these 400 teachers returned the questionnaires and 60 of the 200 believe in biblical creationism.

Let $y_i = 1$ or $0$ according to whether the $i$th teacher believes in biblical creationism, $i = 1, \ldots, 20{,}000$.

Let $r_i = 1$ or $0$ according to whether the $i$th teacher would respond to the questionnaire if sent it, $i = 1, \ldots, 20{,}000$.

We would like to know the proportion of teachers that believe in biblical creationism, $\theta_B = \frac{1}{20{,}000} \sum_{i=1}^{20{,}000} y_i$.

The data from the experiment of randomly sampling 400 teachers is (i) the number of teachers that respond, call this $N_R$, and (ii) the number of teachers that respond and believe in biblical creationism, call this $N_{RB}$.

The distribution of $N_R$ over repeated random samples is hypergeometric: $N_R$ counts the would-be respondents in a sample of 400 drawn without replacement from a population of 20,000 of which $\sum_i r_i$ would respond. The conditional distribution of $N_{RB}$ given $N_R = m$ is also hypergeometric: the $m$ respondents are effectively a random sample without replacement from the $\sum_i r_i$ teachers who would respond, of whom $\sum_i r_i y_i$ believe in biblical creationism.

A parameterization for the model is $\theta = (\theta_R, \theta_B, \theta_{B|R})$, where

$\theta_R = \frac{1}{20{,}000} \sum_i r_i$ = proportion of teachers who would respond if sent the questionnaire;

$\theta_B = \frac{1}{20{,}000} \sum_i y_i$ = proportion of teachers who back biblical creationism;

$\theta_{B|R} = \frac{\sum_i r_i y_i}{\sum_i r_i}$ = among the teachers who would respond, the proportion who back biblical creationism.

This parameterization includes our quantity of interest, the proportion of teachers who back biblical creationism, $\theta_B$.

But this parameterization is not identifiable: $(\theta_R, \theta_B, \theta_{B|R})$ and $(\theta_R, \theta_B', \theta_{B|R})$ give the same distribution for the data for any $\theta_B \neq \theta_B'$, because the distribution of $(N_R, N_{RB})$ depends only on $\theta_R$ and $\theta_{B|R}$.

The quantity the article reported an estimate of, $\theta_{B|R}$, is identified (i.e., a parameter), but our quantity of interest, $\theta_B$, is not identified (i.e., not a parameter).
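The unidentifiability can be seen computationally: the sampling distribution of $(N_R, N_{RB})$ depends on the population only through $\sum_i r_i$ and $\sum_i r_i y_i$, so two populations that disagree about $\theta_B$ (via the believers among the nonresponders) give exactly the same distribution for the data. A toy sketch with made-up small numbers (not the 20,000-teacher population):

```python
from math import comb

def pmf_NR(m, N, num_responders, n):
    """P(N_R = m): hypergeometric; depends on the population only
    through the number of would-be responders."""
    return comb(num_responders, m) * comb(N - num_responders, n - m) / comb(N, n)

# Hypothetical population of N teachers, sample of n
N, n = 200, 40
num_responders = 100      # sum of r_i
num_resp_believers = 30   # sum of r_i * y_i

# Population A: 50 of the 100 nonresponders also believe; population B: none do.
theta_B_A = (num_resp_believers + 50) / N  # 0.40
theta_B_B = (num_resp_believers + 0) / N   # 0.15

# Both populations share sum(r_i) and sum(r_i * y_i), so P(N_R = m) -- and
# likewise the conditional law of N_RB given N_R -- is identical for the two;
# theta_B never enters either formula.
assert theta_B_A != theta_B_B
assert pmf_NR(20, N, num_responders, n) > 0
```

No statistic computed from $(N_R, N_{RB})$ can separate population A from population B, which is exactly what it means for $\theta_B$ not to be a parameter.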

III. Statistics

A statistic is a random variable or random vector that is a function of the data.

Example 2 continued: $X_1, \ldots, X_6$ iid $N(\mu, \sigma^2)$. Two statistics are the sample mean $\bar{X} = \frac{1}{6} \sum_{i=1}^{6} X_i$ and the sample variance $s^2 = \frac{1}{5} \sum_{i=1}^{6} (X_i - \bar{X})^2$.
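For concreteness, a sketch computing these two statistics on hypothetical phosphate measurements (the six numbers are made up for illustration):

```python
# Hypothetical phosphate levels (mg of phosphate per dl) at six clinic visits
x = [5.6, 5.1, 4.6, 4.8, 5.7, 6.4]
n = len(x)

xbar = sum(x) / n                                 # sample mean
s2 = sum((xi - xbar) ** 2 for xi in x) / (n - 1)  # sample variance
```

Note that a statistic is a function of the data only; $\bar{X}$ and $s^2$ can be computed without knowing $\mu$ or $\sigma^2$.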

IV. Regression Model (Section 1.1.4)

In the regression setting, each individual unit $i$ has a response variable $Y_i$ and a vector of explanatory variables $z_i = (z_{i1}, \ldots, z_{id})$. A regression model is a model for the distribution of $Y_i$ given $z_i$.

The multiple linear regression model is $Y_i = \beta_1 z_{i1} + \cdots + \beta_d z_{id} + \epsilon_i$, where the $\epsilon_i$ are iid $N(0, \sigma^2)$. The coefficient $\beta_j$ can be interpreted as the change in the mean of $Y$ that is associated with a one-unit change in $z_j$ when the other explanatory variables are held fixed.
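A short simulation sketch of the multiple linear regression model (the coefficients, sample size, and noise level are all made-up values), fitting $\beta$ by least squares:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate Y_i = beta_1*z_i1 + beta_2*z_i2 + eps_i, with eps_i iid N(0, sigma^2)
n, d = 500, 2
beta = np.array([2.0, -1.0])   # hypothetical true coefficients
sigma = 0.5
Z = rng.normal(size=(n, d))    # explanatory variables
Y = Z @ beta + rng.normal(scale=sigma, size=n)

# Least-squares estimate of beta
beta_hat, *_ = np.linalg.lstsq(Z, Y, rcond=None)
```

If one column of `Z` were an exact linear combination of the others, `lstsq` would still return a solution, but $\beta$ would no longer be identifiable; this is the collinearity issue discussed below.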

Example: The 1966 Coleman Report on “Equality of Educational Opportunity” sought to explain how student achievement in schools was associated with the resources of the school and the socioeconomic background of the student, e.g.,

$Y_i$ = verbal achievement score in school $i$ (6th graders)

$z_{i1}$ = staff salaries per pupil

$z_{i2}$ = % of students in the 6th grade of the school whose father has a white-collar occupation

$z_{i3}$ = SES (socioeconomic status)

$z_{i4}$ = teachers' average verbal scores

$z_{i5}$ = mothers' average education

Problem 1.1.9 (Problem 4 on Homework 1) is concerned with the impact of collinearity of the explanatory variables on the identifiability of the parameter vector $\beta = (\beta_1, \ldots, \beta_d)$. The variables would be collinear if one variable were a linear function of the other variables. The explanatory variables were close to being collinear in the Coleman study because socioeconomic status was highly correlated with the resources of the school (staff salaries per pupil) prior to the desegregation of schools.
