
STAT 211

Handout 6 (Chapter 6): Point Estimation

A point estimate of a parameter θ is a single number that can be regarded as the most plausible value of θ.

Unbiased Estimator: A point estimator θ̂ = θ + error of estimation is an unbiased estimator of θ if E(θ̂) = θ for every possible value of θ. Otherwise, it is biased, and Bias = E(θ̂) − θ.

Read Example 6.2 in your textbook.

Example 1: When X is a binomial r.v. with parameters n and p, the sample proportion X/n is an unbiased estimator of p.

To prove this, you need to show E(p̂) = p, where p̂ = X/n.

E(X/n) = E(X)/n, using the rules of expected value,

= np/n = p, since if X ~ Binomial(n, p) then E(X) = np (Chapter 3).
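The identity E(X/n) = p can also be checked numerically by summing over the binomial pmf; this is a sketch, and the values n = 10, p = 0.3 are arbitrary choices for illustration:

```python
from math import comb

# Exact E(X/n) for X ~ Binomial(n, p): sum of (x/n) * C(n,x) * p^x * (1-p)^(n-x)
n, p = 10, 0.3  # arbitrary illustrative values
expected = sum((x / n) * comb(n, x) * p**x * (1 - p)**(n - x)
               for x in range(n + 1))
print(expected)  # equals p, confirming E(X/n) = p
```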

Example 2: A sample of 15 students who had taken a calculus class yielded the following information on the brand of calculator owned: T H C T H H C T T C C H S S S (T: Texas Instruments, H: Hewlett Packard, C: Casio, S: Sharp).

(a) Estimate the true proportion of all such students who own a Texas Instruments calculator.

Answer = 4/15 = 0.2667

(b) Three out of four calculators made by Hewlett Packard (and only Hewlett Packard) utilize reverse Polish logic. Estimate the true proportion of all such students who own a calculator that does not use reverse Polish logic.

Answer = 1 − (3/4)(4/15) = 0.80
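Both estimates can be computed directly from the sample:

```python
# Calculator brands for the 15 sampled students (T, H, C, S as in the example)
data = list("THCTHHCTTCCHSSS")
n = len(data)

# (a) proportion owning a Texas Instruments calculator
p_T = data.count("T") / n

# (b) only HP calculators may use reverse Polish logic, and 3/4 of them do,
# so the estimated proportion NOT using it is 1 - (3/4) * (proportion owning HP)
p_no_rpn = 1 - 0.75 * data.count("H") / n

print(round(p_T, 4), round(p_no_rpn, 4))  # 0.2667 0.8
```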

Example 3 (Exercise 6.8): In a random sample of 80 components of a certain type, 12 are found to be defective.

(a) Give a point estimate of the proportion of all such components that are not defective.

Answer = 68/80 = 0.85

(b) Randomly select 5 of these components and connect them in series for the system, so the system works only if all 5 components work. Estimate the proportion of all such systems that work properly.

Answer = (0.85)^5 = 0.4437
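The series-system estimate follows from the estimate in (a):

```python
# (a) estimate of the proportion of components that are not defective
p_hat = (80 - 12) / 80

# (b) a series system of 5 components works only if all 5 work,
# so the estimated proportion of working systems is p_hat**5
system_hat = p_hat ** 5

print(p_hat, round(system_hat, 4))  # 0.85 0.4437
```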

Example 4 (Exercise 6.12):

X: yield of the 1st type of fertilizer, with E(X) = μ1 and Var(X) = σ1².

Y: yield of the 2nd type of fertilizer, with E(Y) = μ2 and Var(Y) = σ2².

Show that X̄ − Ȳ is an unbiased estimator of μ1 − μ2.

It means that you need to show E(X̄ − Ȳ) = μ1 − μ2, which follows from E(X̄ − Ȳ) = E(X̄) − E(Ȳ) = μ1 − μ2.

Example 5 (Exercise 6.13): Let X1, X2, …, Xn be a random sample from the pdf f(x; θ) = 0.5(1 + θx), −1 ≤ x ≤ 1, −1 ≤ θ ≤ 1. Show that θ̂ = 3X̄ is an unbiased estimator for θ.

It means that you need to show E(3X̄) = θ.

E(3X̄) = 3E(X̄) = 3E(X) = 3(θ/3) = θ, where

E(X) = ∫ from −1 to 1 of x · 0.5(1 + θx) dx = 0.5 [x²/2 + θx³/3] from −1 to 1 = θ/3.
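The key integral E(X) = θ/3 can be spot-checked numerically; this is a sketch using a simple midpoint rule rather than symbolic integration, and θ = 0.6 is an arbitrary test value:

```python
def pdf(x, theta):
    """f(x; theta) = 0.5 * (1 + theta * x) on [-1, 1]."""
    return 0.5 * (1 + theta * x)

def mean(theta, steps=200_000):
    """Midpoint-rule approximation of E(X) = integral of x * f(x; theta) over [-1, 1]."""
    h = 2.0 / steps
    return sum((-1 + (i + 0.5) * h) * pdf(-1 + (i + 0.5) * h, theta) * h
               for i in range(steps))

theta = 0.6  # arbitrary test value in [-1, 1]
print(mean(theta), theta / 3)  # both approximately 0.2
```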

The standard error: The standard error of an estimator θ̂ is its standard deviation σθ̂ = √Var(θ̂).

The estimated standard error: The estimated standard error of an estimator θ̂ is its estimated standard deviation σ̂θ̂ = sθ̂.

The minimum variance unbiased estimator (MVUE): The best point estimator. Among all estimators of θ that are unbiased, choose the one that has minimum variance. The resulting θ̂ is the MVUE.

Example 6: If we go back to Example 1, the standard error of p̂ = X/n is σp̂ = √(p(1 − p)/n), where Var(p̂) = Var(X)/n² = np(1 − p)/n² = p(1 − p)/n.

Example 7: If we go back to Example 5, the standard error of θ̂ = 3X̄ is σθ̂ = √(9 Var(X)/n) = √((3 − θ²)/n),

where Var(X) = E(X²) − [E(X)]² = 1/3 − (θ/3)² = (3 − θ²)/9, and

E(X²) = ∫ from −1 to 1 of x² · 0.5(1 + θx) dx = 1/3.

Example 8: For the normal distribution, X̄ is the MVUE for μ (proof omitted here).

The following graphs were generated by creating 500 samples of size 5 from N(0, 1) and calculating the sample mean and the sample median for each sample.
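That experiment can be reproduced with a short simulation (a sketch; the seed is arbitrary). With samples of size 5 from N(0, 1), the 500 sample means cluster more tightly around 0 than the 500 sample medians, which is the point of the MVUE discussion:

```python
import random
import statistics

random.seed(42)  # arbitrary seed for reproducibility
means, medians = [], []
for _ in range(500):                                 # 500 samples ...
    sample = [random.gauss(0, 1) for _ in range(5)]  # ... of size 5 from N(0, 1)
    means.append(statistics.mean(sample))
    medians.append(statistics.median(sample))

# The sample mean is the MVUE for mu: its spread is visibly smaller.
print(statistics.pvariance(means), statistics.pvariance(medians))
```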

Example 9 (Exercise 6.3): Normally distributed coating-thickness data yield the following summary statistics.

Variable    n    Mean    Median  TrMean  StDev   SE Mean
thickness   16   1.3481  1.3950  1.3507  0.3385  0.0846

Variable    Minimum  Maximum  Q1      Q3
thickness   0.8300   1.8300   1.0525  1.6425

(a) A point estimate of the mean value of coating thickness.

Answer = x̄ = 1.3481

(b) A point estimate of the median value of coating thickness.

Answer = sample median = 1.3950

(c) A point estimate of the value that separates the largest 10% of all values in the coating thickness distribution from the remaining 90%: x̄ + z(0.10)·s = 1.3481 + 1.28(0.3385).

Answer = 1.78138

(d) Estimate P(X < 1.5) (the proportion of all thickness values less than 1.5): Φ((1.5 − 1.3481)/0.3385) = Φ(0.45).

Answer = 0.6736

(e) Estimated standard error of the estimator used in (a): s/√n = 0.3385/√16.

Answer = 0.084625
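Parts (c)–(e) can be reproduced from the summary statistics alone; this is a sketch using Python's statistics.NormalDist, and since the handout rounds z to 1.28 and 0.45 while the code uses exact quantiles, the last digits differ slightly:

```python
from math import sqrt
from statistics import NormalDist

n, xbar, s = 16, 1.3481, 0.3385
dist = NormalDist(mu=xbar, sigma=s)  # normality assumed, as in the exercise

q90 = dist.inv_cdf(0.90)   # (c) estimated 90th percentile, about 1.7819
p_lt = dist.cdf(1.5)       # (d) estimated P(X < 1.5), about 0.6732
se_mean = s / sqrt(n)      # (e) estimated standard error of the sample mean

print(round(q90, 4), round(p_lt, 4), se_mean)
```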

METHODS OF OBTAINING POINT ESTIMATORS

  1. The Method of Moments (MME)

Let X1, X2, …, Xn be a random sample from a pmf or pdf. For k = 1, 2, …, the kth population moment of the distribution is E(X^k). The kth sample moment is (1/n)Σ Xi^k.

Steps to follow: If you have only one unknown parameter,

(i) calculate E(X);

(ii) equate it to the first sample moment, X̄;

(iii) solve for the unknown parameter (such as θ1).

If you have two unknown parameters, you also need to compute the following to solve for the two unknowns with two equations:

(iv) calculate E(X²);

(v) equate it to the second sample moment, (1/n)Σ Xi²;

(vi) solve for the second unknown parameter (such as θ2).

If you have more than two unknown parameters, repeat the same steps for k = 3, … until you can solve for all of them.

Example 10: Show that the MME of the parameter λ in the Poisson distribution is λ̂ = X̄.

There is one unknown parameter.

The 1st population moment of the distribution is E(X) = λ.

The 1st sample moment is X̄.

Equating them gives λ = X̄, so λ̂ = X̄ is the MME for λ.

Example 11: Find the MMEs for the parameters α and β in the gamma distribution.

There are two unknown parameters.

The 1st population moment of the distribution is E(X) = αβ.

The 1st sample moment is X̄.

Then αβ = X̄, but this alone does not determine either parameter. We need to continue the steps.

The 2nd population moment of the distribution is E(X²) = αβ²(1 + α).

The 2nd sample moment is (1/n)Σ Xi².

Then αβ²(1 + α) = (1/n)Σ Xi².

Since we have two unknown parameters and two equations, we can solve for the unknown parameters.

The MMEs for α and β are α̂ = X̄² / [(1/n)Σ Xi² − X̄²] and β̂ = [(1/n)Σ Xi² − X̄²] / X̄, respectively.
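The two-equation solution can be sketched in code; the data values below are made up for illustration and do not come from the text:

```python
# Method-of-moments estimates for a gamma(alpha, beta) sample.
data = [2.3, 1.7, 4.1, 3.3, 2.9, 1.2, 3.8, 2.5]  # illustrative data only

n = len(data)
m1 = sum(data) / n                 # first sample moment, x-bar
m2 = sum(x * x for x in data) / n  # second sample moment

# From alpha*beta = m1 and alpha*beta^2*(1 + alpha) = m2:
beta_hat = (m2 - m1 * m1) / m1
alpha_hat = m1 * m1 / (m2 - m1 * m1)

# Sanity check: the fitted parameters reproduce the sample moments.
print(round(alpha_hat * beta_hat, 6) == round(m1, 6))  # True
```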

Example 12: Find the MMEs for the parameters μ and σ² in the normal distribution.

There are two unknown parameters.

The 1st population moment of the distribution is E(X) = μ.

The 1st sample moment is X̄.

Then μ = X̄, but we still need to solve for the second unknown parameter. We need to continue the steps.

The 2nd population moment of the distribution is E(X²) = μ² + σ².

The 2nd sample moment is (1/n)Σ Xi².

Then μ² + σ² = (1/n)Σ Xi², which can be solved for the second unknown parameter.

The MMEs for μ and σ² are μ̂ = X̄ and σ̂² = (1/n)Σ Xi² − X̄² = (1/n)Σ(Xi − X̄)², respectively.
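Note that the method-of-moments estimator of σ² divides by n, not n − 1, so it differs from the usual sample variance s²; in Python's statistics module these correspond to pvariance and variance. A small illustration (the data are made up):

```python
import statistics

data = [4.1, 5.3, 3.8, 6.0, 4.9]  # illustrative data, not from the text
n = len(data)
xbar = statistics.mean(data)

mu_hat = xbar                                        # MME for mu
sigma2_hat = sum((x - xbar) ** 2 for x in data) / n  # MME for sigma^2 (divides by n)

print(abs(sigma2_hat - statistics.pvariance(data)) < 1e-9)  # same estimator: True
print(sigma2_hat < statistics.variance(data))               # smaller than s^2: True
```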

  2. The Method of Maximum Likelihood (MLE)

The likelihood function is the joint pmf or pdf of X1, …, Xn, regarded as a function of the unknown parameter(s) θ when the x's are observed. The maximum likelihood estimates are the θ values that maximize the likelihood function.

Steps to follow:

(i) Determine the likelihood function.

(ii) Take the natural logarithm of the likelihood function.

(iii) Take the first derivative with respect to each unknown θ and equate it to zero (if you have m unknown parameters, you will have m equations as a result of the derivatives).

(iv) Solve for the unknown θ's.

(v) Check that the solution really maximizes the function by looking at the second derivative.

Example 13: Show that the MLE of the parameter λ in the Poisson distribution is λ̂ = X̄.

There is one unknown parameter.

L = likelihood = p(x1, x2, …, xn) = p(x1)p(x2)…p(xn) by independence

= [e^(−λ) λ^x1 / x1!] … [e^(−λ) λ^xn / xn!] = e^(−nλ) λ^(Σxi) / (x1! … xn!)

ln(L) = −nλ + (Σxi) ln(λ) − Σ ln(xi!)

d ln(L)/dλ = −n + (Σxi)/λ = 0,

then the MLE of λ is λ̂ = (Σxi)/n = X̄.
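A numeric check that λ̂ = x̄ maximizes the Poisson log-likelihood (a sketch; the small count data are made up for illustration):

```python
from math import lgamma, log

data = [3, 1, 4, 2, 0, 2, 5, 3]  # illustrative Poisson counts, not from the text
n, total = len(data), sum(data)
xbar = total / n

def loglik(lam):
    """Poisson log-likelihood: -n*lam + (sum x_i)*ln(lam) - sum ln(x_i!)."""
    return -n * lam + total * log(lam) - sum(lgamma(x + 1) for x in data)

# Evaluate on a grid of candidate lambda values; the maximizer should be xbar.
grid = [0.01 * k for k in range(1, 1001)]
best = max(grid, key=loglik)
print(best, xbar)  # the best grid point coincides with xbar = 2.5
```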

The Invariance Principle: Let θ̂1, θ̂2, …, θ̂m be the MLEs of the parameters θ1, θ2, …, θm. Then the MLE of any function h(θ1, θ2, …, θm) of these parameters is the function h(θ̂1, θ̂2, …, θ̂m) of the MLEs.

Example 14:

(1) Let X1, …, Xn be a random sample of normally distributed random variables with mean μ and standard deviation σ.

The method of moments estimates of μ and σ² are X̄ and (1/n)Σ(Xi − X̄)², respectively.

The maximum likelihood estimates of μ and σ² are also X̄ and (1/n)Σ(Xi − X̄)², respectively.

(2) Let X1, …, Xn be a random sample of exponentially distributed random variables with parameter λ.

The method of moments estimate and the maximum likelihood estimate of λ are both 1/X̄.

(3) Let X1, …, Xn be a random sample of binomially distributed random variables with parameter p.

The method of moments estimate and the maximum likelihood estimate of p are both X/n.

(4) Let X1, …, Xn be a random sample of Poisson distributed random variables with parameter λ.

The method of moments estimate and the maximum likelihood estimate of λ are both X̄.

Are all of the estimates above unbiased? Some are, but others are not (this will be discussed in class).

Example 15 (Exercise 6.20): A random sample of n bike helmets is selected.

X: the number among the n that are flawed, x = 0, 1, 2, …, n

p = P(flawed)

(a) What is the maximum likelihood estimate (MLE) of p if n = 20 and x = 3?

(b) Is the estimator in (a) unbiased?

(c) What is the MLE of (1 − p)^5, the probability that none of the next five helmets examined is flawed?

(d) Instead of selecting 20 helmets to examine, suppose helmets are examined in succession until 3 flawed ones are found. How would X and the estimator of p differ?

Example 16 (Exercise 6.22):

X: the proportion of allotted time that a randomly selected student spends working on a certain aptitude test.

The pdf of X is f(x; θ) = (θ + 1)x^θ, 0 ≤ x ≤ 1, θ > −1.

A random sample of 10 students yield the data: 0.92, 0.79, 0.90, 0.65, 0.86, 0.47, 0.73, 0.97, 0.94, 0.77.

(a) Obtain the MME of θ and compute the estimate using the data.

Set E(X) = (θ + 1)/(θ + 2) equal to X̄ and then solve for θ: θ̂ = (2X̄ − 1)/(1 − X̄).

The given data yield x̄ = 0.80, so the method of moments estimate for θ is θ̂ = 3.

(b) Obtain the MLE of θ and compute the estimate using the data.

L = likelihood = (θ + 1)^n (x1 x2 … xn)^θ

ln(L) = n ln(θ + 1) + θ Σ ln(xi)

d ln(L)/dθ = n/(θ + 1) + Σ ln(xi) = 0, then solve for θ: θ̂ = −n/Σ ln(xi) − 1.

The given data yield Σ ln(xi) = −2.4295, so the maximum likelihood estimate for θ is θ̂ = 10/2.4295 − 1 = 3.1161.
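Both estimates for this data set can be reproduced directly:

```python
from math import log

data = [0.92, 0.79, 0.90, 0.65, 0.86, 0.47, 0.73, 0.97, 0.94, 0.77]
n = len(data)
xbar = sum(data) / n

# (a) MME: solve (theta + 1)/(theta + 2) = xbar for theta
theta_mme = (2 * xbar - 1) / (1 - xbar)

# (b) MLE: theta_hat = -n / sum(ln x_i) - 1
sum_log = sum(log(x) for x in data)
theta_mle = -n / sum_log - 1

# approximately 3.0, -2.4295, 3.1161
print(round(theta_mme, 4), round(sum_log, 4), round(theta_mle, 4))
```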

Proposition: Under very general conditions on the joint distribution of the sample, when the sample size n is large, the MLE of any parameter θ is approximately unbiased and has a variance that is nearly as small as can be achieved by any estimator.