The Main Idea of Statistical Inference Is to Take a Random Sample from a Population And

Introduction

The main idea of statistical inference is to take a random sample from a population and then to use the information from the sample to make inferences about particular population characteristics such as the mean (a measure of central tendency), the standard deviation (a measure of spread), or the proportion of units in the population that have a certain characteristic. Sampling saves money, time, and effort. Additionally, a sample can, in some cases, provide as much information as a corresponding study that would attempt to investigate an entire population: careful collection of data from a sample will often provide better information than a less careful study that tries to look at everything.

We must study the behavior of the mean of sample values from different specified populations. Because a sample examines only part of a population, the sample mean will not exactly equal the corresponding mean of the population. Thus, an important consideration for those planning and interpreting sampling results is the degree to which sample estimates, such as the sample mean, will agree with the corresponding population characteristic.

In practice, only one sample is usually taken (in some cases such as "survey data analysis" a small "pilot sample" is used to test the data-gathering mechanisms and to get preliminary information for planning the main sampling scheme). However, for purposes of understanding the degree to which sample means will agree with the corresponding population mean, it is useful to consider what would happen if 10, or 50, or 100 separate sampling studies, of the same type, were conducted. How consistent would the results be across these different studies? If we could see that the results from each of the samples would be nearly the same (and nearly correct!), then we would have confidence in the single sample that will actually be used. On the other hand, seeing that answers from the repeated samples were too variable for the needed accuracy would suggest that a different sampling plan (perhaps with a larger sample size) should be used.
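This thought experiment is easy to run on a computer. The sketch below (using a made-up normal population; all numbers are illustrative assumptions, not values from the text) draws 100 separate samples of size 30 from the same population and examines how much the sample means vary from study to study:

```python
import random
import statistics

# Hypothetical population: 100,000 values with mean about 50.
random.seed(1)
population = [random.gauss(50, 10) for _ in range(100_000)]
true_mean = statistics.mean(population)

# Conduct 100 separate sampling studies of the same type (n = 30 each)
# and record the mean from each one.
sample_means = [statistics.mean(random.sample(population, 30))
                for _ in range(100)]

# The spread of these 100 means shows how far any single study's mean
# may fall from the population mean; a larger n shrinks this spread.
print(f"population mean:                {true_mean:.2f}")
print(f"std. deviation of sample means: {statistics.stdev(sample_means):.2f}")
```

If the repeated means cluster tightly around the population mean, a single sample of that size can be trusted; if they scatter too widely for the needed accuracy, a larger sample size is called for.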

Quota Sampling: Quota sampling is availability sampling, but with the constraint that proportionality by strata be preserved. Thus the interviewer will be told to interview so many white male smokers, so many black female nonsmokers, and so on, to improve the representativeness of the sample. Maximum variation sampling is a variant of quota sampling, in which the researcher purposively and non-randomly tries to select a set of cases that exhibit maximal differences on variables of interest. Further variations include extreme or deviant case sampling and typical case sampling.

What Is the Margin of Error?

Estimation is the process by which sample data are used to indicate the value of an unknown quantity in a population.

Results of estimation can be expressed as a single value, known as a point estimate, or as a range of values, referred to as a confidence interval.

Whenever we use a point estimate, we also calculate the margin of error associated with it. For example, when estimating the population proportion by means of the sample proportion P, the margin of error is often calculated as:

±1.96 [P(1-P)/n]^½
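A minimal sketch of this calculation (the survey counts below are hypothetical):

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """95% margin of error for a sample proportion p based on n observations."""
    return z * math.sqrt(p * (1 - p) / n)

# Suppose 520 of 1,000 respondents answered "yes":
p = 520 / 1000
print(f"P = {p:.2f}, margin of error = ±{margin_of_error(p, 1000):.3f}")
```

With these numbers the margin of error is about ±3.1 percentage points.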

In newspaper and television reports on public opinion polls, the margin of error often appears in small print at the bottom of a table or screen. However, reporting the amount of error alone is not informative enough; what is missing is the degree of confidence in the findings. An even more important missing piece of information is the sample size n: how many people participated in the survey, 100 or 100,000? As you know by now, the larger the sample size, the more accurate the finding.

The reported margin of error is the margin of "sampling error." There are many nonsampling errors that can and do affect the accuracy of polls; here we discuss sampling error only. Note also that subgroups of the sample have larger sampling error than the sample as a whole. A careful report should therefore include a statement such as: "Other sources of error include, but are not limited to, individuals refusing to participate in the interview and inability to connect with the selected number. Every feasible effort is made to obtain a response and reduce the error, but the reader (or the viewer) should be aware that some error is inherent in all research."

If you have a yes/no question in a survey, you probably want to estimate the proportion P of Yes's (or No's). Under simple random sampling, the variance of P is P(1-P)/n, ignoring the finite population correction, for large n (say, over 30). A 95% confidence interval is then

P - 1.96 [P(1-P)/n]^½,  P + 1.96 [P(1-P)/n]^½.

A conservative interval can be calculated, since P(1-P) takes its maximum value when P = 1/2. Replace 1.96 by 2 and put P = 1/2, and you have a conservative 95% confidence interval of P ± 1/n^½. This approximation works well as long as P is not too close to 0 or 1, and it allows you to calculate approximate 95% confidence intervals quickly.
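A short sketch contrasting the two intervals (the values used are illustrative):

```python
import math

def exact_ci(p: float, n: int, z: float = 1.96) -> tuple:
    """95% confidence interval for a sample proportion p."""
    half = z * math.sqrt(p * (1 - p) / n)
    return (p - half, p + half)

def conservative_ci(p: float, n: int) -> tuple:
    """Worst case p = 1/2 with z rounded up to 2 gives a half-width of
    1/sqrt(n), whatever the observed p."""
    half = 1 / math.sqrt(n)
    return (p - half, p + half)

# With n = 400 the conservative half-width is 1/20 = 0.05,
# while the exact half-width is never larger than that.
print(exact_ci(0.5, 400))
print(conservative_ci(0.5, 400))
```

The conservative interval is always at least as wide as the exact one, which is what makes it safe to quote without knowing P in advance.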


Sample Size Determination

The question of how large a sample to take arises early in the planning of any survey. This is an important question that should not be treated lightly. Taking a larger sample than is needed to achieve the desired results wastes resources, whereas very small samples often lead to results of no practical use for making good decisions. The main objective is to obtain both a desired accuracy and a desired confidence level at minimum cost.

Pilot Sample: A pilot or preliminary sample is drawn from the population, and the statistics computed from it are used in determining the sample size. Observations from the pilot sample may be counted as part of the final sample, so that the computed sample size minus the pilot sample size is the number of additional observations needed to satisfy the total sample size requirement.

People sometimes ask me, what fraction of the population do you need? I answer, "It's irrelevant; accuracy is determined by sample size alone." This answer has to be modified if the sample is a sizable fraction of the population.

For an item scored 0/1 for no/yes, the standard deviation of the item scores is given by SD = [p(1-p)]^½, where p is the proportion obtaining a score of 1 and N is the sample size.

The standard error of the estimate, SE (the standard deviation of the range of possible p values based on the pilot sample estimate), is given by SE = SD/N^½ = [p(1-p)/N]^½. SE is at a maximum when p = 0.5; thus the worst-case scenario occurs when 50% agree and 50% disagree.

The sample size N can then be taken as the smallest integer greater than or equal to 0.25/SE².

Thus, for SE to be 0.01 (i.e., 1%), a sample size of 2500 would be needed; for 2%, 625; for 3%, 278; for 4%, 157; for 5%, 100.
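This table is easy to reproduce; a minimal sketch, rounding up so that the target SE is guaranteed:

```python
import math

def sample_size_for_se(se: float) -> int:
    """Smallest N whose worst-case (p = 0.5) standard error is at most se."""
    return math.ceil(0.25 / se ** 2)

for se in (0.01, 0.02, 0.03, 0.04, 0.05):
    print(f"SE = {se:.0%}: N = {sample_size_for_se(se)}")
```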

Note, incidentally, that as long as the sample is a small fraction of the total population, the actual size of the population is entirely irrelevant for the purposes of this calculation.

Sample size for binary data:

n = [t² N p(1-p)] / [t² p(1-p) + Δ² (N-1)]

with N being the total number of cases, n the sample size, Δ the expected (acceptable) error, t the value taken from the t distribution corresponding to a certain confidence interval, and p the probability of the event.
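As a sketch of this formula, taking t = 1.96 (95% confidence), the conservative p = 0.5, and a margin of error Δ = 0.05 (all three values are assumptions for illustration, not taken from the text):

```python
import math

def finite_population_n(N: int, t: float = 1.96, p: float = 0.5,
                        delta: float = 0.05) -> int:
    """Sample size for binary data drawn from a finite population of N cases.

    t     : critical value for the desired confidence level
    p     : anticipated proportion (0.5 is the conservative choice)
    delta : acceptable margin of error
    """
    n = (t ** 2 * N * p * (1 - p)) / (t ** 2 * p * (1 - p)
                                      + delta ** 2 * (N - 1))
    return math.ceil(n)

# The required n levels off as the population grows:
for N in (500, 5_000, 50_000):
    print(f"N = {N:>6}: n = {finite_population_n(N)}")
```

Note how the required n approaches a ceiling (here roughly 385, the infinite-population value) as N grows, echoing the earlier point that population size is nearly irrelevant for large populations.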

For a finite population of size N, the standard error of the mean of a sample of size n is:

S [(N - n)/(nN)]^½

where S is the population standard deviation.

There are several formulas for the sample size needed for a t-test. The simplest one is

n = 2(Z+Z)22/D2

which underestimates the sample size, but is reasonable for large sample sizes. A less inaccurate formula replaces the Z values with t values, and requires iteration, since the df for the t distribution depends on the sample size. The accurate formula uses a non-central t distribution and it also requires iteration.
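The Z-based version can be sketched with standard-normal critical values from the standard library (σ = 10, D = 5, and 80% power are illustrative assumptions; the t-based refinements require iteration and a t-distribution routine, which the standard library lacks):

```python
from statistics import NormalDist

def n_per_group(sigma: float, d: float, alpha: float = 0.05,
                power: float = 0.80) -> float:
    """Approximate per-group n for a two-sided two-sample t-test,
    using n = 2 (Z_alpha + Z_beta)^2 sigma^2 / D^2."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided critical value
    z_beta = NormalDist().inv_cdf(power)           # power = 1 - beta
    return 2 * (z_alpha + z_beta) ** 2 * sigma ** 2 / d ** 2

# Detecting a difference of 5 units when sigma = 10:
print(f"n per group ≈ {n_per_group(sigma=10, d=5):.1f}")
```

Since the formula underestimates the exact t-based answer, rounding up (and adding a subject or two per group) is prudent.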

The simplest approximation is to replace the first Z value in the above formula with the value from the studentized range statistic that is used to derive Tukey's follow-up test. If you don't have sufficiently detailed tables of the studentized range, you can approximate the Tukey follow-up test using a Bonferroni correction: change the first Z value to Zα/k, where k is the number of comparisons.

Neither of these solutions is exact and the exact solution is a bit messy. But either of the above approaches is probably close enough, especially if the resulting sample size is larger than (say) 30.

A better stopping rule for conventional statistical tests is as follows:
Test some minimum (pre-determined) number of subjects.
Stop if the p-value is equal to or less than .01, or equal to or greater than .36; otherwise, run more subjects.

Obviously, another option is to stop if/when the number of subjects becomes too great for the effect to be of practical interest. This procedure maintains the overall Type I error rate at about 0.05.

We may categorize probability proportional to size (PPS) sampling, stratification, and ratio estimation (or any other form of model-assisted estimation) as tools that protect one from the results of a very unlucky sample. The first two (PPS sampling and stratification) do this by manipulating the sampling plan (PPS sampling is conceptually a limiting case of stratification). Model-assisted estimation methods such as ratio estimation serve the same purpose by introducing ancillary information into the estimation procedure. Which tools are preferable depends on costs, on the availability of information that allows their use, and on the potential payoffs (none of these will help much if the stratification/PPS/ratio-estimation variable is not well correlated with the response variable of interest).

There are also heuristic methods for determining sample size. For example, in healthcare behavior and process measurement, sampling criteria are often designed for a 95% CI of 10 percentage points around a population mean of 0.50. A common heuristic rule is: "If the number of individuals in the target population is smaller than 50 per month, systems do not use sampling procedures but attempt to collect data from all individuals in the target population."