Chelsea Hutto
PSYC 8000
Exam 1
50 points
Due 2/13/2014
- If my experimental hypothesis were ‘Eating cheese before bed affects the number of nightmares you have’, what would the null hypothesis be? (1 pt)
- Eating cheese before bed gives you more nightmares.
- Eating cheese is linearly related to the number of nightmares you have.
- The number of nightmares you have is not affected by eating cheese before bed.
- Eating cheese before bed gives you fewer nightmares.
- What does a significant test statistic tell us? (1 pt)
- There is an important effect.
- That the test statistic is larger than we would expect if there were no effect in the population.
- The null hypothesis is false.
- All of the above.
- Give one example of each of the following: (3 pts)
- Nominal variable - The number assigned to a contestant in a competition.
- Ordinal variable - Contestants who won a competition, ranked in order such as first, second, and third place.
- Interval variable - Ratings of performance made on a five-point rating scale.
- What is a Type I error? What is one way that researchers can reduce their risk of making a Type I error? (2 pts)
A Type I error occurs when we believe there is a genuine effect in the population when, in reality, there is not. One method that keeps the Type I error rate below .05 is the Bonferroni correction, in which alpha is divided by the number of comparisons (k) and each comparison is tested against that corrected value. Researchers could also simply lower the alpha level, but this in turn increases the likelihood of a Type II error.
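As a quick illustration of the alpha/k rule described above, here is a minimal Python sketch; the overall alpha of .05, the choice of k = 5 comparisons, and the p-values are all made up for the example:

```python
# Minimal sketch of the Bonferroni correction.
# Illustrative assumptions: overall alpha = .05, k = 5 comparisons,
# and a made-up list of p-values.
alpha = 0.05
k = 5
alpha_corrected = alpha / k          # .05 / 5 = .01 per comparison

p_values = [0.004, 0.03, 0.011, 0.20, 0.049]   # hypothetical p-values
for i, p in enumerate(p_values, start=1):
    decision = "significant" if p < alpha_corrected else "not significant"
    print(f"Comparison {i}: p = {p:.3f} -> {decision} at corrected alpha = {alpha_corrected}")
```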
- What is a Type II error? What is one way that researchers can reduce their risk of making a Type II error? (2 pts)
A Type II error occurs when we believe there is no effect in the population when, in reality, there is. Researchers can calculate the power of a test, which is the probability that the test will find an effect, assuming that one exists in the population. By designing studies with adequate power (for example, by collecting a larger sample), researchers reduce the likelihood of making a Type II error.
- In 1-2 sentences, explain what power is, and one way in which power can be increased.(2 pts)
Power is the probability that a given test will find an effect assuming that one exists in the population. Increasing the sample size increases the power of a test.
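To illustrate the point about sample size, here is a small Monte Carlo sketch; the effect size (d = 0.5), the alpha level, the sample sizes, and the number of simulations are all illustrative assumptions:

```python
# Monte Carlo sketch: power of an independent-samples t-test increases
# with sample size when a true effect exists in the population.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
d, alpha, n_sims = 0.5, 0.05, 2000      # assumed effect size, alpha, simulations

for n in (20, 50, 100):
    hits = 0
    for _ in range(n_sims):
        group1 = rng.normal(0.0, 1.0, n)   # control group
        group2 = rng.normal(d, 1.0, n)     # group with a true effect of d SDs
        _, p = stats.ttest_ind(group1, group2)
        hits += p < alpha                  # count significant results
    print(f"n per group = {n:3d}  estimated power = {hits / n_sims:.2f}")
```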
- ‘Children can learn a second language differently before the age of 7 than after.’ Is this statement: (1 pt)
- A two-tailed hypothesis
- A one-tailed hypothesis
- A null hypothesis
- A non-falsifiable hypothesis
- Under a null hypothesis, a sample value yields a p-value of .15. Which of the following statements is true? (1 pt)
- This finding is statistically significant at the .01 level of significance.
- This finding is not statistically significant.
- This finding is statistically significant at the .05 level of significance.
- This finding is statistically significant at the .001 level of significance.
- Why is the standard error important? (1 pt)
- It tells us the precise value of the variance within the population.
- It gives you a measure of how well your sample parameter represents the population value.
- It is unaffected by outliers.
- It is unaffected by the distribution of scores.
- What is the basic equation for all statistical models? In words, what does this equation mean? (2 pts)
The outcome is equal to the model plus error. In other words, the data we observe can be predicted from the model we choose to fit to the data, plus some amount of error.
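Written out, this is the standard textbook form of the statement; the regression version on the second line is only an example of one such model:

```latex
\text{outcome}_i = (\text{model}) + \text{error}_i
\qquad \text{e.g., in simple regression: } Y_i = b_0 + b_1 X_i + \varepsilon_i
```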
- A 95% confidence interval is: (1 pt)
- The range of values of the statistic which probably contains the true value of the statistic in the population.
- The range of values of the statistic that we can be 95% confident contains a significant effect in the population.
- The range of values of the statistic which we can be 95% certain does not contain the true population effect.
- The range of values of the statistic which we can be 5% confident contains a significant effect in the population.
- What is an effect size? If you have a particularly large sample, why might it be important to calculate effect sizes in addition to test statistics? (2 pts)
An effect size is an objective and usually standardized measure of the magnitude of an observed effect. With a particularly large sample, even a trivial effect can produce a significant test statistic, so calculating effect sizes alongside test statistics gives a more accurate picture, because effect sizes are not inflated by sample size in the way test statistics are. Effect sizes are also standardized, which allows them to be compared across studies.
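A small simulated sketch of that point, using entirely made-up data (a true difference of only 0.02 SD between two very large groups):

```python
# Sketch: with a very large sample, a tiny effect can be statistically
# significant, but the standardized effect size stays small.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n = 100_000                                   # very large sample per group
a = rng.normal(0.00, 1.0, n)
b = rng.normal(0.02, 1.0, n)                  # true difference of only 0.02 SD

t, p = stats.ttest_ind(a, b)
pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
cohens_d = (b.mean() - a.mean()) / pooled_sd  # standardized effect size

print(f"p = {p:.4f}  (will usually be 'significant' with n this large)")
print(f"Cohen's d = {cohens_d:.3f}  (still a trivial effect)")
```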
- The Kolmogorov–Smirnov test can be used to test: (1 pt)
- Whether group variances are equal.
- Whether group means differ.
- Whether scores are normally distributed.
- Whether scores are measured at the interval level.
- Which of these variables would be considered not to have met the assumptions of parametric tests based on the normal distribution? (1 pt)
- Reaction time (in seconds)
- Cognitive ability score (scores range from 0-100)
- Temperature
- Gender
- What does the graph below indicate about the normality of our data? (1 pt)
- The P-P plot reveals that the data deviate substantially from normal.
- The P-P plot reveals that the data are normal.
- The P-P plot reveals that the data are strongly negatively skewed.
- The P-P plot reveals that the data are strongly positively skewed.
- We predict an outcome variable from some kind of model. That model is described by one or more ______variables and ______that tell us something about the relationship between the predictor and outcome variable. (1 pt)
- parameter, outcome variables
- predictor, parameters
- dependent, predictors
- outcome, estimates
- What does the assumption of independence mean? (1 pt)
- This assumption means that none of your independent variables are correlated.
- This assumption means that you must use an independent design rather than a repeated-measures design.
- This assumption means that the errors in your model are not related to each other.
- This assumption means that the residuals in your model are not independent.
- Looking at the table below, which of the following statements is the most accurate? (1 pt)
- For the level of musical skill, the data are heavily negatively skewed.
- For the number of hours spent practising, there is an issue with kurtosis.
- For the number of hours spent practising, the data are fairly positively skewed.
- For the number of hours spent practising, there is not an issue with kurtosis.
- Looking at the table below, which of the following statements is correct? (1 pt)
- Levene’s test was significant, F(1, 118) = 0.93, p = .007, indicating that the assumption of homogeneity of variance had been met.
- Levene’s test was non-significant, F(1, 118) = 0.01, p = .93, indicating that the assumption of homogeneity of variance had been met.
- Levene’s test was non-significant, F(1, 118) = 0.01, p = .93, indicating that the assumption of homogeneity of variance had been violated.
- Levene’s test was significant, F(1, 118) = 0.01, p = .93, indicating that the assumption of homogeneity of variance had been violated.
- What does the central limit theorem tell us about the relationship between sample size and the sampling distribution of a parameter? (2 pts)
The central limit theorem tells us that, regardless of the shape of the population distribution, parameter estimates based on samples from that population will have an approximately normal sampling distribution, provided the samples are big enough.
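A short simulation sketch of this idea, using an assumed, clearly non-normal (exponential) population and arbitrary sample sizes:

```python
# Sketch of the central limit theorem: sample means from a skewed
# population look increasingly normal as the sample size grows.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

for n in (5, 30, 200):
    # 5,000 sample means, each based on n draws from a skewed population
    means = rng.exponential(scale=1.0, size=(5000, n)).mean(axis=1)
    skewness = stats.skew(means)
    print(f"n = {n:3d}  skew of sampling distribution = {skewness:.2f} "
          f"(closer to 0 = more normal), SE ~ {means.std(ddof=1):.3f}")
```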
- Kevin has 4 extreme scores at the positive end of his distribution that are causing his data to be positively skewed. To fix this problem, Kevin deletes these 4 scores and carries on with his analyses. Is this the appropriate way for Kevin to have handled these scores? Why or why not? Name one other way that he could have handled these scores. (3 pt)
Deleting scores outright is only justified if there is reason to think they came from a different population or were entered incorrectly, so this was not the most appropriate way for Kevin to have handled his outliers. Instead, Kevin could have winsorized his data, which substitutes each outlier with the highest value that is not an outlier.
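A minimal sketch of winsorizing as described above; the scores are made up, and the four largest values simply stand in for Kevin's four extreme scores:

```python
# Sketch of winsorizing: instead of deleting the 4 extreme scores,
# replace them with the highest score that is not an outlier.
import numpy as np

scores = np.array([4, 5, 5, 6, 6, 7, 7, 7, 8, 8, 9, 31, 34, 38, 40], dtype=float)

n_outliers = 4                                   # the 4 extreme positive scores
order = np.argsort(scores)
outlier_idx = order[-n_outliers:]                # indices of the 4 largest scores
highest_non_outlier = scores[order[-n_outliers - 1]]

winsorized = scores.copy()
winsorized[outlier_idx] = highest_non_outlier    # substitute, don't delete

print("original:  ", scores)
print("winsorized:", winsorized)
```

(scipy.stats.mstats.winsorize offers a percentage-based version of the same idea.)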
- Which of the following statements about Pearson’s correlation coefficient is not true? (1 pt)
- It can be used as an effect size measure.
- It can be used on ranked data.
- Its value ranges from 0 to 1.
- The correlation between two variables A and B is .12 with a significance of p < .01. What can we conclude? (1 pt)
- That there is a substantial relationship between A and B.
- That there is a small relationship between A and B.
- That variable A causes variable B.
- None of the above
- Aurelia wants to determine the correlation between two variables: cognitive ability and job performance. Cognitive ability was measured using a well-established test, and scores range from 0-200. Job performance was measured by having the general manager rank employees from 1-100 in terms of their performance over the last 6 months. What type of correlation should Aurelia use? Why? (2 pts)
Aurelia should use Spearman's rho. Spearman's rho is a non-parametric statistic based on ranked data, and job performance here is measured as ranks; it is also useful when data are not normally distributed. Compared with Kendall's tau, there is no indication of a small data set with many tied ranks in this sample, which suggests Spearman's rho is the better measure to use in this situation.
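A brief sketch of how this correlation could be computed; the cognitive ability scores and performance ranks are simulated, not real data:

```python
# Sketch of Spearman's rho for rank-based job-performance data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n = 30
cognitive_ability = rng.integers(60, 200, size=n)             # hypothetical test scores
# Hypothetical performance ranks loosely related to ability (rank 1 = best).
performance_rank = stats.rankdata(-cognitive_ability + rng.normal(0, 40, n))

rho, p = stats.spearmanr(cognitive_ability, performance_rank)
# A negative rho here simply reflects that rank 1 = best performer.
print(f"Spearman's rho = {rho:.2f}, p = {p:.3f}")
```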
- The relationship between two variables partialling out the effect that a third variable has on one of those variables can be expressed using a: (1 pt)
- Bivariate correlation
- Point-biserial correlation
- Partial correlation
- Semi-partial correlation
- List and explain 2 reasons why causality cannot be inferred from correlation. (2 pts)
Correlation does not imply causality; it can only show that a relationship exists between two variables. First, there is the third-variable problem: some other variable (one that was not measured) may be responsible for the observed relationship. Second, there is the problem of directionality: the correlation alone gives no way to determine which variable causes which.
- Which correlation coefficient would you use to look at the correlation between gender and time spent on the phone talking to your mother? (1 pt)
- The point-biserial correlation coefficient, rpb
- The biserial correlation coefficient, rb
- Pearson’s correlation coefficient, r
- Kendall’s correlation coefficient, τ
- How do you determine if your data is curvilinear? If your data is curvilinear, can you use a Pearson’s r correlation? Why or why not? (3 pts)
To determine whether your data are curvilinear, one approach is to plot them (for example, in a scatterplot) and inspect the shape of the relationship visually. If the data are curvilinear, Pearson's r should not be used, because it assumes the relationship between the variables is linear; a curvilinear relationship violates that assumption, so r would not properly describe the association.
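A small simulated sketch of why the graph matters: data with an obvious inverted-U relationship can yield a Pearson's r near zero. All of the data below are generated purely for illustration.

```python
# Sketch: curvilinear data, Pearson's r, and the diagnostic scatterplot.
import numpy as np
from scipy import stats
import matplotlib
matplotlib.use("Agg")            # render off-screen so the script runs anywhere
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 200)
y = -x**2 + rng.normal(0, 1, x.size)     # strong inverted-U relationship

r, p = stats.pearsonr(x, y)
print(f"Pearson's r = {r:.2f} (near zero despite an obvious relationship)")

plt.scatter(x, y, s=10)                  # the scatterplot is the diagnostic
plt.xlabel("x")
plt.ylabel("y")
plt.savefig("curvilinear_check.png")     # inspect this plot for curvature
```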
- What do the results in the table below show? (1 pt)
| | | Work productivity | Time spent on Facebook |
| --- | --- | --- | --- |
| Work productivity | Pearson’s correlation | 1.000 | –.94 |
| | Sig. (2-tailed) | . | .000 |
| | N | 100 | 100 |
| Time spent on Facebook | Pearson’s correlation | –.94 | 1.000 |
| | Sig. (2-tailed) | .000 | . |
| | N | 100 | 100 |
- In a sample of 100 people, there was a strong negative but non-significant relationship between work productivity and time spent on Facebook, r = –.94, p > .001.
- In a sample of 100 people, there was a non-significant negative relationship between work productivity and time spent on Facebook, r = –.94, p < .001.
- In a sample of 100 people, there was a strong negative relationship between work productivity and time spent on Facebook, r = –.94, p < .001.
- In a sample of 100 people, there was a weak negative relationship between work productivity and time spent on Facebook, r = –.94, p < .001.
- Looking at the table below, which variables were the most strongly correlated? (1 pt)
| | | Work ethic | Annual income | IQ |
| --- | --- | --- | --- | --- |
| Work ethic | Pearson’s correlation | 1.000 | .72 | .66 |
| | Sig. (2-tailed) | . | .001 | .000 |
| | N | 550 | 550 | 550 |
| Annual income | Pearson’s correlation | .72 | 1.000 | .47 |
| | Sig. (2-tailed) | .000 | . | .03 |
| | N | 550 | 550 | 550 |
| IQ | Pearson’s correlation | .66 | .47 | 1.000 |
| | Sig. (2-tailed) | .000 | .03 | . |
| | N | 550 | 550 | 550 |
- Annual income and IQ
- Work ethic and annual income
- Work ethic and IQ
- None of the variables were significantly correlated with one another
- How is the coefficient of determination calculated? What does the coefficient of determination tell us in terms of variance? (2 pts)
The coefficient of determination (R²) is calculated by squaring the correlation between the two variables. R² is a measure of the amount of variability in one variable that is shared by the other; it tells us how much of the variability in the two variables is shared (which can be expressed as a percentage). R² should not be mistaken for the variance in one variable that is accounted for, in a causal sense, by the other, because that interpretation implies causality.
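A tiny worked example of that calculation, using a hypothetical correlation of .72:

```python
# Sketch of calculating R^2 from a correlation coefficient.
r = 0.72                       # hypothetical Pearson correlation
r_squared = r ** 2             # coefficient of determination
print(f"R^2 = {r_squared:.3f} -> about {r_squared * 100:.1f}% shared variability")
```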
- A psychologist was interested in whether the amount of news people watch (minutes per day) predicts how depressed they are (from 0 = not depressed to 7 = very depressed). What does the standardized beta tell us in the output? (1 pt)
- As news exposure increases by 1 standard deviation, depression decreases by 0.224 of a standard deviation.
- As news exposure decreases by 0.224 standard deviations, depression increases by 1 standard deviation.
- As news exposure increases by 1 minute, depression decreases by 0.224 units.
- As news exposure decreases by 0.224 minutes, depression increases by 1 unit.
- A consumer researcher was interested in what factors influence people's fear responses to horror films. She measured gender and how much a person is prone to believe in things that are not real (fantasy proneness). Fear responses were measured too. In this table, what does the value 847.685 represent? (1 pt)
- The reduction in the error in predicting fear scores when fantasy proneness is added to the model
- The total error in predicting fear scores when both gender and fantasy proneness are included as predictors in the model
- The improvement in prediction of fear resulting from including both gender and fantasy proneness as predictors in the model
- The improvement in prediction of fear resulting from adding fantasy proneness to the model
- A psychologist was interested in whether the amount of news people watch predicts how depressed they are. In this table, what does the value 4.404 represent? (1 pt)
- The ratio of how much the prediction of depression has improved by fitting the model, compared to how much variability there is in depression scores
- The ratio of how much error there is in the model, compared to how much variability there is in depression scores
- The proportion of variance in depression explained by news exposure
- The ratio of how much the prediction of depression has improved by fitting the model, compared to how much error still remains
- Looking at this plot showing the zpred x zresid values for the outcome variable (depression), does there appear to be a problem with homoscedasticity? Why or why not? (2 pts)
There does appear to be a problem with homoscedasticity in this data set: the spread of depression scores is different at each level of news exposure, so the residuals fan out rather than forming an even band around zero, which indicates heteroscedasticity.
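For illustration, here is a sketch of how a zpred x zresid plot like this one can be produced; the data are simulated so that the error spread grows with news exposure (i.e., heteroscedasticity is deliberately built in), and all variable names and numbers are assumptions:

```python
# Sketch: fit a simple regression, standardize the predicted values and
# residuals, and plot them; a funnel shape signals heteroscedasticity.
import numpy as np
import matplotlib
matplotlib.use("Agg")
import matplotlib.pyplot as plt

rng = np.random.default_rng(5)
news_minutes = rng.uniform(0, 120, 200)
depression = 1 + 0.03 * news_minutes + rng.normal(0, 0.2 + 0.02 * news_minutes)

# Simple least-squares fit with one predictor
b1, b0 = np.polyfit(news_minutes, depression, 1)
predicted = b0 + b1 * news_minutes
residuals = depression - predicted

zpred = (predicted - predicted.mean()) / predicted.std(ddof=1)
zresid = (residuals - residuals.mean()) / residuals.std(ddof=1)

plt.scatter(zpred, zresid, s=10)
plt.axhline(0, linewidth=1)
plt.xlabel("Standardized predicted values (zpred)")
plt.ylabel("Standardized residuals (zresid)")
plt.savefig("zpred_zresid.png")   # a funnel shape here signals heteroscedasticity
```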
- Which of the following statements about the t-statistic in regression is not true? (1 pt)
- The t-statistic provides some idea of how well a predictor predicts the outcome variable.
- The t-statistic can be used to see whether a predictor variable makes a statistically significant contribution to the regression model.
- The t-statistic is equal to the regression coefficient divided by its standard deviation.
- The t-statistic tests whether the regression coefficient, b, is equal to 0.