Comments on mid-term exam

Grading.

Each 5-minute question was worth 10 points.

Marks for question 7: a) 4 points, b) 10 points, c) 6 points, d) 10 points.

Total possible points: 150.

Each student's lowest 10-point question score was then deducted (7a and 7c were combined to form one 10-point question here). The remaining total was computed as a percent out of 140 points.
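The adjustment described above can be sketched in Python (the scores and question count below are hypothetical):

```python
# Sketch of the grading adjustment described above (hypothetical scores).
# Fifteen 10-point questions (with 7a and 7c combined into one), for a
# maximum of 150 points; the lowest score is dropped, leaving 140.
def adjusted_percent(question_scores):
    """Drop the lowest 10-point question score, then scale to 140 points."""
    total = sum(question_scores) - min(question_scores)
    return 100 * total / 140

scores = [9, 10, 7, 8, 10, 9, 6, 10, 8, 9, 10, 7, 9, 10, 8]
print(round(adjusted_percent(scores), 1))  # 88.6
```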

The class average was 80%.

Comments on questions where several students had problems

Q1. An operational definition usually incorporates an estimate of whether the number of cases is statistically greater than the seasonal average. The examples shown in class used the confidence interval.
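As an illustrative sketch only (not necessarily the exact method shown in class), a normal approximation to the Poisson count gives a quick confidence-interval check of whether the observed number of cases exceeds the seasonal average; all numbers are invented:

```python
import math

# Illustrative sketch: is the observed case count statistically greater
# than the seasonal average? A 95% CI around the observed count is built
# with a normal approximation to the Poisson distribution (variance ≈
# the count itself). The counts below are hypothetical.
def exceeds_seasonal_average(observed, seasonal_average, z=1.96):
    """True if the lower 95% CI bound for the count exceeds the average."""
    lower = observed - z * math.sqrt(observed)
    return lower > seasonal_average

print(exceeds_seasonal_average(40, 25))  # lower bound ≈ 27.6 → True
print(exceeds_seasonal_average(30, 25))  # lower bound ≈ 19.3 → False
```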

Q3. Surveys: some students interpreted "survey" as a survey instrument (e.g., a questionnaire). I did not deduct marks for this, but the term "survey" normally refers to a study design, not an instrument.

Precision and bias in a survey refer to the estimate resulting from the survey (e.g., prevalence), not to the instruments/measures used in the survey.

Please note that confounding is not an issue in a survey whose primary purpose is to estimate the prevalence of a condition, not to assess the association between an exposure and an outcome.

Q4b. We discussed in class that responsiveness is not necessarily classified as a type of validity. Thus, although responsiveness is the correct answer here, I did not deduct points for answers that gave some type of construct validity.

Q5b. Many of you said that sensitivity and specificity are unchanging characteristics of tests. This was once considered to be the case but is now known to be incorrect, as discussed in class and in the Kramer chapter. Spectrum bias affects the sensitivity and specificity of a test. Use of a test in a low-risk population generally decreases sensitivity but increases specificity. The effect on the AUC cannot simply be predicted; it could be the same as or lower than before. Any time a test is used in a new population, even one of similar risk, it is unwise to expect its performance to be the same as in the original studies. There are many reasons for this; the situation is similar to the difference between efficacy and effectiveness.
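The dependence of sensitivity on case mix (the core of spectrum bias) can be illustrated with invented numbers, assuming the test detects severe cases more readily than mild ones:

```python
# Hypothetical illustration of spectrum bias: sensitivity depends on the
# case mix. Suppose the test detects 95% of severe cases but only 60% of
# mild cases. All numbers are invented for illustration.
def sensitivity(n_severe, n_mild, sens_severe=0.95, sens_mild=0.60):
    """Overall sensitivity given the mix of severe and mild cases."""
    detected = n_severe * sens_severe + n_mild * sens_mild
    return detected / (n_severe + n_mild)

# Referral (high-risk) population: mostly severe cases
print(round(sensitivity(80, 20), 3))  # 0.88
# Screening (low-risk) population: mostly mild cases
print(round(sensitivity(20, 80), 3))  # 0.67
```

The test itself has not changed; only the population it is applied to has, yet the measured sensitivity drops.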

Q6b. This is a classic situation in which cluster sampling of households is used. Even without an enumeration of households, a systematic sample can be selected from a "random" starting point.
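A minimal sketch of systematic sampling with a random start, assuming only that households can be visited in some fixed order (the counts are hypothetical):

```python
import random

# Sketch of systematic sampling with a "random" starting point: choose a
# random offset in [0, k), then take every k-th household in walking
# order. The community size and interval below are hypothetical.
def systematic_sample(n_households, k, seed=None):
    rng = random.Random(seed)
    start = rng.randrange(k)          # random starting point in [0, k)
    return list(range(start, n_households, k))

sample = systematic_sample(1000, 25, seed=1)
print(len(sample))  # 40 households, evenly spaced 25 apart
```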

Q7a. This is a cohort study. The cohort consists of heart failure patients admitted to hospital. Exposure is defined as cognitive impairment (yes/no). The outcome is in-hospital mortality.

7b. Methods of analysis included: 1) comparison of crude death rates (the main limitation is that length of stay is not taken into account), and 2) Cox proportional hazards analysis, adjusting for unnamed confounders (this takes length of stay into account by censoring patients at the time of discharge). (Some students discussed alternative study designs here; the question asked about analysis, not design.)

7c. An alternative to the comparison of crude death rates is the use of Kaplan-Meier survival curves, which take length of stay into account by censoring patients at the time of discharge.
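A minimal Kaplan-Meier sketch with invented in-hospital data shows how censoring at discharge enters the estimate:

```python
# Minimal Kaplan-Meier sketch (hypothetical data): patients discharged
# alive are censored at their length of stay, so the survival estimate
# accounts for unequal follow-up time in hospital.
def kaplan_meier(times, died):
    """Return (time, survival) pairs; `died` flags death vs. discharge."""
    at_risk = len(times)
    surv = 1.0
    curve = []
    for t in sorted(set(times)):
        deaths = sum(1 for ti, d in zip(times, died) if ti == t and d)
        if deaths:
            surv *= (at_risk - deaths) / at_risk
            curve.append((t, surv))
        at_risk -= sum(1 for ti in times if ti == t)  # deaths + censored
    return curve

# days in hospital; True = died in hospital, False = discharged (censored)
times = [2, 3, 3, 5, 7, 8, 10]
died  = [True, False, True, True, False, False, True]
print(kaplan_meier(times, died))
```

Each death multiplies the running survival estimate by the fraction surviving among those still at risk; discharged patients simply leave the risk set without contributing a death.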

7d. From 606, bias in a cohort study refers to the estimate of effect determined from the comparison of the exposure groups (i.e., the relative risk or the hazard ratio). It does not refer to the representativeness of the study groups. They may be quite unrepresentative of heart failure patients in the community, but as long as there are no important sources of selection, information, or confounding bias, the estimate of effect will be unbiased.

  • Selection bias: arises in the assembly of the exposure groups. In this example, a different probability of hospital admission for heart failure among patients with and without cognitive impairment could cause selection bias. Similarly, admission of patients at different stages of the disease would result in a different time zero in the two groups, a type of selection bias (e.g., if patients with cognitive impairment are admitted more quickly than those without). Misclassification of exposure could also cause selection bias. Selection bias due to differential attrition in the exposure groups is probably not a problem here, because the outcome would be known in hospitalized patients.
  • Information bias: differential ascertainment of the outcome in the two exposure groups. For example, if patients with cognitive impairment stay in hospital longer, there is greater opportunity to observe the outcome (death).
  • Confounding by factors associated with cognitive impairment and mortality, but not in the causal chain: e.g., age, severity of heart failure, comorbidities.
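The confounding point can be illustrated with invented counts in which the stratum-specific risk ratios are 1, yet the crude comparison suggests an effect because age is associated with both the exposure and the outcome:

```python
# Hypothetical illustration of confounding by age. Within each age
# stratum the exposed and unexposed risks are identical (RR = 1), but
# the crude comparison is distorted because older patients are both more
# likely to be cognitively impaired and more likely to die. All counts
# are invented.
def risk(deaths, n):
    return deaths / n

# stratum: (exposed deaths, exposed n, unexposed deaths, unexposed n)
# young stratum: risk 0.02 in both groups; old stratum: risk 0.10 in both
crude_exposed   = risk(2 + 30, 100 + 300)   # 32/400 = 0.08
crude_unexposed = risk(10 + 5, 500 + 50)    # 15/550 ≈ 0.027

print(round(crude_exposed / crude_unexposed, 2))          # crude RR ≈ 2.93
print(risk(2, 100) / risk(10, 500))                        # young RR = 1.0
print(risk(30, 300) / risk(5, 50))                         # old RR = 1.0
```

Adjusting for age (e.g., by stratification or in the Cox model) would recover the null stratum-specific ratios.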