Telephone administered cognitive behaviour therapy for treatment of obsessive compulsive disorder (OCD): Randomised controlled non-inferiority trial

Did the study ask a clearly focused question?

The population studied was 77 patients suffering from OCD.

The intervention given was either ten one-hour individual sessions with a therapist, face-to-face, or eight thirty-minute individual telephone sessions with a therapist sandwiched between two one-hour face-to-face individual sessions. No attempt is made to separate these two variables, mode of delivery and duration of intervention, making it impossible to know which of the two was (more) effective here. This is what is known as confounding and is a major flaw of this study.

Outcomes considered were scores on the Yale–Brown obsessive compulsive scale and the Beck depression inventory, before and after treatment.

Was this a randomised controlled trial and was it appropriately so?

Noninferiority studies have some important differences from traditional RCTs. The main problem is that they attempt to do that most difficult of scientific things: prove (or demonstrate) a negative. “A well-executed clinical trial that correctly demonstrates the treatments to be similar cannot be distinguished, on the basis of the data alone, from a poorly executed trial that fails to find a true difference.” (Snapinn S. Noninferiority trials. Current Controlled Trials in Cardiovascular Medicine. 2000; 1(1): 19–21)
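The usual way round this problem is a pre-specified non-inferiority margin: the new treatment is accepted as non-inferior only if the whole confidence interval for the between-group difference lies on the acceptable side of that margin. A minimal sketch, using entirely hypothetical Yale–Brown numbers (the paper's actual margin and scores are not reproduced here), where lower scores are better:

```python
import math

def noninferiority_ci(mean_new, mean_std, sd_new, sd_std, n_new, n_std,
                      margin, z=1.96):
    """Return the 95% CI for (new - standard) mean outcome and a verdict.

    Non-inferiority is claimed only if the ENTIRE CI lies below the margin,
    i.e. even the worst plausible difference is clinically unimportant.
    """
    diff = mean_new - mean_std
    se = math.sqrt(sd_new**2 / n_new + sd_std**2 / n_std)
    lo, hi = diff - z * se, diff + z * se
    return (lo, hi), hi < margin

# Hypothetical post-treatment Yale-Brown means and SDs (NOT from the paper):
ci, noninferior = noninferiority_ci(14.2, 13.5, 6.0, 6.0, 36, 36, margin=4.0)
```

Note that the verdict depends entirely on the margin chosen in advance; a wide enough margin would let almost any treatment pass.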

Were participants appropriately allocated to intervention and control groups?

Block randomisation was performed by the therapists and treatment allocation by the principal investigator (PI). It sounds as if clients were randomised to one of two groups at the first visit, and the two groups were allocated to one of the two treatments later by the PI. The participant flow chart clearly describes recruitment and allocation.

The pretest scores on the Beck depression inventory were significantly higher for the telephone arm of the trial.

Were participants, staff and study personnel ‘blind’ to participants’ study group?

Only the researchers (who also acted as the assessors) were claimed to be blinded. For obvious reasons both the clients and the therapists would have been aware in each case of how the intervention was being delivered (face to face or by telephone). Nine of the 72 clients revealed their treatment status to the assessor, and the assessor correctly guessed the treatment status of a further 35 clients, but guessed incorrectly for the remaining 28.

Were all of the participants who entered the trial accounted for at its conclusion?

Participant flow chart shows this clearly. Only four participants failed to complete the intervention, but a further three participants in the face-to-face group failed to complete the six month follow up. (These were clients whose Yale Brown scores immediately after treatment were poorer than average for that group.)

Were the participants in all groups followed up and data collected in the same way?

So far as we can tell, yes, although the fact that the assessors sometimes knew which clients had been in which arm of the trial may have affected their judgement. No indication is given of whether the administration of the assessment tools may have been affected by this knowledge.

Did the study have enough participants to minimise the play of chance?

Yes. The section Sample size and statistical methods describes how the study was powered.
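For a non-inferiority trial, power depends on the chosen margin and the assumed outcome variability rather than on an expected difference. A rough per-group sample-size sketch under standard simplifying assumptions (true difference of zero, one-sided alpha, normal approximation); the SD and margin below are illustrative, not taken from the paper:

```python
import math

def noninferiority_n_per_group(sd, margin, z_alpha=1.645, z_beta=0.842):
    """Approximate per-group n for a non-inferiority trial.

    z_alpha = 1.645 corresponds to one-sided 5% significance;
    z_beta = 0.842 corresponds to 80% power.
    """
    return math.ceil(2 * (z_alpha + z_beta) ** 2 * sd ** 2 / margin ** 2)

# Hypothetical: outcome SD of 6 Yale-Brown points, margin of 4 points.
n = noninferiority_n_per_group(sd=6.0, margin=4.0)
```

With these illustrative figures the formula gives a per-group n in the low tens, broadly consistent with a trial of 77 participants; a tighter margin would inflate the required sample sharply, since n scales with 1/margin².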

How are the results presented and what is the main result?

The Yale Brown OCD scores for each arm of the trial showed a considerable reduction. The Beck depression inventory scores for each arm of the trial also showed a considerable reduction. The baseline scores for the telephone group were higher for this second measure, but the scores for both groups were similar post-treatment. Client satisfaction with the intervention they received was only marginally higher for the face-to-face arm of the trial.

How precise are these results?

No p values or confidence intervals are calculated to compare the effects of the two interventions. Note that in a noninferiority trial a non-significant p value from a conventional superiority test is not, by itself, evidence of equivalence; the standard approach is to report a confidence interval for the between-group difference and check whether it excludes a pre-specified noninferiority margin. Without such an interval it is hard to judge how precise the finding of similarity really is.

Were all important outcomes considered so the results can be applied?

If thirty-minute telephone interventions are found to be about as good as one-hour, face-to-face interventions, it might be worth investigating whether reducing the interventions still further would be equally effective. (Would sending text messages at random intervals saying, “Stop doing that,” work?)

No demographic breakdown of participants was offered. Would telephone interventions be equally acceptable to both genders, to people of different ethnicities and to patients from all socio-economic classes? There might be reasons why this would not be the case.

The study certainly suggests that this might be a fairer way of ensuring that as many patients as possible have access to a limited service.