APPENDIX I.
METHODS of the INTERNATIONAL CONSENSUS CONFERENCE ON ULTRASOUND VASCULAR ACCESS
A. Panel selection
Experts in ultrasound-guided cannulation were identified on the basis of a minimum of two peer-reviewed articles on this topic during the past 10 years, with proportionality determined by a MEDLINE literature search from 1985 to October 2010. Consideration was also given to including members with experience in evidence-based methodology and/or guideline development.
B. Literature search strategy
The literature search was performed in two ways. The first was a systematic search by multiple panel experts, intended to avoid selection bias. The medical subject headings included ‘ultrasound’, ‘central venous access’, ‘arterial cannulation’, ‘vascular access’, ‘peripherally inserted central catheter (PICC)’, ‘complications’, and ‘training’. A professional medical librarian supplemented this first search with a hand search based on articles selected by the expert panel. The second was a systematic search of English-language articles from 1985 to 2010 performed by an epidemiologist assisted by the librarian. The two bibliographies were then compared for thoroughness and consistency.
C. Literature evaluation strategy
The GRADE method comprised two phases in the development of these evidence-based recommendations. This methodology has been detailed previously in the published literature. Fifteen factors are typically considered in the GRADE process; these are illustrated in Table 1.
a. First Phase
The level of evidence quality was scored according to 9 factors summarised in Section B of Table 1. The final classification of evidence quality was divided into 3 levels as summarised in Table 2.
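For illustration only, the point arithmetic implied by Tables 1 and 2 can be sketched as follows. The function names and inputs are hypothetical and are not part of the published GRADE worksheets; the point values follow the table footnotes (randomized trials start at 4 points, observational studies at 2; each downgrader subtracts 1 or 2 points and each upgrader adds 1 or 2).

```python
# Illustrative sketch only: scoring evidence quality with the 9 GRADE factors
# (Table 1, Section B) and mapping the total points to the levels of Table 2.
# Point values follow the table footnotes (RCT = 4 points, observational = 2);
# the function names are hypothetical, not part of the published methodology.

def grade_quality_points(study_design, downgrades=(), upgrades=()):
    """Return the total quality points for one body of evidence.

    study_design: 'rct' or 'observational'
    downgrades:   iterable of -1 or -2 adjustments (design limitations,
                  inconsistency, indirectness, imprecision, publication bias)
    upgrades:     iterable of +1 or +2 adjustments (large effect,
                  dose-response gradient, antagonistic bias)
    """
    start = {"rct": 4, "observational": 2}[study_design]
    return start + sum(downgrades) + sum(upgrades)

def quality_level(points):
    """Map total points to the levels of Table 2 (A/high, B/moderate, C/low or very low)."""
    if points >= 4:
        return "A (high)"
    if points == 3:
        return "B (moderate)"
    if points == 2:
        return "C (low)"
    return "C (very low)"

# Example: a randomized trial downgraded for serious imprecision (-1)
# ends at 3 points, i.e. level B (moderate).
print(quality_level(grade_quality_points("rct", downgrades=[-1])))
```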
b. Second Phase
The transformation of evidence into a recommendation was a function of the panel's evaluation of the 5 factors summarised in Section C of Table 1. The GRADE system has not standardised this decision-making process of the expert panel. In an effort to standardise this step, the methodology committee of this working group selected the RAND Appropriateness Method (RAM). Details of the RAM have been published previously. The combined GRADE-RAM methodology is briefly described in the next section.
D. Expert Panel Activities
The expert panel met in Amsterdam (World Conference on Vascular Access, June 15, 2010) and in Rome (WINFOCUS World Conference-GAVeCeLT Meeting, October 8, 2010). The experts formulated draft recommendations before each conference to serve as a foundation for subsequent discussion and evaluation. The panel was updated through short presentations on the literature search results and their interpretation for drafting of the proposed recommendations. After a single round of face-to-face debate, anonymous voting rounds were conducted, followed by internet-based voting rounds. The voting process required expert judgment using the GRADE factors, such as outcome importance and the evidence-to-recommendation transformers, summarised in Table 1 (Section C) and Table 3. The algorithm in Figure 1 depicts how final agreement or disagreement was rendered, graded by the degree of agreement. This process provided a structured and validated method for the expert panel activities and standardised the statistical methodology for determining the degree of agreement, which served as the foundation for deciding the recommendation grade (weak versus strong).
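The exact agreement algorithm is the one depicted in Figure 1 and is not reproduced here. Purely as a minimal illustrative sketch of a RAM-style aggregation, the panel votes on the 1-9 appropriateness scale could be summarised as below; the median-based regions follow Table 3, while the disagreement threshold and all function names are assumptions introduced for illustration.

```python
# Minimal, illustrative sketch of aggregating panel votes on a 1-9
# appropriateness scale into an agreement region, in the spirit of the
# RAND Appropriateness Method. The algorithm actually used by the panel
# is the one in Figure 1; the threshold and names below are assumptions.
from statistics import median

def vote_region(votes, extreme_panelists=3):
    """Classify a set of 1-9 votes for one draft recommendation.

    votes: integers from 1 (totally disapprove) to 9 (totally approve)
    extreme_panelists: illustrative threshold for flagging disagreement
                       when votes fall in both the 1-3 and 7-9 regions
    """
    m = median(votes)
    low = sum(1 for v in votes if v <= 3)    # disapproval region (1-3)
    high = sum(1 for v in votes if v >= 7)   # approval region (7-9)
    if low >= extreme_panelists and high >= extreme_panelists:
        return "disagreement"
    if m >= 7:
        return "approved (appropriate)"
    if m <= 3:
        return "disapproved (inappropriate)"
    return "uncertain"

# Example: a panel of ten experts voting mostly in the 7-9 region.
print(vote_region([8, 9, 7, 8, 7, 9, 6, 8, 7, 8]))  # -> approved (appropriate)
```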
REFERENCES
1. Elbarbary M, Melniker LA, Volpicelli G (2010) Development of evidence-based clinical recommendations and consensus statements in critical ultrasound field: why and how? Crit Ultrasound J 2:93-95
2. Last accessed 26 October 2011.
3. Brouwers MC, Kho ME, Browman GP, AGREE Next Steps Consortium (2010) AGREE II: advancing guideline development, reporting and evaluation in health care. CMAJ 182:E839-42. Epub 2010 Jul 5
4. Last accessed 26 October 2011.
5. Pasterski V, Prentice P, Hughes I (2010) Impact of the consensus statement and the new DSD classification system. Best Pract Res Clin Endocrinol Metab 24:187-95
6. Franklin R, Jacobs J, Krogmann O (2008) Nomenclature for congenital and paediatric cardiac disease: historical perspectives and The International Pediatric and Congenital Cardiac Code. Cardiol Young 18(Suppl 2):70-80
7. Last accessed 26 October 2011.
8. GRADE guidelines: a new series of articles in the Journal of Clinical Epidemiology (2011) J Clin Epidemiol 64:380-382
9. GRADE guidelines: 1. Introduction—GRADE evidence profiles and summary of findings tables (2011) J Clin Epidemiol 64:383-394
10. GRADE guidelines: 2. Framing the question and deciding on important outcomes (2011) J Clin Epidemiol 64:395-400
11. GRADE guidelines: 3. Rating the quality of evidence (2011) J Clin Epidemiol 64:401-406
12. GRADE guidelines: 4. Rating the quality of evidence—study limitations (risk of bias) (2011) J Clin Epidemiol 64:407-415
13. GRADE guidelines: 5. Rating the quality of evidence—publication bias (2011) J Clin Epidemiol 64:1277-1282
14. GRADE guidelines: 6. Rating the quality of evidence—imprecision (2011) J Clin Epidemiol 64:1283-1293
15. GRADE guidelines: 7. Rating the quality of evidence—inconsistency (2011) J Clin Epidemiol 64:1294-1302
16. GRADE guidelines: 8. Rating the quality of evidence—indirectness (2011) J Clin Epidemiol 64:1303-1310
17. GRADE guidelines: 9. Rating up the quality of evidence (2011) J Clin Epidemiol 64:1311-1316
18. Guyatt G, Rennie D, Meade M (2008) Users' Guides to the Medical Literature: A Manual for Evidence-Based Clinical Practice. McGraw-Hill, New York
19. Guyatt G, Rennie D, Meade M et al, for the GRADE Working Group (2008) GRADE: an emerging consensus on rating quality of evidence and strength of recommendations. Br Med J 336:924-926
20. Brozek N et al, for the GRADE Working Group (2009) Grading quality of evidence and strength of recommendations in clinical practice guidelines: Part 2 of 3. The GRADE approach to grading quality of evidence about diagnostic tests and strategies. Allergy 64:1109-16
21. Last accessed 26 October 2011.
22. Fitch K et al (2001) The RAND/UCLA Appropriateness Method User's Manual. RAND Corporation, Arlington, VA 22202-5050
23. González N et al (2009) Review of the utilization of the RAND appropriateness method in the biomedical literature (1999-2004). Gac Sanit 23:232-7
24. College of Chest Physicians Delphi Consensus Statement (2001) Chest 119
25. Elbarbary M et al (2010) Minimizing panel's bias during guideline development process by combination of GRADE with RAND Appropriateness Method. Unpublished data; personal communication with corresponding author
Table 1: The 15 GRADE factors

Section A. Factor 1: outcome importance
Outcome factor: Critical / Important / Less important / Not important

Section B. Factors 2-10: the 9 GRADE quality factors
Study design (quality starting factor): Randomized trials start as high quality; observational studies start as low quality*
Quality of evidence levels: High / Moderate / Low* / Very low*
The 5 downgraders (quality is lowered if$):
Limitations of design: -1 serious, -2 very serious
Inconsistency: -1 serious, -2 very serious
Indirectness: -1 serious, -2 very serious
Imprecision: -1 serious, -2 very serious
Publication bias: -1 likely, -2 very likely
The 3 upgraders (quality is raised if$):
Large effect: +1 large, +2 very large
Dose response: +1 evidence of a gradient
Antagonistic bias: +1 all plausible confounding would reduce the effect, or +1 would suggest a spurious effect when results show no effect
Total points

$ 1 = move up or down one point-grade (for example, from high to moderate); 2 = move up or down two point-grades (for example, from high to low). RCT = 4 points; observational studies = 2 points.
* Low and very low levels can be combined into one level.

Section C. Factors 11-15: the 5 GRADE transformers
Overall judgment on outcome(s): Critical / Important / Less important
Overall quality of evidence across outcomes: High / Moderate / Low (and very low)
Benefit/cost ratio: Favorable / Uncertain / Unfavorable
Benefit/harm ratio: Favorable / Uncertain / Unfavorable
Certainty about similarity in values/preferences: Highly similar / Uncertain / Widely variable
Table 2: Levels of quality of evidence

Level / Points# / Quality / Interpretation
A / 4 / High / Further research is very unlikely to change our confidence in the estimate of effect or accuracy.
B / 3 / Moderate / Further research is likely to have an important impact on our confidence in the estimate of effect or accuracy and may change the estimate.
C / 2 or less / Low* / Further research is very likely to have an important impact on our confidence in the estimate of effect or accuracy and is likely to change the estimate, OR any estimate of effect or accuracy is very uncertain (very low).

* Level C can be divided into low (points = 2) and very low (points = 1 or less).
# Points are calculated based on the 9 GRADE quality factors (Table 1, Section B).
Table 3: E-to-R table (Evidence-to-Recommendation table)

Draft recommendation: ______
Does the draft recommendation above address a strategy that has more than one outcome? YES / NO†

The 5 transforming factors, with the decision and explanation for each:

1. Multiple outcomes importance
Outcome 1 ______ its rank _____
Outcome 2 ______ its rank _____
Outcome 3 ______ its rank _____
The more important the outcome, the more likely is a strong recommendation.
Decision: the most important outcome is ______
Explanation: rank 9 = extremely important (critical), 1 = extremely unimportant, with 3 regions: 7-9 important, 4-6 less important, 1-3 unimportant.
Your notes:

2. Quality of evidence*
Outcome 1 ______ its evidence quality _____
Outcome 2 ______ its evidence quality _____
Outcome 3 ______ its evidence quality _____
The higher the quality of evidence, the more likely is a strong recommendation.$
Decision: the overall quality across outcomes is High / Moderate / Low
Explanation: if there are multiple outcomes, the overall quality follows that of the most important outcome (e.g. the critical one). If multiple outcomes are of equal importance, follow the lowest quality.
Your notes:

3. Benefit/harm ratio
The larger the difference between the desirable and undesirable consequences, and the greater the certainty around that difference, the more likely is a strong recommendation. The smaller the net benefit and the lower the certainty for that benefit, the more likely is a conditional/weak recommendation.
Decision: 9 / 8 / 7 / 6 / 5 / 4 / 3 / 2 / 1
Explanation: 9 = extremely favorable, 1 = extremely unfavorable, with 3 regions: 7-9 favorable, 4-6 uncertain, 1-3 unfavorable.
Your notes:

4. Benefit/cost ratio
The higher the costs of an intervention and other costs related to the decision (that is, the more resources consumed), the more likely is a conditional/weak recommendation. Are the resources consumed worth the expected benefit?
Decision: 9 / 8 / 7 / 6 / 5 / 4 / 3 / 2 / 1
Explanation: 9 = extremely favorable, 1 = extremely unfavorable, with 3 regions: 7-9 favorable, 4-6 uncertain, 1-3 unfavorable.
Your notes:

5. Degree of certainty about similarity in values/preferences
The smaller the variability, or the greater the certainty, around values and preferences, the more likely is a strong recommendation.
Decision: 9 / 8 / 7 / 6 / 5 / 4 / 3 / 2 / 1
Explanation: 9 = extremely certain of similarity, 1 = extremely certain of variability, with 3 regions: 7-9 expected similarity, 4-6 uncertain, 1-3 expected wide variability in values and preferences.
Your notes:

† If NO (i.e. a single outcome), please list under factors 1 and 2 the previously determined initial ranking of the outcome and its quality level, guided by the initial quality of evidence presented to the panel in the Summary of Findings (SoF) tables (ideally determined by an independent methodologist).
$ Some exceptions may apply.

After assessment of the above 5 factors, please rank your approval (appropriateness) of the above draft recommendation, keeping in mind that 9 = totally approve (extremely appropriate) and 1 = totally disapprove (extremely inappropriate), with 3 regions:
Approval (appropriate) region: 7-9; uncertain region: 4-6; disapproval (inappropriate) region: 1-3.
My vote for the above draft recommendation is: 9 / 8 / 7 / 6 / 5 / 4 / 3 / 2 / 1