Transcript and Presenter's Notes

1
A tool for the classification of study designs in
systematic reviews of interventions and exposures
Meera Viswanathan, PhD, for the University of Alberta EPC
  • AHRQ Conference, September 2009

2
Steering Committee
  • Ken Bond, UAEPC
  • Donna Dryden, UAEPC
  • Lisa Hartling, UAEPC
  • Krystal Harvey, UAEPC
  • P. Lina Santaguida, McMaster EPC
  • Karen Siegel, AHRQ
  • Meera Viswanathan, RTI-UNC EPC

3
Background
  • EPC reports, particularly comparative
    effectiveness reviews, are increasingly including
    evidence from nonrandomized and observational
    designs
  • In systematic reviews, study design
    classification is essential for study selection,
    risk of bias assessment, the approach to data
    analysis (e.g., pooling), interpretation of
    results, and grading the body of evidence
  • Assignment of study designs is often given
    inadequate attention

4
Objectives
  1. Identify tools for classification of studies by
    design
  2. Select a classification tool for evaluation
  3. Develop guidelines for application of the tool
  4. Test the tool for accuracy and inter-rater
    reliability

5
Objective 1: Identification of tools
  • 31 organizations/individuals contacted
  • 11 organizations/individuals responded
  • 23 classification tools received
  • 10 tools selected for closer evaluation
  • 1 tool selected for modification and testing
6
Objective 2: Tool selection
  • Steering Committee ranked tools based on:
      • Ease of use
      • Unique classification for each study design
      • Unambiguous nomenclature and decision
        rules/definitions
      • Comprehensiveness
      • Potential to identify threats to validity and
        to guide strength of inference
      • Developed by a well-established organization

7
Objective 3: Tool development
  • Three top-ranked tools:
      • Cochrane Non-Randomised Studies Methods Group
      • American Dietetic Association
      • RTI-UNC
  • Incorporated positive elements of other tools
  • Developed a glossary

8
Objective 4: Testing round 1
  • Overall agreement (30 studies, 6 testers): κ = 0.26 (fair)
  • Graduate-level training complete (3 testers): κ = 0.38 (fair)
  • Graduate-level training in progress (3 testers): κ = 0.17 (slight)

Item agreement, n (%)
  • 6/6 testers agreed: 0 (0)
  • 5/6 testers agreed: 7 (23)
  • 4/6 testers agreed: 5 (17)
  • 3/6 testers agreed: 9 (30)
  • 2/6 testers agreed: 8 (27)
  • No agreement: 1 (3)
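
The overall agreement figures above are multi-rater kappa statistics. As a
rough illustration of how such a value is computed, here is a minimal Python
sketch of Fleiss' kappa, a standard statistic for agreement among more than
two raters; the presentation does not say which kappa variant was used, so
both the choice of Fleiss' kappa and the toy ratings below are assumptions
for illustration only.

from collections import Counter

def fleiss_kappa(ratings):
    """ratings: one list per study, one design label per tester."""
    items = len(ratings)
    raters = len(ratings[0])
    categories = sorted({label for item in ratings for label in item})
    counts = [Counter(item) for item in ratings]  # n_ij per study

    # Observed agreement per item: P_i = (sum_j n_ij^2 - m) / (m(m-1))
    p_bar = sum(
        (sum(c[cat] ** 2 for cat in categories) - raters)
        / (raters * (raters - 1))
        for c in counts
    ) / items

    # Chance agreement: P_e = sum_j p_j^2, where p_j is the overall
    # share of all ratings falling in category j
    totals = {cat: sum(c[cat] for c in counts) for cat in categories}
    p_e = sum((t / (items * raters)) ** 2 for t in totals.values())

    return (p_bar - p_e) / (1 - p_e)

# Hypothetical data: 4 studies, each classified by 3 raters
designs = [
    ["RCT", "RCT", "RCT"],
    ["cohort", "cohort", "case-control"],
    ["RCT", "cohort", "cohort"],
    ["case-control", "case-control", "case-control"],
]
print(f"kappa = {fleiss_kappa(designs):.2f}")  # -> kappa = 0.50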
9
Objective 4: Testing round 1 (continued)
  • No clear patterns in disagreements
  • Disagreements occurred at all decision points
    (see the sketch below)
  • Tool vs. studies as sources of disagreement
  • Variations in application of the tool
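
To make "decision point" concrete: tools of this kind classify a study by
walking it through a short sequence of yes/no questions. The sketch below is
purely hypothetical; the actual questions, their order, and the design labels
used by the tested tool are not reproduced in this presentation, so every
question and label here is an illustrative assumption.

def classify(randomized, investigator_allocated, comparison_group,
             before_and_after):
    """Walk a simplified design-classification path; each parameter
    is one yes/no decision point (all hypothetical)."""
    if randomized:
        return "randomized controlled trial"
    if investigator_allocated:
        return "non-randomized controlled trial"
    if comparison_group:
        return "cohort study"
    if before_and_after:
        return "before-after study"
    return "other/descriptive (further decision rules needed)"

# A mis-read at any single decision point changes the final label,
# which is one way small differences in applying the tool lower kappa.
print(classify(randomized=False, investigator_allocated=True,
               comparison_group=True, before_and_after=False))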

10
Objective 4: Reference standard
  • Overall agreement (30 studies, 3 raters): κ = 0.33 (fair)

Item agreement, n (%)
  • 3/3 raters agreed: 7 (23)
  • 2/3 raters agreed: 14 (47)
  • No agreement: 9 (30)
11
Objective 4: Testing round 2
  • Overall agreement (15 studies, 6 testers): κ = 0.45 (moderate)
  • Graduate-level training complete (3 testers): κ = 0.45 (moderate)
  • Graduate-level training in progress (3 testers): κ = 0.39 (fair)

Item agreement, n (%)
  • 6/6 testers agreed: 3 (20)
  • 5/6 testers agreed: 2 (13)
  • 4/6 testers agreed: 6 (40)
  • 3/6 testers agreed: 2 (13)
  • 2/6 testers agreed: 2 (13)
  • No agreement: 0 (0)
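
The qualitative labels attached to the kappa values in these tables (slight,
fair, moderate) match the widely used Landis and Koch benchmarks for
interpreting kappa; the slides do not cite the benchmark explicitly, so the
cut-points in this small helper are an assumption, though a standard one.

def kappa_label(kappa):
    """Map a kappa value to the conventional Landis-Koch band."""
    if kappa < 0.00:
        return "poor"
    if kappa <= 0.20:
        return "slight"
    if kappa <= 0.40:
        return "fair"
    if kappa <= 0.60:
        return "moderate"
    if kappa <= 0.80:
        return "substantial"
    return "almost perfect"

for k in (0.17, 0.26, 0.45):  # values reported in rounds 1 and 2
    print(k, kappa_label(k))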
12
Discussion
  • Moderate reliability, low agreement with the
    reference standard
  • Studies vs. tool as sources of disagreement:
      • tool not comprehensive (e.g., quasi-experimental
        designs)
      • studies challenging (e.g., sample of difficult
        studies, poor study reporting)
  • To optimize agreement and reliability:
      • training in research methods
      • training in use of the tool
      • pilot testing
      • decision rules

13
Next Steps
  • Test within a real systematic review
  • Further testing for specific study designs
  • Further evaluation of differences in reliability
    by education, training, and experience

14
Acknowledgments
  • Ahmed Abou-Setta
  • Liza Bialy
  • Michele Hamm
  • Nicola Hooton
  • David Jones
  • Andrea Milne
  • Kelly Russell
  • Jennifer Seida
  • Kai Wong
  • Ben Vandermeer (statistical analysis)

15
Questions?
University of Alberta EPC, Edmonton, Alberta, Canada