Archive: U.S. Department of Health and Human Services
Archive: Agency for Healthcare Research and Quality www.ahrq.gov

This information is for reference purposes only. It was current when produced and may now be outdated. Archive material is no longer maintained, and some links may not work. Persons with disabilities having difficulty accessing this information should contact us at: https://info.ahrq.gov. Let us know the nature of the problem, the Web address of what you want, and your contact information.

Please go to www.ahrq.gov for current information.

Development of a Tool to Evaluate the Quality of Non-randomized Studies of Interventions or Exposures

Slide presentation from the AHRQ 2009 conference.

On September 15, 2009, Nancy D. Berkman, PhD, and Meera Viswanathan, PhD, made this presentation at the 2009 Annual Conference. Select to access the PowerPoint® presentation (334 KB).


Slide 1

Slide 1. Development of a Tool to Evaluate the Quality of Non-randomized Studies of Interventions or Exposures.

Development of a Tool to Evaluate the Quality of Non-randomized Studies of Interventions or Exposures

Presented by
Nancy D Berkman, PhD & Meera Viswanathan, PhD
Presented at: AHRQ 2009 Annual Conference
Bethesda, Maryland, September 15, 2009

 

Slide 2

Slide 2. Acknowledgements

Acknowledgements

  • Project funding provided by
    • Phase 1:
      • Grant from RTI Independent Research and Development (IR&D) funds
    • Phase 2:
      • Contract from Agency for Healthcare Research and Quality (AHRQ), U.S. Department of Health and Human Services, through the Evidence-based Practice Centers program (EPC)

 

Slide 3

Slide 3. Context for the Project

Context for the Project

  • Increasing demand to include non-randomized studies in systematic literature reviews and comparative effectiveness reviews to capture
    • The effects of interventions or exposures on a more broadly defined population than can be observed through RCTs
    • Topics where RCTs would be logistically or ethically inappropriate
    • Longer term outcomes and harms (side effects)
  • The trade-off for wider applicability of findings among observational studies, compared with RCTs, is a potentially wider range of sources of bias, including in selection, performance, detection of effects, and attrition.

 

Slide 4

Slide 4. Background: Rating the Quality of Non-randomized Studies

Background: Rating the Quality of Non-randomized Studies

The quality (internal validity) of each study included in a review needs to be evaluated:

  • Well-established criteria and instruments exist for evaluating the quality of RCTs, but not non-randomized (observational) studies
  • PIs conducting systematic reviews generally lack access to validated and adaptable instruments for evaluating the quality of observational studies.
  • Each new review often develops its own quality rating tool, "reinventing the wheel" and leading to inconsistent standards within and across reviews

 

Slide 5

Slide 5. Project Goals

Project Goals

To create a practical and validated tool for evaluating the quality of non-randomized studies of interventions or exposures that:

  • Reflects a comprehensive theoretical framework: captures all relevant domains
  • Is broadly applicable: can be used "off the shelf" by different PIs
  • Is modifiable: can be adapted to different topic areas
  • Is easy to use and understand: can be used by reviewers with varying levels of expertise or experience
  • Is validated: users can be confident of their evaluation of study quality
  • Advances the methodology in the field
  • Is disseminated widely

 

Slide 6

Slide 6. Methods: Phase 1

Methods: Phase 1

Item development

  • Reviewed the literature on the evaluation of the quality of observational studies
  • Collected quality review items used in existing tools to evaluate non-RCTs through
    • Published literature
    • 90 AHRQ-sponsored EPC reviews
  • Categorized all potential items into the 12 quality domains identified in Evaluating non-randomized intervention studies (Deeks et al., 2003)

 

Slide 7

Slide 7. Methods: Phase 1 (continued)

Methods: Phase 1 (continued)

Item Bank development

  • Selected the best items for measuring each of the included domains
  • Modified selected items where necessary to ensure that critical domains were included and to improve readability
  • Developed a pre-specified set of responses
  • Developed explanatory text to be used by PIs and abstractors to individualize as well as standardize interpretation

 

Slide 8

Slide 8. Methods: Phase 2

Methods: Phase 2

  • Technical Expert Panel input
    • Conceptual framework to ensure that we included all relevant domains
    • Face validity
  • Cognitive interviews with potential users
    • Readability
    • Conceptualization
  • Validation
    • Content/face validity
    • Inter-rater reliability testing
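
Inter-rater reliability for categorical quality ratings of this kind is commonly summarized with a chance-corrected agreement statistic such as Cohen's kappa. The presentation does not specify the statistic the authors used, so the following is a minimal illustrative sketch, with hypothetical ratings, of how kappa could be computed for two abstractors rating the same set of items:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Proportion of items on which the two raters agree
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement from each rater's marginal category frequencies
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical "yes"/"partial"/"no" ratings on 10 items by two abstractors
a = ["yes", "yes", "no", "partial", "yes", "no", "yes", "partial", "no", "yes"]
b = ["yes", "no", "no", "partial", "yes", "no", "yes", "yes", "no", "yes"]
print(round(cohens_kappa(a, b), 3))  # → 0.672
```

A kappa near 1 indicates near-perfect agreement beyond chance; values near 0 indicate agreement no better than chance.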

 

Slide 9

Slide 9. Conceptual Underpinnings of the Instrument

Conceptual Underpinnings of the Instrument

Evaluation of quality can rely on either a description of methods or an assessment of validity and precision

  • Methods description approach
    • Follows the reporting structure of many manuscripts
    • Relies less on judgment than on reporting
  • Validity and precision approach
    • What we really care about
    • More challenging to evaluate
    • Greater reliance on judgment

 

Slide 10

Slide 10. Domains for Quality Evaluation Approaches

Domains for Quality Evaluation Approaches

Methods description approach

  • Background/context
  • Sample definition and selection
  • Intervention/exposure
  • Creation of treatment groups
  • Follow-up
  • Specification of outcomes
  • Analysis: comparability of groups
  • Analysis: outcomes
  • Interpretation

Validity and precision approach

  • Selection bias
  • Performance bias
  • Information bias
  • Detection bias
  • Attrition bias
  • Reporting bias
  • Precision

 

Slide 11

Slide 11. Tool Results

Tool Results

  • Comprehensive: bank of 39 questions
  • Modifiable: includes relevant items appropriate for all non-randomized study types
  • Easy to use: instructions for PIs and abstractors to assist in appropriate interpretation of questions. Example:

What is the level of detail in describing the intervention or exposure? [PI: specify which details need to be stated, e.g., intensity, duration, frequency, route, setting, and timing of intervention/exposure. For case-control studies, consider whether the condition, timing, frequency, and setting of symptoms are provided in the case definition]
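
An item-bank entry of this kind bundles the question text, a pre-specified response set, the quality domain it maps to, and instructions for PIs and abstractors. The following sketch is a hypothetical representation (the field names and response set are illustrative assumptions, not the authors' actual schema) showing how such entries could support generating design-specific instruments:

```python
# Hypothetical item-bank entry; names and values are illustrative only.
item = {
    "id": "intervention_detail",
    "domain": "Intervention/exposure",
    "question": ("What is the level of detail in describing the "
                 "intervention or exposure?"),
    "responses": ["High", "Medium", "Low", "Not reported"],
    "pi_instructions": ("Specify which details need to be stated, e.g., "
                        "intensity, duration, frequency, route, setting, "
                        "and timing of intervention/exposure."),
    "applicable_designs": ["cohort", "case-control", "cross-sectional"],
}

def items_for_design(item_bank, design):
    """Filter the bank down to items applicable to a given study design."""
    return [i for i in item_bank if design in i["applicable_designs"]]

print([i["id"] for i in items_for_design([item], "case-control")])
# → ['intervention_detail']
```

Selecting items by study design in this way is one plausible mechanism behind the proposed Phase III goal of generating design- and topic-specific instruments from the item bank.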

 

Slide 12

Slide 12. Next Steps

Next Steps

  • Finalize inter-rater reliability results
  • Publish findings and disseminate the tool
  • Proposed Phase III:
    • Design specific validation including inter-rater reliability testing by study type
    • Reduce the number of questions needed to address specific domains
    • Develop a web-based platform for generating design- and topic-specific instruments from the item bank
Current as of December 2009
Internet Citation: Development of a Tool to Evaluate the Quality of Non-randomized Studies of Interventions or Exposures. December 2009. Agency for Healthcare Research and Quality, Rockville, MD. http://archive.ahrq.gov/news/events/conference/2009/berkman-viswanathan/index.html

 


 
