Testimony on Comparative Effectiveness Research
Michael E Stuart, Delfini Group
Delivered Via Electronic Mail
Comments Regarding The Comparative Effectiveness Program
Comparative effectiveness research (CER) has great potential, but there is a high likelihood that study methodology will not be appropriately applied to key questions and we will have more of the same: results from non-valid primary studies and systematic reviews. Currently, much of the published medical literature is neither valid nor clinically useful; estimates range from 70% to 95%. Yet the results of these studies are being applied in clinical care, and both policy and clinical decisions are being made based on fatally flawed studies. I will focus on therapy only. (Somewhat different considerations apply to screening and diagnostic testing.)
Example: We have seen numerous database and observational studies published for therapeutic interventions (e.g., CABG vs. angioplasty vs. medical therapy). These studies cannot provide useful information regarding effectiveness in reducing mortality. Why? Because the study design (observational study) is not reliable for drawing cause-and-effect conclusions.
There is also a huge problem even when randomized controlled trials (RCTs) are used to compare interventions. RCTs need to be valid (probably true) before a reader accepts their results (and, of course, their conclusions). Most readers of RCTs do not know how to assess the validity of studies and therefore accept the results of low-quality studies (i.e., those with a high likelihood of bias). Criteria such as the Jadad scale are inadequate, yet many groups keep using them to assess internal validity.
Recent examples pointing this out include a meta-analysis of antioxidants showing that high-bias RCTs found no effect on mortality while low-bias RCTs found increased mortality, and another meta-analysis restricted to low-bias RCTs showing that perioperative beta-blockers in non-cardiac surgery result in more harms than benefits (and yet Medicare has a performance measure encouraging perioperative beta-blockers in non-cardiac surgery). So the big danger in CER is the high likelihood that shortcuts will be taken and low-quality studies will result.
We do not need more unreliable science. What we need to inform decisions are valid and clinically useful RCTs and systematic reviews, not fatally flawed RCTs and fatally flawed systematic reviews. Unfortunately, we currently have investigators and readers who willingly accept low-quality comparisons.
Michael E Stuart MD
President and Medical Director, Delfini Group
Clinical Asst Professor, UW School of Medicine
6831 31st Ave N.E.
Seattle, Washington 98115