Hospitals share quality-of-care report cards with administrators and key staff but doubt their value
Report cards that States use to evaluate the quality of hospital care are often based on the percentage of hospital patients with a particular diagnosis who experienced an adverse outcome, adjusting for their risk factors. For instance, hospitals with a higher-than-expected number of deaths among cardiac surgery patients would be rated poorly. Hospital leaders view these State quality-of-care report cards with little enthusiasm and say they have limited usefulness, according to a study supported by the Agency for Health Care Policy and Research (HS08574).
Only 8 percent of hospital leaders in California and 22 percent in New York rated their State report "very good or excellent" in facilitating quality improvement. Two key concerns were the excessive delay before patient outcomes data were released and the lack of specific information about how to modify processes of care to improve patient outcomes.
The California Hospital Outcomes Project (CHOP) began in 1991 to produce annual reports on risk-adjusted outcomes at acute care hospitals using coded hospital discharge abstracts. New York's Cardiac Surgery Reporting System (CSRS) began in 1989 with the creation of a special clinical data system for cardiac surgery. Patrick S. Romano, M.D., M.P.H., and his colleagues at the University of California, Davis, School of Medicine surveyed leaders of the 398 hospitals listed in the 1996 CHOP report and the 31 hospitals listed in the 1996 CSRS report; the response rate was 73 percent in California and 87 percent in New York.
Over 90 percent of hospitals in both States shared or discussed the outcomes report with high-level administrators and quality improvement staff. Leaders at hospitals with the lowest mortality rates rated the overall quality of the CHOP report significantly higher, found it more useful, and understood its risk-adjustment methods better than did leaders at hospitals with moderate or high mortality rates. Hospital leaders tend to blame the messenger when their facilities are rated poorly and argue that the risk-adjustment methods are inadequate, explain the researchers.
In conclusion, Dr. Romano and his colleagues point out that the New York report was rated significantly better and judged to be more useful by hospital leaders than the California report. Although the reasons for this difference are unclear, it does have policy implications for other States and organizations that are trying to determine how best to report outcomes.
See "Grading the graders: How hospitals in California and New York perceive and interpret their report cards," by Dr. Romano, Julie A. Rainwater, Ph.D., and Deirdre Antonius, in Medical Care 37(3), pp. 295-305, 1999.