Chapter 5. Discussion

Health Care Efficiency Measures: Identification, Categorization, and Evaluation


Publication Bias

Our literature search procedures were extensive and included canvassing experts from academia, industry, and our peer reviewers regarding studies we may have missed. However, we can never be sure that we identified all the relevant published literature. We also excluded studies based on non-U.S. data sources, primarily because we judged that studies based on U.S. data would be most relevant. It is possible, however, that adding the non-U.S. literature would have identified additional measures of potential interest.

Study Quality

An important limitation common to systematic reviews is the quality of the original studies. A substantial amount of work has been done to identify criteria for assessing the design and execution of studies of the effectiveness of health care interventions, and these criteria are routinely used in systematic reviews of interventions. However, we are unaware of any agreed-upon criteria for assessing the design or execution of a study of a health care efficiency measure. We did evaluate whether studies assessed the scientific soundness of their measures (and found this mostly lacking).



We found little overlap between the measures published in the peer-reviewed literature and those in the grey literature, suggesting that the driving forces behind research and practice result in very different choices of measures. We found gaps in several measurement areas: no established measures of social efficiency, few measures that evaluated health outcomes as the output, and few measures of providers other than hospitals and physicians.

Efficiency measures have been subjected to relatively few rigorous evaluations of their performance characteristics, including reliability (over time and across entities), validity, and sensitivity to the methods used. Measurement scientists would prefer that steps be taken to improve these metrics in the laboratory before implementing them in operational uses. Purchasers and health plans are willing to use measures without such testing, under the belief that the measures will improve with use.

The lack of consensus among stakeholders on defining and accepting efficiency measures, which motivated this study, remained evident in the interviews we conducted. An ongoing process to develop consensus among those demanding and using efficiency measures would likely improve the products available for use.


Future Research

Research is already underway to evaluate vendor-developed tools for scientific soundness, feasibility, and actionability. For example, we identified studies being done or funded by the General Accounting Office, MedPAC, CMS, the Department of Labor, the Massachusetts Medical Society, and the Society of Actuaries. A research agenda is needed in this area to build on this work. We summarize some of the key areas for future research; the order is not intended to signal any particular priority.

Filling Gaps in Existing Measures

Several stakeholders recognize the importance of using efficiency and effectiveness metrics together, but relatively little research has been done on options for constructing such combined approaches to measurement.

We found few measures of efficiency that used health outcomes as the output measure. Physicians and patients are likely to be interested in measures that account for the costs of producing desirable outcomes. The challenges of doing this parallel the challenges of using outcome measures in other accountability applications; thus, a program of research designed to advance both areas would be welcome.

We found a number of gaps in the availability of efficiency measures within the classification system of our typology. For example, we found no measures of social efficiency, which may reflect our restriction to U.S.-based research. Nonetheless, such measures could advance discussions related to equity and resource allocation as various cost containment strategies are evaluated.

Evaluating and Testing Scientific Soundness

A variety of methodological questions should be investigated to better understand the degree to which efficiency measures produce reliable and valid information. Key issues include whether there is enough information to evaluate performance (e.g., sample sizes); whether the information is reliable over time and across different purchaser data sets (e.g., does one get the same result when examining performance in the commercial versus the Medicare market?); methods for constructing appropriate comparison groups for physicians, hospitals, health plans, and markets; methods for assigning responsibility (attribution) for costs to different entities; and the use of different methods for assigning prices to services.

Evaluating and Improving Feasibility

One area of investigation is the opportunity to create easy-to-use products based on methods such as DEA or SFA. This would require work to bridge from tools used in academic research to tools that could be used in operational applications.
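To indicate what such a tool would compute, the sketch below implements the standard input-oriented, constant-returns DEA model as a linear program: each unit's efficiency score is the largest proportional input reduction consistent with a convex combination of peer units producing at least its outputs. This is a minimal illustration of the general technique, not a description of any vendor product evaluated in this report.

```python
import numpy as np
from scipy.optimize import linprog

def dea_input_efficiency(X: np.ndarray, Y: np.ndarray) -> np.ndarray:
    """Input-oriented CCR (constant returns to scale) DEA scores.
    X: (n_units, n_inputs) input quantities; Y: (n_units, n_outputs) outputs.
    Returns efficiency scores in (0, 1], where 1 means frontier-efficient."""
    n, m = X.shape
    s = Y.shape[1]
    scores = []
    for o in range(n):
        # Decision variables: [theta, lambda_1, ..., lambda_n]
        c = np.zeros(n + 1)
        c[0] = 1.0  # minimize theta, the proportional input contraction
        A_ub = np.zeros((m + s, n + 1))
        b_ub = np.zeros(m + s)
        # Inputs: sum_j lambda_j * x_ij <= theta * x_io
        A_ub[:m, 0] = -X[o]
        A_ub[:m, 1:] = X.T
        # Outputs: sum_j lambda_j * y_rj >= y_ro
        A_ub[m:, 1:] = -Y.T
        b_ub[m:] = -Y[o]
        res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                      bounds=[(0, None)] * (n + 1), method="highs")
        scores.append(res.x[0])
    return np.array(scores)
```

For instance, a unit producing one output from one unit of input scores 1.0, while a peer using two units of input for the same output scores 0.5. An operational tool would need to wrap this core calculation with data validation, peer-group construction, and reporting, which is precisely the bridging work described above.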

Another set of investigations involves identifying data sources or variables useful for expanding the inputs and outputs measured (e.g., measuring capital requirements or investment, or accounting for teaching status or charity care).

Making Measures More Actionable

Considerable research needs to be conducted to develop and test tools that decision makers can use to improve health care efficiency (e.g., identifying the relative drivers of costs, best practices in efficient care delivery, and feedback and reporting methods) and to make choices among providers and plans. Research could also identify areas for national focus on reducing waste and inefficiency in health care. The relative utility of measuring and reporting on efficiency versus other methods (e.g., Toyota's Lean approach, Six Sigma) would also be worth assessing when setting national priorities.

Page last reviewed April 2008
Internet Citation: Chapter 5. Discussion: Health Care Efficiency Measures: Identification, Categorization, and Evaluation. April 2008. Agency for Healthcare Research and Quality, Rockville, MD.