Evaluating the Impact of Value-Based Purchasing: A Guide for Purchasers
Why Evaluate Value-Based Purchasing Activities?
With health insurance premiums on the rise (Gabel et al., 2001) and cost pressures continuing to mount, both public and private purchasers are increasingly concerned about developing sound purchasing strategies, and chief financial officers and corporate executives are demanding evidence of the impact of those strategies on both quality and costs. As a result, perhaps the most important reason to evaluate the impact of value-based purchasing (VBP) activities is to produce objective and defensible estimates that will allow you to determine how best to use scarce organizational resources to maximize the value of health care. Armed with detailed analysis and sound information from evaluations, you will be able to decide whether to continue what you are doing, try other tactics, or pursue a completely different strategy.
The evaluation of VBP activities is also important to developing a body of evidence that all purchasers can draw on when choosing among purchasing strategies and specific activities. To the extent that purchasers are willing to use the methods described in this guide to evaluate VBP activities and to share the results of these evaluations with other purchasers, the entire purchasing community should benefit by knowing how effective different VBP activities have been in practice and the conditions under which those evaluations have been conducted.
A third reason to evaluate VBP activities is to demonstrate to health plans and providers whether the initiative is making a difference. Since many activities require that health care organizations contribute money, staff time, or other resources, purchasers often need to be able to justify that investment. Also, if you plan to incorporate VBP activities into your negotiations with plans or providers, evaluation results can help you explain your demands or defend any information you share about their cost or quality performance relative to others.
Of course, VBP activities vary widely among purchasers. And purchasers will vary in the degree of rigor and the amount of resources that they wish to apply to evaluation. Because of this substantial heterogeneity, this guide is not intended as a detailed manual but as a broad overview of the options for evaluation design and the many issues that value-based purchasers need to consider. Purchasers interested in pursuing any of these options may want to seek out more detailed sources as well as partners with experience in evaluating purchasing activities.
Assessing the Return on Investment
Some purchasers, especially private employers, have become increasingly interested in determining the return on their investment in value-based purchasing activities. With smaller budgets forcing them to choose among competing uses for their scarce resources, more and more employers will need a way to decide whether to initiate or continue supporting VBP efforts.
A calculation of return on investment (ROI) is possible. Conceptually, all final outcome variables could be expressed as dollar values, which would allow you to compute an ROI where the investment would be represented by the cost of the VBP activity and the return would be measured by the changes in health care and other costs plus the value of changes in health care quality and other outcomes of interest, such as productivity. This approach is analogous to a cost-benefit analysis in which all outcome variables are expressed in dollar terms.
However, to make this calculation, you have to be willing and able to associate a financial value with a change in quality measures, such as an improvement in enrollee satisfaction, better access to care, or a reduction in medical errors. You also have to decide how broadly or narrowly to measure the "return." For example, if your VBP activity succeeded in improving the performance of all local hospitals, do you look only at the impact on your employees (few of whom may have used the hospital in a given period of time) or do you consider the benefits to the community? Similarly, if your program targeted improvements in care for a chronic disease, your return may be measurable not in the form of lower premiums but in lower absenteeism and higher productivity.
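To make the cost-benefit framing above concrete, the sketch below computes an ROI for a hypothetical VBP activity. All dollar figures are invented for illustration; in practice, the hard part is producing defensible estimates of cost savings and, especially, the monetized value of quality gains, not the arithmetic itself.

```python
# Hypothetical illustration only: the figures below are invented, and
# monetizing quality gains is the difficult step discussed in the guide.
def roi(activity_cost, cost_savings, monetized_quality_gains):
    """Return on investment as a ratio: (total return - investment) / investment."""
    total_return = cost_savings + monetized_quality_gains
    return (total_return - activity_cost) / activity_cost

# Example: a $50,000 report-card initiative yielding $40,000 in reduced
# claims costs plus quality improvements the purchaser values at $25,000.
example = roi(50_000, 40_000, 25_000)
print(f"ROI: {example:.0%}")  # prints "ROI: 30%"
```

A broadly defined "return" (community benefits, productivity) raises the second argument; a narrowly defined one lowers it, which is exactly the scope decision described above.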
Steps for Evaluating Value-Based Purchasing Activities
This section discusses what value-based purchasers need to do to evaluate the impact of their activities and provides a straightforward overview of the research designs that purchasers may want to consider and the factors that need to be weighed in their decisions.
A thorough and useful evaluation of VBP activities requires the following five steps:
- Define your value-based purchasing activities and their goals.
- Determine the necessity, appropriateness, and feasibility of an evaluation.
- Choose a research design to assess the impact of VBP activities.
- Implement the research.
  - Task 1: Identify appropriate measures.
  - Task 2: Collect the data.
  - Task 3: Analyze the data.
- Summarize the results and interpret implications for purchasing activities.
Although this guide discusses each step sequentially, these steps should not be regarded as independent. Quite often, the decisions made in one step are influenced by choices in other steps. For example, although the steps imply that you would choose the evaluation design before selecting measures and collecting data, the choice of study design is often driven by what data are available to the purchaser. (Go to Step 3, "Choose a Research Design To Assess the Impact of VBP Activities.")
Applying the Principles of Program Evaluation
This guide applies the principles of program evaluation to the evaluation of VBP activities. The term "program evaluation" refers to the process of using research techniques to systematically investigate the impact or worth of programmatic activities, interventions, or policy initiatives. Program evaluation is a standard component of the curriculum in many policy and business schools in the United States.
While program evaluation is typically conducted in the context of public policy, the literature and methods are well established and the approaches are easily adapted to the needs of value-based purchasers. The steps and advice offered in this guide reflect the widely accepted principles of program evaluation.
If you would like more information, the list of references at the end of this guide suggests relevant textbooks and other useful references. However, keep in mind that none of these resources focuses exclusively on the VBP activities conducted by health care purchasers.
Step 1. Define Your Value-Based Purchasing Activities and Their Goals
The first step is to list your VBP activities and link them to the final and intermediate outcomes that each activity is meant to achieve. While it sounds simple, this crucial step is often complicated by two factors. First, not all VBP activities can be expected to have separately identifiable outcomes. And second, purchasers may have trouble deciding which of those outcomes really matter to them.
First Challenge: Sorting Out Related Activities
VBP activities are frequently composed of multiple elements that collectively are intended to have an effect on outcomes. For example, before contracting with health plans on behalf of employees, many employers issue detailed requests for proposals (RFPs) to obtain information and bids. These RFPs usually contain multiple provisions and requirements, which together may be intended to produce several outcomes, such as adequate access to health care providers and acceptable levels of quality at the minimum possible cost. However, it may be difficult to independently link each of the separate provisions in the RFP to its intended outcomes. For outcomes such as access, the VBP activity might be defined as the entire RFP rather than its separate provisions. For other outcomes, such as the quality of preventive care, the VBP activity might be defined as a specific provision, such as the requirement to report HEDIS® data.
The process of linking activities to intended outcomes will help you determine whether the elements of VBP activities should be combined or examined separately. Generally speaking, elements that collectively are designed to achieve common outcomes should be lumped together as a single VBP activity for purposes of evaluation, while elements that are expected to have individual effects on outcomes should be considered separate VBP activities.
You may want to begin by documenting all of the major and minor VBP activities in which the purchasing entity is engaged and their associated objectives. While you may already know which activity you want to evaluate, this task will enable you to identify other VBP practices that could affect your results. Typically, the leadership of an employer's benefits department or the procurement department for a government purchaser is the starting point for information on VBP efforts. However, coordination among the units responsible for health care purchasing within an organization is crucial: Many large employers engage in regional purchasing and contracting; and many government purchasing programs, such as State Medicaid programs, involve multiple agencies such as the health department, insurance department, and department of social services.
Recognizing that each purchaser may define VBP activities differently, Table 2 provides a list of some of the most common VBP activities purchasers are currently engaged in and examples of the intermediate and final outcomes that these activities could influence. To ensure that the evaluation process is feasible and that the process produces useful information, be sure that the outcomes you identify for each activity are measurable. For more detail on this issue, refer to Step 4, which specifically addresses the challenges associated with measuring intermediate and final outcomes for the evaluation of VBP activities.
Second Challenge: Deciding What Matters
Your definition of relevant outcomes for VBP activities will also depend on what matters the most to the purchasing organization. One issue is your time horizon; if you have a long-term perspective for your VBP initiatives, you may be able to focus on objectives that would not be feasible or observable in the short term. Another question is whether you want to adopt a narrow perspective that considers only outcomes that directly affect you as a purchaser, or a broad perspective that also includes indirect outcomes (i.e., the activity's impact on the larger community). If you choose a narrow scope, you may consider certain outcomes to be irrelevant. For example, if you wanted to know the relationship between a VBP activity and the employer's costs, changes in employees' out-of-pocket costs would not be a pertinent outcome. Similarly, a definition of relevant health outcomes from the employer perspective might emphasize lost productivity due to poor health as opposed to a more general measure of employee health status. Under a business definition, employers would only value health outcomes to the extent that poor outcomes hurt the employer in the labor market or through lost productivity.
Expert Advice: Define Outcomes Broadly
Experts in cost-effectiveness evaluations (for example, Gold et al., 1996) recommend conducting these assessments from the societal perspective, which entails the broadest inclusion criteria for measuring costs and outcomes. Even if you prefer a relatively narrow scope, you may want to define outcomes broadly if only because one of the primary objectives of providing health benefits is to attract and retain workers. From that point of view, effects related to employee co-payments or health outcomes such as mortality and morbidity have value beyond their direct effects on firm productivity.
Step 2. Determine the Necessity, Appropriateness, and Feasibility of an Evaluation
The purpose of an evaluation is to provide information that purchasers can use to design and fine-tune their purchasing strategies. For instance, purchasers might use the information gained from evaluations of VBP activities to improve their position in negotiations with contractors and vendors, to account for the level of organizational resources allocated to VBP activities, and to determine whether to expand current levels of activity.
For that reason, the decision to conduct an evaluation should be driven by the likelihood that the findings and lessons learned from the evaluation can and will significantly inform future decisions. Thus, the second step in the evaluation process involves an internal assessment of the value of formally evaluating the various VBP activities identified in the first step. To estimate the "value," you would want to consider both the likely benefit of the information expected from the evaluation as well as the costs of conducting the evaluation. You can then proceed with the evaluation process for those activities that provide sufficient utility given the costs. Although this process appears rather formal, usually purchasers can quickly narrow the list of all VBP activities to a subset of VBP activities for which a formal evaluation would be appropriate.
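One informal way to structure this screening step is an expected-value calculation: the chance that the evaluation's findings would actually change a purchasing decision, times the value of making the better decision, minus the cost of the evaluation itself. The sketch below is a hypothetical illustration of that logic, not a method prescribed by this guide; the activity names and all figures are invented.

```python
# Hypothetical screening calculation; all probabilities and dollar values
# are invented rough estimates, which is all this step usually requires.
def evaluation_net_value(p_change_decision, value_of_better_decision, evaluation_cost):
    """Expected net value of evaluating an activity: the chance the findings
    change a decision, times the value of that improved decision, minus the
    cost of conducting the evaluation."""
    return p_change_decision * value_of_better_decision - evaluation_cost

candidates = {
    "report card": evaluation_net_value(0.4, 200_000, 30_000),   # ~ +50,000: worth doing
    "RFP revision": evaluation_net_value(0.1, 100_000, 30_000),  # ~ -20,000: skip
}
worth_evaluating = [name for name, value in candidates.items() if value > 0]
```

Even with crude estimates, this kind of back-of-the-envelope screen usually narrows the full list of VBP activities to the small subset for which a formal evaluation is worthwhile, as the step describes.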
In addition to this exercise, there are a number of issues that purchasers should try to resolve before going forward with an evaluation. This section discusses some questions that can help you decide whether an evaluation would be both feasible and useful.
How Well Was the VBP Activity Implemented?
Despite the best of plans, VBP activities do not necessarily happen the way you envision them. It is important to assess whether your VBP activity was actually implemented as planned, because the answer influences whether and how the evaluation should be conducted and how the results should be interpreted. For example, suppose a purchaser developed and issued a health plan report card for employees in an effort to steer enrollment to better-performing plans, but because of budgetary concerns and production delays, only a handful of employees received or had access to the report card during the open enrollment period.
In this case, the purchaser must first assess how the VBP activity was implemented; a presumption that the activity was implemented appropriately could lead the purchaser to conclude that report cards could not be effective, which may not be true. The lack of observed effectiveness could be due to the shortcomings of the implementation of the VBP activity rather than the inefficacy of the activity itself. The purchaser can then consider whether to postpone the evaluation or modify it in order to work with what did happen. In the illustration presented here, the purchaser might choose to pursue a more limited evaluation focused on those employees who did see the report card.
How Strong a Relationship Do You Want To See?
Before embarking on an evaluation, you will need to decide what kind of relationship between VBP activities and outcomes you want to see. Depending on the activity and how you expect to use the findings, it may be sufficient to simply establish a correlation between a VBP activity and an outcome, without really knowing how strong that correlation is or why it exists. In other cases, you might require evidence of a causal relationship. Generally speaking, greater rigor from a research perspective requires more resources (possibly including outside consultants) and more time. If neither of those is available, a definitive study may not be an option.
Is It Too Soon To See an Effect?
The research designs discussed in this guide typically assume that the effects of the VBP program are realized immediately after the program is initiated. But it can take years for an effect to emerge. In addition, if the VBP activity continues over a period of years, the effects may be cumulative. As a result, negative findings may simply reflect an evaluation that occurred too early.
To incorporate lags into the research design, you will need multiple years of data as well as a hypothesis regarding the appropriate lag, although the lag time can sometimes be determined statistically. For researchers, the primary concern when investigating lagged effects is that the longer the lag between the VBP intervention and the hypothesized effects, the greater the chance that a confounding factor or event is responsible for the finding. Moreover, many VBP programs evolve over time. It can be difficult for an evaluation to determine whether an effect detected in year 2 is a lagged effect of the year 1 intervention or a contemporaneous effect of the year 2 program.
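The idea of determining a lag statistically can be sketched in a few lines: correlate the outcome series against the intervention indicator at several candidate lags and see which fits best. The data below are invented for illustration, and a real evaluation would use a proper regression model with controls for confounding factors, for the reasons just described.

```python
# Illustrative sketch with hypothetical data: find the lag (in years) at
# which a VBP intervention indicator best tracks a yearly quality measure.
import statistics

def _corr(xs, ys):
    """Pearson correlation of two equal-length numeric sequences."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den if den else 0.0

def best_lag(intervention, outcome, max_lag):
    """Correlate outcome[t] with intervention[t - lag] for each candidate lag
    and return the lag with the strongest (absolute) correlation."""
    scores = {lag: _corr(intervention[: len(intervention) - lag], outcome[lag:])
              for lag in range(max_lag + 1)}
    return max(scores, key=lambda lag: abs(scores[lag]))

# Hypothetical 8-year series: the quality score starts improving 2 years
# after the VBP activity (1 = active) begins.
activity = [0, 0, 0, 1, 1, 1, 1, 1]
quality = [70, 70, 70, 70, 70, 78, 80, 81]
lag = best_lag(activity, quality, max_lag=3)  # -> 2
```

Note that the longer the `max_lag` you search over, the more years of data you need and, per the caution above, the greater the chance that a confounder rather than the intervention explains the apparent effect.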
A related problem is that effects may wane over time. In some cases, VBP activities have a larger effect initially because the participants start out enthusiastic and the new activity has garnered substantial attention. Over time, individuals and organizations may lapse into less attentive and active pursuit of the program's goals.
Step 3. Choose a Research Design To Assess the Impact of VBP Activities
A research design is a detailed plan for the systematic investigation of a phenomenon. In much evaluation research, the primary purpose is to investigate the impact of some intervention, program, service, or set of activities on one or more dependent or outcome variables that can be observed. A number of different research designs lend themselves to the task of assessing whether a VBP activity had some short-term impact and/or achieved a longer-term outcome of interest. Each represents a somewhat different way of gauging the degree to which the intervention led to a positive or negative change in the variables of interest.
Broadly speaking, research designs can be categorized into two groups: those that use qualitative methods and those that use quantitative methods. These approaches are different, but they can complement each other and are often used in combination. The exact distinction between the two approaches is less important than understanding that both have their own strengths and weaknesses and that each is appropriate under certain conditions. This section of the guide describes common qualitative and quantitative research designs and methods that are useful for evaluating VBP activities and offers some guidance for choosing among them.
Sources of Information on Research Designs
For a more formal discussion of research designs, please refer to the following resources and other readings listed in the bibliography:
Babbie E. The Practice of Social Research, 8th ed. Belmont, CA: Wadsworth Publishing Company; 1998.
Bailey DM. Research for the Health Professional: A Practical Guide, 2nd ed. Philadelphia, PA: F.A. Davis Company; 1997.
Campbell DT, Stanley JC. Experimental and Quasi-Experimental Designs for Research. Dallas, TX: Houghton Mifflin Company; 1963.
Fink A. Evaluation Fundamentals: Guiding Health Programs, Research and Policy. Newbury Park, CA: Sage Publications; 1993.
Milstein RL, Wetterhall SF, et al. Framework for Program Evaluation in Public Health. Morbidity and Mortality Weekly Report 1999;48(No. RR-11).
Patton MQ. Utilization-Focused Evaluation, 3rd ed. Thousand Oaks, CA: Sage Publications; 1997.
Shortell S, Richardson WC. Health Program Evaluation. Saint Louis, MO: Mosby; 1978.
Yeaton W, Camberg L. Program Evaluation for Managers. Boston, MA: Management Decision and Research Center. Health Services Research and Development Services, Office of Research and Development, Department of Veterans Affairs; 1997.
Qualitative Research Designs
Qualitative research methods play a valuable role in evaluations by shedding light on uncertain situations, and revealing and clarifying important relationships that quantitative methods often miss or ignore (Sofaer, 1999). For instance, through qualitative research, evaluators can learn whether employees truly understand the material presented in report cards, whether that material is relevant to their information needs, and why health plans may not be using the information in the ways that purchasers expected. These methods can also support the development of testable hypotheses that evaluators can then explore further by collecting and analyzing quantitative data. Finally, qualitative research can help to explain findings from quantitative studies. For example, data from a quantitative analysis might show no improvements in quality a year after an intervention, but a qualitative analysis conducted at the same time or soon afterwards might reveal changes in attitudes, behaviors, or processes that are likely to lead to measurable improvements.
While there are a variety of qualitative methods, two research designs are especially relevant for evaluating VBP activities:
- Case studies.
- Focus groups.
This section offers a brief overview of each of these approaches. For additional information, please consult the citations provided in the bibliography.
Case Studies. Case studies involve one or more short but intensive exposures to one or more settings (such as a city) or groups of organizations linked by some common activity or experience. For example, evaluators could conduct site visits to all of the health plans involved in a VBP activity to learn how each organization is responding to the purchaser's initiative and how the activity affects the different departments within each organization (e.g., quality improvement managers, physicians, nurse managers).
Although a case study can focus on only one setting or entity, most studies identify and investigate a sample of cases that are believed to be particularly important on a relevant dimension (e.g., the health plans enroll at least 10 percent of the employer's covered lives) and appear to lend themselves to useful comparisons and insights (e.g., the plans vary in geography or in their care delivery models). The primary tools used to analyze cases include interviews with key informants, structured observations, and the collection and analysis of documents.
The choice of sample and the methods used to conduct case studies play an important role in determining the usefulness of this approach. For more detail on the available options, consult Babbie, 1998; Ragin, 1999; and Sofaer, 1999.
Particularly in the early stages of a VBP activity, case studies can be useful for identifying challenges and assessing the likelihood of success. For example, as noted earlier, the Leapfrog Group is a coalition of purchasers that is trying to reduce medical errors, increase patient safety, and improve the quality of health care by, among other things, encouraging hospitals to use computerized prescription order entry (CPOE) systems. Since Leapfrog members recognize that the adoption of these systems takes time, one approach they are using to assess whether this VBP activity will be successful is to conduct site visits to learn about hospitals' implementation plans for CPOE.
Advantages of this approach. Case studies are useful for developing hypotheses about the relationship between VBP activities and intended outcomes. While they cannot establish causality, they can provide insights valuable for decisionmaking purposes. Through case studies, for example, purchasers could learn that a VBP activity focused on health plans is making little progress because it lacks an educational component that targets physicians. Depending on the design and objectives, case studies can also be conducted quickly and inexpensively to provide an initial status report on the effects of a purchaser's initiatives.
Drawbacks of this approach. Thorough case studies that involve multiple site visits and interviews can be time consuming and costly. Moreover, though all study designs are subject to researcher bias, it is harder to identify and control for such bias in case studies. Finally, because cases are often selected non-randomly, they typically do not represent a larger population. Consequently, the findings may not be generalizable to other cases outside the sample.
Example: Case Studies
Use of Performance Measures for Quality Improvement in Managed Care Organizations
Description of the Research Activity. Researchers conducted case studies to better understand how managed care plans use performance measures for quality improvement and to identify the strengths and weaknesses of standardized performance measures that are currently being used, such as HEDIS® measures and CAHPS® measures. The results are intended to be of interest to purchasers that value health plans that engage in quality improvement activities for the benefit of all plan members.
Evaluators. The evaluation was done by academic researchers from Pennsylvania State University, the RAND Corporation, and AHRQ.
Research Design. The evaluation involved case studies of a non-random sample of 24 managed care plans in four States: Pennsylvania, Maryland, Kansas, and Washington.
Methods. After developing and pilot testing a set of interview protocols tailored to each type of respondent (pilot tests were conducted with four plans in New Jersey), evaluators developed a single interview instrument that could be administered to all respondents and then used this instrument to conduct exploratory qualitative research. The questions covered a variety of topics related to organizational and operational characteristics that affect the clinical and service quality improvement activities of the health plan.
Two study authors conducted separate 1-hour tape-recorded telephone interviews with multiple respondents from each health plan. They interviewed 42 respondents for an overall response rate of 58.3 percent, with a mean of 1.8 respondents per plan. Respondents included chief executive officers, medical directors, and quality improvement directors. One interviewer drafted notes from the tape-recorded interviews and gave these notes to the other interviewer to review for accuracy. The interviewers then used the final version of the notes to create a detailed spreadsheet entry for each interview. The spreadsheet facilitated frequency counts and calculations for quantifiable data and aided in sorting and grouping interviews for qualitative analysis. To develop the reported findings, the authors of the study held several discussions to achieve a consensus.
Results. The evaluators found that all of the participating managed care organizations used performance measures for quality improvement, but the degree and sophistication of use varied. Many of the respondent plans used performance measures to target quality improvement initiatives, evaluate current performance, establish goals for quality improvement, identify the root cause of problems, and monitor performance. The results suggest that performance measurement is useful for improving quality in addition to informing external constituents.
However, additional research is needed to understand how to maximize the benefit of measurement, and to quantify the degree of variation in quality improvement activities and the organizational and operational characteristics associated with successful quality improvement programs.
Advantages and Disadvantages of the Evaluation Strategy. The primary advantage of the exploratory case study design was the ability to obtain in-depth information about quality improvement strategies and programs for a sample of managed care organizations. Since no existing database with such information exists, and since a significant amount of detail was required, individual phone interviews with multiple members of the same organization proved to be very valuable. The information obtained through the interviews and secondary data analysis led to the formulation of hypotheses that can be more formally tested.
The major disadvantage of this approach was the inability to generalize the results to a larger population of managed care organizations. The case study design could not adequately control for important organizational and market characteristics that might have a differential impact on organizations.
Source: Scanlon DP, Darby C, Rolph E, et al. The Role of Performance Measures for Improving Quality in Managed Care Organizations. Health Services Research 2001;36(3):619-41.