Ambulatory Care Quality Alliance: Invitational Meeting

Report of the Performance Measurement Workgroup

Kevin Weiss, American College of Physicians

Kevin Weiss noted that the Ambulatory Care Quality Alliance (AQA) has accomplished a great deal in the past year, including endorsing a starter set of 26 ambulatory care measures. These include, he said:

  • Prevention measures, including screenings, smoking advice, and immunizations.
  • Acute and chronic care measures, including diabetes, congestive heart failure, coronary artery disease, asthma, depression, and maternal and prenatal care.
  • Measures developed by the National Committee for Quality Assurance (NCQA) and the American Medical Association (AMA) Physician Consortium.
  • Measures endorsed by the National Quality Forum (NQF).

Our current focus, said Weiss, includes measures addressing specialty care, efficiency (cost of care), and patient experience of care, as well as composite measures.

Weiss outlined what he hoped to accomplish during the meeting:

  • Review and endorse the revised AQA Parameters for Selecting Ambulatory Care Performance Measures.2
  • Review and endorse the Principles of Efficiency Measures.
  • Review the cost of care measures discussion paper.
  • Discuss selected conditions for future analysis.
  • Review and endorse the patient experience of care survey (A-CAHPS).
  • Review the draft document on the use of registry data to assess physician performance.

Weiss said that the intent of the workgroup has been to reflect the cost of care discussions as well as comments made regarding surgical and procedural care. He added that a key issue has been reducing cost of care and effectiveness to a single measure. Finally, Weiss stressed that the workgroup did not see these measures as controversial when it voted to send them to the AQA for approval.

AQA Parameters for Selecting Ambulatory Care Performance Measures

Motion: To approve the revised AQA Parameters for Selecting Ambulatory Care Performance Measures.

Result: The motion was adopted.


Discussion

One participant asked whether the principles covered all age groups. The Institute of Medicine (IOM) report discusses doing so, replied Weiss. In response, the participant said that the IOM report was seriously deficient in not including children. She asked that the AQA make explicit that the parameters include children. Another participant, however, expressed concern about explicitly calling out one group or otherwise developing a laundry list of who is covered.

Weiss called for a vote to approve the revised document with minor revisions. The motion was adopted.

AQA Principles of "Efficiency" Measures

Weiss noted that the workgroup had come up with new definitions in the AQA Principles of "Efficiency" Measures. These speak to several issues about measurement that affect efficiency, he said.

Discussion

One participant said that the definitions have repercussions for the health care system. He suggested that the workgroup needed to discuss a better definition of cost-effective care.

Another participant asked why, under "Cost of Care," the workgroup had added a footnote that said this is "commonly referred to in the marketplace as 'efficiency.'" Why was this footnoted, he asked, rather than being incorporated into the text? In response, Weiss said the workgroup thought of it as a transitional issue. The footnote is designed to give vendors and their purchasers a way to cross-map in the short term, he said, and we hope to sunset the footnote next year.

A third participant asked whether it was appropriate to separate out quality of care from the other five IOM-specified health care aims. She noted that when cost of care and efficiency were included, it allowed people to think at the population level. This is a really important issue, she said.

Weiss noted that the issue had been raised in the workgroup but had not generated much discussion. He said he wanted to make sure AQA was consistent with the IOM report and asked for comment from others in the room.

"I would say that quality of care is a broader issue that includes those five other aims," said one participant. A second person said that the IOM was talking about waste and overuse. He asked if there are aspects of cost or waste/overuse/misuse that can be construed and measured as quality issues (i.e. overaggressive ordering of tests). The IOM defined them as related, he added, but what we're talking about here is separable.

One participant said that his sense was that the goal was to have a system that is safe, equitable, patient-centered, and so forth. He noted that the bullet on quality of care suggested that the aim was to measure quality toward achieving these goals. He added that it was important to be able to measure whether the cost of care is appropriate. Another person observed that the goal was to say that all these health care aims should be delivered at the lowest possible cost.

One participant noted that "efficiency" was very complex, and that people had different perspectives on it. She said that while the workgroup had tried to define the terms, some of them seemed "a bit techie." While the six IOM aims are not sacred, she said, they have been widely used in accountability and education in health care. Finally, she cautioned against stripping any one of them out of the list. This is a financially driven set of definitions, she said, and we need to be thoughtful about rearranging them.

Another participant noted that one aim of the workgroup was to address what was going on in the marketplace. She noted that CMS has demanded that cost and resource use be measured. As a result, she said, the workgroup discussed whether it was appropriate, and how, to link these to clinical quality measures. I think the IOM is trying to make more specific micro-definitions relative to what's going on in the market, she said. Finally, she asked for feedback from participants representing the health plans.

This discussion has been valuable for framing cost of care and how it is used, said another participant, who suggested that the introduction to the principles include a discussion about examining cost of care in relation to other elements. Someone else suggested identifying cost of care as a specific IOM measure.

It sounds like efficiency doesn't mean quality, said one participant. She noted that cost of care is the focus of most discussions these days, and said that a deeper measure was needed going forward. She also observed that there wasn't a need to separate efficiency from quality. Just say the first level is cost, she said.

The Institute of Medicine talks about aims while we talk about measures, said another participant, who added that the IOM did not talk about measures within their context. It is paradoxical to separate the IOM aims from the list, he said. Instead, he suggested changing the AQA document to distinguish the list of aims from the measures.

Another participant suggested reorganizing the four definitions so that efficiency of care comes first, followed by cost of care and then quality of care. In response, Weiss warned that reorganizing the definitions would require a long discussion and he said he thought it shouldn't be done unless there was a groundswell of agreement.

Another person suggested a different reorganization, including taking the part of the definition of quality of care that discusses performance and moving it into the definition on efficiency of care. Then you don't need a separate definition for quality of care, she said, which is the one giving us all the trouble.

Weiss said that the last suggestion may indeed be workable.

One participant noted that he had studied economics. It bothers me that we're starting to use words in ways others in the world do not, he said. He added that quality can be achieved at low cost or at high cost—and that cost isn't usually a part of quality.

Weiss worked out revised language on cost and performance:

Efficiency of care is a measure of the relationship of the cost of care associated with a specific level of performance measured with respect to the other five IOM aims of quality.

One person said that a physician's greatest fear is that everything will be based on cost. In response, Weiss stressed that this is merely an endorsement of the IOM, and he noted that efficiency doesn't just refer to cost.

Motion: To approve the AQA Principles of "Efficiency" Measures, as revised.

Result: The motion was approved unanimously.


Patient Experience of Care Survey (A-CAHPS)

Weiss noted that the National Quality Forum (NQF) was planning to consider the patient experience of care survey, and he said that A-CAHPS work indicates that this will be a solid instrument. He then invited discussion on the matter.

One participant expressed concern about the tone of the questions and said that physicians will look awful based on negatively worded questions. She added that the survey was too long and that it implied boundless patient expectations. A second participant wondered whether only dissatisfied people might take the time and effort to complete the survey.

Weiss said that the questions were intended to provoke a response. He also observed that others did not consider the current length of the survey to be burdensome.

One person observed that the survey only refers to a single doctor. As we move forward to team care, he said, perhaps future surveys should refer to "the team that cares for me."

One participant noted that in the real world, if a patient was seen 11 months ago, he might not remember many details. If you ask a yes/no question, he asked, is a patient more apt to answer yes if he doesn't remember?

Another participant noted that lateness affects how physicians provide care, and said he wanted to see a question about whether a patient is late for an appointment.

I'm confused about the timing of the process, said another person. Why are we endorsing the survey, and in what stage of development is it? In response, another participant said that while the survey wasn't static, the items are pretty much solid. He added that there may be some deletions or additions, but that it was pretty close to what would be submitted to NQF.

Public accountability is difficult and painful, said one participant. This is neither the first nor the last time that reservations will be expressed, he said.

One person sought clarification on whether AQA participants were being asked to approve the survey concept, after which the workgroup would consider implementation issues (i.e., cost, burden to physicians, whether each plan should do this by itself, and aggregation). In response, one person said that his organization was collecting data on sample size and cost of administration.

In response to a question about the age of patients in the survey, Weiss noted that children are not currently included. Another person added that the American Board of Pediatrics is testing a pediatric version of the survey.

There was a question about who would implement the survey, and two questions about the scope of the survey. One person asked whether it was focused on assessing physician practices. Another wondered whether it was intended for all specialties and scenarios. In response, a member of the workgroup said that the workgroup could discuss whether the survey meets the needs of all populations. I think now it's intended for most adults in most settings, he said. If the workgroup sees the need for other surveys, he added, the workgroup will say so.

A participant questioned whether the survey addressed the need for a short-term focus on the care delivered. This is longitudinal data, he said, and you will get mixed results in how it is interpreted. He suggested looking at existing surveys for specific disciplines.

My initial reaction is that the survey is way too long and complicated for many patients, said another participant. Can the questions be prioritized, with shorter and longer versions? Yet another person said it would be helpful to get a sense of how the survey has been used and its degree of success. I need to know this to offer support, she said.

Weiss stressed that, to the workgroup's knowledge, no care survey had been more thoroughly tested. Supplemental questions are open for discussion, he said, and other surveys for specific populations may be appropriate. He urged AQA participants, however, not to preclude the need for a base survey.

Carolyn Clancy stepped in and said that people involved with the survey said they would be happy to come in and discuss it with the AQA. She added that there might also be a need for supplemental items.

Are you saying let's go forward with this survey for adults? asked one person. She also asked how the survey would work for older children, such as a 17-year-old getting contraception. In response, Weiss said he would bring back to the workgroup the question of surveys for other populations.

Another person suggested that the survey needed more review, as some questions about communications and access were relevant to most care, while others seemed to relate more to primary care.

One person noted that the survey testing process had involved looking at which questions were applicable and how to modify them to make them more so. We developed a core set of items for all specialties, he said, and then we will look at supplemental questions. If the AQA approves, added Weiss, then we will establish a subcommittee to address the key issues raised at this meeting and report back and ask for endorsement of additional documents.

Motion: To endorse the patient experience of care survey.

Result: The motion was adopted with two abstentions.


Cost of Care Measurement

Weiss said that the workgroup wanted to introduce an early draft of the cost of care measures and to discuss the process for selecting conditions and procedures for cost of care measurement. He noted the need to set parameters for discussion and to get actionable measures of cost of care. Actionable, he said, means information physicians can do something with.

He directed participants' attention to the document's overview, and said the aim of the workgroup was to develop general principles of cost of care measures and a parsimonious "starter" set of cost of care measures that:

  • Align with existing clinical quality measures.
  • Address prevalence, resource use, and practice variations.
  • Measure or identify overall or average cost drivers.

Weiss explained that the document discusses the specific elements of what the starter set should achieve. The current five elements include:

  • Preprocessing.
  • Episode groupers.
  • Adjustments.
  • Implementation rules.
  • Cost methodology (metrics).

He stressed that the list was not all-inclusive and could indeed be expanded. Weiss added that the NCQA was trying to think through what implementation rules might look like.

Weiss also drew participants' attention to the list of conditions for preferred ranking. He said the workgroup would undertake a modified Delphi process to have the list formally ranked for the May AQA meeting. He said that the workgroup expected to have a set of measures for endorsement in 8 months.

Finally, Weiss asked purchasers, vendors, and the health plans to work with his workgroup to manage the tension of standardization as precisely as possible without creating havoc in the marketplace. We want to push standardization to the max, he said.

Discussion

One member of the workgroup said that the workgroup had looked at the IOM report and the Medicare Payment Advisory Commission's (MedPAC) work addressing the major drivers of the health care system. She then asked participants if there were any conditions missing. If so, please let us know as soon as possible, she said.

One participant noted that the cost of care is different for a 5-year-old child with cardiac arrhythmia than for someone who is age 80. That discussion would happen in the implementation rules, replied Weiss. A second person asked about using appropriateness, not just episode methodology, to avoid comparing kids to adults. What if someone does 10,000 procedures efficiently; is doing 10,000 appropriate?

Weiss noted that his workgroup had not yet addressed appropriateness—but that the topic was on the agenda. There's a lot of interest in this issue going forward, he said.

One participant noted that CMS has struggled with the selection of conditions. The methodology for choosing is not always clear, he said. He stressed the need to ensure that the principles that are approved go into the selection process. He expressed some concern that selection by consensus (the ones people think are best) may not necessarily lead to the best measures. He noted that sometimes high-cost, high-volume procedures are selected but then the scoring finds little cost savings or impact on quality. Finally, he said that while using the top 10 conditions early on might be the most pragmatic move, it was important to consider conditions within the framework established by the principles in the future. In response, Weiss stressed that the discussion will not be about favorites.

Regarding mental illness, one participant asked whether there would be measures of the degree of coverage or of patient perceptions about their ability to pay. [Note: The concern here relates to variations in coverage policies that may impede delivery of recommended services.]

In response, Weiss stressed the need to think about copayments and those other unique organizational aspects of care that can dramatically affect the cost of care. He added, as an example, that the implementation rules may reflect the need to stratify by plan type or other criteria.

The same participant asked about the methodologies for attribution. Is the workgroup discussing how to handle multi-specialty group practices? she asked. She noted that perhaps it would be helpful to report at the individual level even if the individuals decide to practice as a group. Weiss said the workgroup had not discussed the question yet.

Another participant addressed the issue of ranking. He said that ranking at the physician level would assume that all physician specialties are covered in the same way. He noted that this was not yet reflected in the document.

One person asked about symptoms that are not diagnosed, and suggested the workgroup consider things about diagnoses that may be relevant to cost of care. Weiss said that it was a good suggestion, and he asked participants to bring to the workgroup's attention anything that they see missing. We are looking for an appropriate list on the table to work with, he said.

One participant endorsed the idea of using a politically modified Delphi process. We need to look at what CMS and Congress are doing to decide the top 10, he said. He suggested that the workgroup look at the episode groupers between now and May.

Wrapping up the discussion, Weiss said that the workgroup would re-present a refined draft of the key elements for cost of care measurement. He also noted that new subgroups had been formed on:

  1. Chronic and acute care measures.
  2. Surgical procedure measures.

He added that a cost of care exploratory task force would also be needed.

Use of Registry Data to Assess Physician Performance

Weiss presented a first draft of the AQA Principles in the Use of Registries for Measuring Physician Performance. He said that if aggregated clinical data exist, they should be used. (He noted, for example, that the Society of Thoracic Surgeons has some.) This can be an important tool if the model is there and it works, said Weiss.

Weiss noted that CMS held a recent meeting on the use of registries. He said that both surgical communities and procedural specialties (such as the American College of Cardiology) have created them, and that the boards have taken on registry models. He also noted that charts abstracted by physicians and put into a central Web repository could then be given back to physicians to improve quality.

Since it was clear that registries should be examined, we are looking for guidelines for doing so, said Weiss. As a result, he continued, the workgroup decided to focus on principles for the use of registries for quality purposes. In this process, he said, our aim is to focus as much as possible on individual physicians.

Carolyn Clancy noted that AHRQ was funding an outside group to look at the benefits of registries and come up with guidelines for them. She suggested that, in the future, people were likely to see physician specialty boards say, if you want to be a fellow or be certified, then you need to report.

Discussion

Opening the discussion, a member of the workgroup stressed that the discussion was now focused on principles and not the work details.

How does this mesh with aggregation? asked one participant, who noted that there had been some wonderful work on registries contributed by surgical specialties. We're making great progress, she said, but now purchasers and consumers need to use them in tandem.

One participant suggested that the document needs to address how registries are set up, how they are to be used, and by whom. Another participant suggested that the proposed aggregation should also include standardization.

Will the aggregations intersect with the administration of data? asked one participant. I hope so, said Weiss, noting that the question comes back to the issue of appropriateness.

Weiss also listed the workgroup's goals for 2006. He said they include: 

  • Endorsing patient experience of care measures.
  • Endorsing a starter set of surgical and medical specialty care measures.
  • Endorsing rules/logic for cost of care measures specific to key conditions that drive utilization, cost, and inappropriate care.
  • Continuing to develop registry principles and uses.
  • Starting work on composite measures.
  • Other work as the marketplace demands.

One participant noted that the NQF has a national technical panel on quality measures. How do they affect these goals? he asked. In response, Weiss said that it was his sense that the work of both NQF and NCQA will be important.

Closing out the discussion on performance measurement, Carolyn Clancy reminded participants that huge progress has been made. She said she was thrilled that AQA was expanding its scope of work and embracing a larger array of specialty groups. She wondered whether the AQA needed a new name. She also noted that the AQA steering committee has agreed to revisit the issue of resources. Finally, Clancy noted that the communications staffs at a variety of organizations are talking about briefing reporters on the AQA's progress.


2. A copy of the revised AQA Parameters for Selecting Ambulatory Care Performance Measures, along with the other documents discussed by the Workgroup on Performance Measurement, is available at http://www.ambulatoryqualityalliance.org/january12meeting/performancemeasurement.

