Agency for Healthcare Research and Quality

This information is for reference purposes only. It was current when produced and may now be outdated. Archive material is no longer maintained, and some links may not work.

AQA Second Invitational Meeting Summary

Session on Reporting

Improving Data Reporting

Randy Johnson, Motorola

Randy Johnson said that the workgroup on data reporting had focused its attention primarily on quality improvement. We recognize the need to focus on reporting that addresses quality, efficiency, equity, and patient satisfaction, he said.

Johnson said the discussion had reinforced what many of the workgroup members already believed—that reports can be important tools. He cited a number of uses, including:

  • Helping physicians and hospitals address internal efficiency and make quality improvements.
  • Informing patient health care decisions regarding providers, hospitals, and health plans.
  • Helping purchasers design health care plans.
  • Supporting purchaser and health care system development.
  • Building the infrastructure for pay for performance.
  • Improving the existing health care network.

He added that reports need to be meaningful, valuable, and useful, and they need to reflect gender and age, be risk adjusted, and incorporate a sufficient number of incidents to be valid.

The objective of the workgroup, which Johnson admitted the group hadn't really followed, was to develop a template or model for reporting and to enable differentiation based on quality and cost. We think these need to be available in a variety of formats (e.g., an employer needs different information than a hospital). In retrospect, he said, perhaps our objective should be to develop principles for reporting.

Johnson summarized the areas in which stakeholders appeared to be in agreement. He said participants agreed that the reports should:

  • Differentiate provider performance for quality and efficiency.
  • Enhance quality and efficiency.
  • Be based on measures that are actionable.
  • Create an end product that provides a usable benchmark and a basis for certification.

In addition, he said, it's important that the data can be used in multiple ways. He added that it was also important to collect data and report them at different levels, and to take into account differences in health care literacy (i.e., the need to report differently to different audiences).

The workgroup also posed a number of outstanding questions, said Johnson:

  • Where should the reports be focused (i.e., individual physicians, a physician group, a hospital)?
  • What responsibility do physicians have for the performance of the larger systems in which they may work?
  • Do we need models, or do we need principles?

Turning to next steps, Johnson stressed that it was important not to allow the goal of perfection to slow down or stop the process. If our focus is on developing report templates, he said, the question is whether we should allow those with expertise to design their own (physicians building reports for physicians, consumers for consumers, and so forth). He wrapped up his remarks by noting that employers and consumer groups have been looking for reports for some time, adding that he hoped to see a consumer-style report by 2007.


Opening up the discussion, one participant expressed a general concern about the overall conversation. Where's the value-added, she asked, and where is it productive for us to expend our energy? The participant said she thought the group had limited experience in efficiency metrics (and how to feed efficiency information back to physicians in a way that is actionable). She also questioned the environment for reporting, noting that the message needs to be different if it's steering consumers to/from a physician, aimed at determining performance bonuses, or being used for State licensing purposes.

More important, said the participant, how do we engage practicing physicians in this process? She said that a lot of the day's discussion had dealt with quality measurement as an external activity, rather than integrated into how physicians practice on a day-to-day basis. So we're building support tools and information products around that process, she said, rather than being in the business of grading or evaluating physicians, and rather than developing infrastructure that improves the delivery of health care in the United States.

In response, one member of the workgroup on data reporting noted that there were concerns about penalties for suboptimal levels of performance (as opposed to rewards for good performance). In terms of a strategic framework, she said, the best solution is to connect to electronic medical records so that data extrapolation is possible.

Another workgroup member noted that it was difficult to talk about reporting principles without knowing what was to be reported. There are other ways (besides the starter set of performance measures) of measuring quality, she said, picking up on the theme of how the reports would be used. This is the key to how we would report, she said, noting that all of the data thus far show that patients don't use report cards when they are available. She also pointed out that standardization of clinical processes and documentation has a plus side for physicians—more time to spend with their patients or in other activities. On the reporting side, there are two activities, she added: how the data are scored and how the data are actionable. We need to help physicians improve their quality without overburdening them, she concluded.

One participant took issue with the perception that patients don't care about data. That has changed, he said. As we move toward consumerism, we're hearing from the employer and consumer communities that it is very difficult to ask people to make decisions without information. Some employers are postponing rollout of consumer-driven programs, he said, because there is insufficient information. Another participant stressed that reporting must be useful to consumers in order for the process to have any value. A third noted that much of the focus on consumer reports addressed individual physicians, and suggested that perhaps reports need to be designed to help physicians select other physicians. A fourth participant picked up the theme, suggesting that once physicians are rated on efficiency, it's useful to include a component on referrals.

There was also discussion about the need to link principles to performance and to accountability. One person pointed out that he wanted to know the outcome for his treatment if he visits a physician in one practice versus a physician working in a different system. He also pointed out that it was important to be able to know which elements of a person's integrated health care delivery system were working and which were not. I think we should urge the workgroup to define terms and principles so that we know what we're talking about regarding controllability (because this is very different for a consumer versus a solo physician versus a physician within a system). Regarding accountability, the person noted that accountability was the leverage to make improvements in quality (including pay for performance). Without accountability, he said, we're simply changing the reporting system without addressing the underlying goals.

What should a report look like, who is it for, and what should it contain? While this theme underpinned the entire discussion on data reporting, there was little unanimity of opinion. One person suggested that the workgroup develop a standard report that would apply equally to a physician in a group health cooperative and one in a small practice. Another suggested that reports should be based on contextual measures (the extent to which the starter set represents and impacts an individual physician's practice). A third expressed concern about the possibility of individual providers ending up being held accountable for factors outside their control (e.g., if a patient doesn't show up at a referred specialist or doesn't follow a drug regimen). A fourth noted that, from a consumer standpoint, he was less interested in individual physician accountability than in outcomes. We need to keep effectiveness and efficiency in mind, he said, as we think about collecting, using, and packaging data. A fifth participant stressed that employers and consumers need a high level of detailed reporting on physicians and medical networks in order to make informed choices about their care.

There seems to be a broad theme across all three workgroups, observed one participant. The reason we are collecting an initial set of data is that we have some evidence that these are things physicians should be doing to treat patients. And we then need both to translate that information on a scale of 1 to 10 and also to define why doctors should be doing what they are doing.

Turning to the issue of next steps, Randy Johnson noted that a lot of different report formats are out there already, and suggested that asking people who have existing report formats to respond to a set of rigid standards would be problematic. As to what the reports should contain, Johnson noted that it was important to focus on both specific physicians and physician groups. If we don't measure the whole area of care (including the hospital the patient goes to, the lab, and the x-rays), then how are we going to measure efficiency?

Carolyn Clancy suggested that it was important to think of the various factors as principles for reporting. Then different entities can slice and dice the information in different ways, she said. Another participant stressed that developing principles was the only rational approach, noting that it was impossible to hold individual physicians or practice groups accountable for systems where none exist. For example, she said, how can physicians make referrals if there are no adequate data to make them?

The final comment refocused on what's doable now versus in the future. Until we have perfect measures, said the participant, there are ways we can help consumers differentiate quality and efficiency regarding what is happening now. And ultimately, she said, I would like to see a label (similar to a food label), in order to compare service providers.

Next Steps

The meeting adjourned following the discussion of reporting standards. Carolyn Clancy had previously asked participants to provide possible meeting dates for late April (and to let AHRQ know which dates were tied up). As the meeting closed, there was an expectation that the workgroups would move forward with their work and report back at a future meeting, most likely in late April.

Current as of April 2005


Internet Citation:

Second Invitational Meeting: Performance Measurement, Data Aggregation, and Reporting. Summary. April 2005. Agency for Healthcare Research and Quality, Rockville, MD.


