AQA Second Invitational Meeting Summary
Session on Data Sharing and Aggregation
Creating a Strategy for Health Data Sharing and Aggregation for Quality Improvement
David Kibbe, American Academy of Family Physicians
David Kibbe discussed the work of the workgroup on data sharing and aggregation. The group came up with some starting points:
- We are long overdue in enacting a national strategy for data collection, aggregation, management, and reporting on quality processes and outcomes for common, expensive, and chronic illnesses, especially from physicians' practices.
- There is currently no single trusted and competent national entity to oversee this set of tasks, which the workgroup has been calling "data stewardship."
- There are many local, State, and regional "data stewards" or data aggregating organizations that are achieving success, and there are likely to be many more in the near future.
- Standards are critical to success—standard measures, standard principles and operating rules, and standard technologies and protocols.
- It is in the mutual interest of, and a common responsibility of, all major stakeholders to encourage and promote data stewardship activities that protect the health data assets of private parties but that have a public value and will incur shared costs.
- For purposes of quality and cost efficiency, data sharing and aggregation should avoid paper—we are committed to moving to an electronic environment.
- Providers, including physicians' offices, must be able to access electronic data currently unavailable to them—the paradigm will be "data sharing along a health data supply chain," as opposed to one of "data producers and users."
From this starting point, said Kibbe, the workgroup has recommended the establishment of a new, nonprofit entity to be known as the National Health Data Stewardship Board. The board would set uniform principles, standards, and operating rules for health data sharing and aggregation. In addition, the workgroup recommends that the new board distinguish itself from existing data aggregation and reporting organizations by the nature of its mission and goals, governance structure, funding mechanisms, and responsibilities.
The workgroup has also recommended that the new board and the data stewardship activities it promotes be organized and operated based on a set of founding principles. Kibbe noted that the workgroup had struggled to make sure the list of principles was inclusive. These principles are data quality, independence, inclusiveness, transparency, fairness and neutrality, completeness of data sources, data protection, use of standards, access, affordability, and consistency with applicable laws.
Kibbe discussed possible models for the board, focusing on the Financial Accounting Standards Board (FASB). The key, he said, is that if we do not have such an entity, then we will have many different organizations doing this work.
Kibbe noted that the workgroup has not yet discussed next steps—but he offered two. First, he suggested that the broader group consider appointing a panel of appropriately qualified people to conduct a feasibility study of a national health data stewardship board and flesh out a possible governance structure, funding mechanisms, and responsibilities. Second, he suggested that the feasibility study include a review of how the many local, State, and regional organizations already doing this work are practicing data aggregation and stewardship, along with their business models, costs, and some of the problems they may be encountering. Finally, Kibbe suggested a 6-month timeframe for accomplishing this work and reporting back.
The discussion opened with a question about the purpose of data aggregation, as well as the connection between data reporting and quality improvement. In response, Kibbe noted that most data aggregation activities are tied to performance measurement. He also clarified that the workgroup's recommendation is that the new body be a standard-setting body (i.e., set principles, standards, and operating rules) for other organizations that do data aggregation, rather than undertaking the activity itself.
What's the fundamental problem that this is addressing? asked one participant. The problem is that there's not always trust or competence, said Kibbe. If we're going to engage in massive performance measurement in this country, he said, we ought to have some highest-level standards with which these organizations must comply. If we do not, then we will have chaos.
One participant called the proposal "an intriguing idea." He pointed out that there are groups doing this type of work at a national level, including the Centers for Medicare & Medicaid Services, the Joint Commission on Accreditation of Healthcare Organizations, and the National Committee for Quality Assurance. There's a lot out there that could be brought together so we are not reinventing the wheel, he said. Kibbe agreed that any new initiative would have to build on existing efforts. But, he added, he would leave it to the group to decide whether these other organizations are inclusive enough.
There are multiple data stewards out there, said one participant, so some of the standards would need to deal with interoperability. We need the ability to create a virtual database out of these multiple ones.
The "data sources" principle implies that we must have patient-specific data (otherwise there would be double-counting of data), said another participant, who suggested there was a strong link between this conversation and the upcoming discussion on data reporting. There's a potential to "slice and dice" the data in multiple ways if they are totally unprocessed data, he said.
The assumption is that organizations that are already doing this are probably using and will continue to use patient-identifiable data, replied Kibbe. The main point here is that there may be a need for an organization that is trusted anew, or achieves a level of trust and technical competency, to which participating organizations grant authority. He explained that the workgroup had looked at two separate models: the Federal Reserve (and its relationship to the banking industry) and FASB (a nonprofit organization recognized by Federal regulators as the standard setter for financial accounting and reporting). As we did this, he said, we realized the FASB model was probably most apt.
There was a question about the role of the proposed National Health Data Stewardship Board, leading Kibbe to clarify that it is not intended to be a repository for patient-identifiable data. He also reiterated that the workgroup had discussed data validation, and that the concept was embedded in several of the draft principles (including data quality).
One participant suggested that there had been a lot of discussion about the disruption caused by multiple data requests, but not about the disruption from data aggregation. I see a similar problem, she said, as you want one-stop shopping for data (rather than having to go to multiple players). Shouldn't we be talking about some kind of uniformity? she asked. Several other participants echoed this question. One asked, Who can afford to report and aggregate data multiple times, refresh it, replenish it, and then report it out over and over again? It seems we need to agree that a doctor reports out data only once, and that then these data are dumped into a database that everyone can access. Another noted that a discussion about data aggregation has to be overlaid onto the issue of data sharing, so a physician in Pennsylvania can get information on a patient who comes down from Massachusetts.
Kibbe noted that the need for uniformity is the reason the workgroup came up with the "radical idea" that there be an organization chartered to get all the health plans on the same page. We don't want physicians bombarded by multiple data set requests.
The question is what system do we envision for our country that is unambiguous, said one participant. While this can easily become a national/regional discussion, he said, the workgroup started out by trying to develop a mechanism for uniform standards. As yet, there has been no discussion about how we will roll out this system and how it will be relevant and respond to local market conditions. I think we need a rich fabric of local organizations to engage the process across the entire country, he said.
I don't think it's as chaotic out there as we think it is, Kibbe responded. There are organizations struggling now with how to get the data from their various communities, aggregate the data, and report the data back. Some are doing this quite well, and we should look at these organizations. There is a huge technical set of issues, he said, and the workgroup didn't see any alternative to a national oversight group. He suggested that one of the next steps is to look at the trusted and competent organizations that now exist and see how they are doing this work.
Echoing the call to consider a national oversight board, one participant suggested that meeting participants were setting about to transform the U.S. health care system. Information technology and data are essential to this process, he said. An oversight authority is critical to helping bring together the various computer systems so they can talk to each other and ensure the portability of medical records. The big question, said the participant, is whether this will be a public or private effort—and he suggested a partnership was needed. He also warned that this effort would cost far more than people realized.
A member of the workgroup noted that it was important to balance the starting principles with the practical reality of what we ultimately need to think through. This has a lot of technical, infrastructural implications. As we move ahead, he continued, we will need to consider the more technical questions and come forward with a fairly specific proposal. A second workgroup member stressed the importance of interoperability, adding that the group believed that local data efforts offer a laboratory for a future national effort. And Kibbe pointed out that the workgroup had looked at the potential negative effects (economic, social, and, potentially, quality of care) of different groups doing data aggregation on their own, duplicating efforts, and generally not getting anywhere cohesively.
It seems that we have two scenarios, said one participant. In the first scenario (our current situation), we don't have electronic health records or data exchange, but there are repositories to aggregate data—in which case it makes sense to develop such standards. But in the second scenario (what we see happening), we have electronic health records and data-sharing networks. The data would probably reside at the source, but you would have the ability to query over the network for the data you need. It seems that the workgroup should break down these two scenarios, she said. We need rules or guidelines to determine who is an authorized user, who is authorized to query, and who is able to gather the data.
We want standardized measures at the front end and standardized reporting at the back end, said one participant. But what comes between these two? He suggested that there was a gap regarding who would undertake the analysis of the data that were reported out. Who's going to generate the reports? he asked. If we adopt a decentralized and rule-based approach, this will get to the matter of standardized output. A second participant expressed concern about the possibility of ending up with multiple data aggregation activities and wondered what could be done to control the process.
One workgroup participant noted that the group had only talked about data sharing and aggregation within the parameters listed for the National Board. We set aside our differences to put down what we could reach agreement on, she said, and we have not yet discussed the issue of a data repository.
One participant urged that the Regional Health Information Organizations (RHIOs) be brought into the discussion. Another suggested, however, that the RHIOs were invented because there was no better plan.
A question was voiced about whether the discussion was really about national standards or about driving information on quality of care to areas of the country in some defined timeline. One participant said the real question for this group was how to take performance measurements coming from that workgroup to create viable outcomes. Another cautioned that the steps that must be taken to address confidentiality and the security of patient data need to be clearly articulated. If the public is to accept this process, she said, there must be a very explicit statement about the handling of data.
One participant expressed concern that the discussion implied that there were no existing structures and that the aim was to build a pristine system. But there is a system out there, and some of us are making large purchasing decisions every day. The question, he said, is whether we can do better.
Turning back to next steps, one participant said that it was important for the workgroup to address standard-setting (e.g., setting standards for what? How does standard-setting relate to current data availability? What standards are needed to integrate the data?). In addition, it is important to clarify whether the proposed national board would do any data aggregation of its own, and to understand how its work would relate to existing standard-setting efforts. Several people commented that the body that sets rules should not also aggregate data.
Finally, one person suggested the workgroup set a goal of addressing confidentiality and creating a system for pulling data from multiple sources in order to produce, by mid-2006, public report cards on a starter set of performance measures. And another wondered whether the aim of the group should be to develop a single benchmarking database or comparable databases that are publicly accessible.
In response to the last comment, Carolyn Clancy suggested that it would be helpful for the workgroup to discuss the implications of both models. She added that the workgroup should operate under the assumption that the system would be voluntary, with different degrees of incentives.