Performance Plans for FY 2000 and 2001 and Performance Report for FY 1999

This information is for reference purposes only. It was current when produced and may now be outdated. Archive material is no longer maintained, and some links may not work. Persons with disabilities having difficulty accessing this information should contact us at: https://info.ahrq.gov. Let us know the nature of the problem, the Web address of what you want, and your contact information.

Please go to www.ahrq.gov for current information.

GPRA Goal 4: Evaluate the effectiveness and impact of AHRQ research and associated activities. (HCQO)

Note: All Agency evaluation activities, including MEPS-related studies, are included under Goal 4. This is because the MEPS budget line covers only costs associated with data design, data collection and analysis, and data products.

Strategy

As explained in other portions of this document, interim outcomes of research can be evaluated on a relatively short-term basis. However, the ultimate outcome of how the research affects people receiving health care or people interacting with the system requires large, expensive retrospective studies. AHRQ is implementing a growing portfolio of evaluations that will show, iteratively, the outcomes of the investments of Agency funds.

Previous Successes

Examples of evaluations conducted by AHRQ:

Medical organizations increasingly are investing in the development and dissemination of health informatics tools to help patients make decisions about screening and treatment. These informatics tools provide treatment- or disease-specific health information to patients, especially when they are facing choices among ways to manage their illnesses. AHRQ sponsored a study to ascertain the scientific knowledge base underlying these tools and to provide a comprehensive assessment of existing computerized and noncomputerized tools. The results of this study included four recommendations for research priorities in this important resource for patient information. These priorities are being used in the development of the Agency's program on medical informatics.

Types of indicators

The interim outcomes of research can be evaluated on a relatively short-term basis.

AHRQ conducts evaluations of its major programs or products for one or more of the following purposes:

  • Assess the current state of the program or product, including its impact on health care.
  • Improve customer satisfaction with the program or product.
  • Target or prioritize future activities to increase their usability or usefulness.

Use of results by AHRQ

A contract with the Research Triangle Institute (RTI) evaluated the opportunities and challenges for working with private-sector organizations with access to large data sets which could support health services research. This review of types of organizations, the nature and scope of their data, and the conditions under which they could participate in research collaborations led to the development of the Fiscal Year 2000 initiative to form the Integrated Delivery System Research Network. Other evaluations have been used by AHRQ to better target activities, improve program management, and help in identifying future priorities for research.

Data Issues

Many of the evaluations are conducted with the assistance of consultants who are highly skilled in evaluation research and/or the subject matter. Others are customer satisfaction surveys cleared through OMB. A third category consists of evaluations conducted through consultations with experts and users to obtain direct feedback on a particular product. The evaluation protocols were developed in consultation with Agency staff. In order to ensure the integrity of the evaluations, the AHRQ staff assigned to the projects were not program staff responsible for the day-to-day administration of the program. Staff with applicable expertise are drawn from throughout the Agency to staff the evaluation projects. Additionally, advice on the evaluation questions as well as on the interpretation and use of the results is often sought from experts on the AHRQ National Advisory Council.

GPRA Goal 4—Fiscal Year 1999 Results

Objective 4.1: Evaluate the impact of AHRQ sponsored products in advancing methods to measure and improve health care.

The following evaluations of five core Agency programs/projects were completed in Fiscal Year 1999.

Indicator

An evaluation of the outcomes of outcomes research and the impact of AHRQ-supported outcomes and effectiveness research on clinical practice.

Note: The report on the evaluation study, The Outcome of Outcomes Research at AHCPR, is used as a primary resource for this section.

The full print version of the Outcome of Outcomes Research at AHCPR report may be obtained by contacting Joanne Book at (301) 427-1488 or at Center for Outcomes and Effectiveness Research, AHRQ, 540 Gaither Road, Suite 6000, Rockville, MD 20850. The report is also available on the Agency's Web site: http://www.ahrq.gov/clinic/outcosum.htm.

Results

The Outcome of Outcomes Research at AHCPR (now AHRQ).

Background. In 1998-99, following a decade of investment in outcomes and effectiveness research (OER), AHRQ pursued several activities in needs assessment and evaluation to ensure that future research investments would be informed both by a clear understanding of our customers' needs and by an evaluation of prior successes and lessons learned. We held several meetings with stakeholders to obtain their input on future priorities and conducted quantitative analyses to set the stage for discussion. In addition, we conducted an evaluation titled The Outcome of Outcomes Research at AHCPR.

In 1999, Agency efforts to evaluate the first decade of Outcomes and Effectiveness Research (OER) resulted in a report, The Outcome of Outcomes Research at AHCPR. The evaluation was conducted by the consulting firm, The Lewin Group, and was designed to:

  • Develop a framework for understanding and communicating the impact of OER on health care practice and outcomes.
  • Identify specific projects that illustrate the research impact framework.
  • Derive lessons and options from past efforts that can help develop strategies to increase the measurable impact of future research sponsored by AHCPR.

In addition to this report, the authors have written or contributed to several recent review articles about outcomes and effectiveness research.

OER accomplishments. At least three conceptual developments have been strongly influenced by AHCPR-sponsored work:

  • The increasing recognition that evidence, rather than opinion, should guide clinical decisionmaking.
  • The acceptance that a broader range of patient outcomes needs to be measured in order to understand the true benefits and risks of health care interventions.
  • The perspective that research priorities should be guided in part by public health needs.

Other accomplishments include:

  • OER studies have often provided descriptive data that challenged prevailing clinical ideas about how best to manage specific clinical problems.
  • Tools and analytic methods have been developed, including strategies for conducting systematic reviews and meta-analysis (now used by AHCPR's Evidence-based Practice Centers and others), instruments for measuring health outcomes important to patients, and sophisticated techniques for analyzing observational data to adjust for disease severity and minimize bias.
  • AHCPR's funding for OER has produced a network of institutions and trained investigators capable of conducting rigorous evaluations.
  • A growing appreciation of evidence-based medicine as a guiding framework for decisionmaking has intensified interest among clinicians, health systems leaders, and purchasers in information about the relationship between clinical and organizational interventions and patient outcomes. In particular, recent interest in quality measurement and improvement has resulted in increasing use of OER results as the basis for performance measures for "report cards" and accreditation.

Lessons learned about OER.

  • The framers of OER realized that existing data and studies might represent an inexpensive source of knowledge about effective care but might not be sufficient to address all questions about which treatments work best, and for which patients.
  • Lessons were learned about study designs, use of data, and associated bias. Further work is needed to explore more systematically how to associate the features of a particular clinical problem with the most appropriate tools and methods to study that problem (given that the goal is to promote decisions that will improve outcomes of care).
  • Additionally, research and experience have demonstrated that development and dissemination of high-quality, highly credible information is necessary to alter practices, but it is not enough. Enhanced knowledge must be linked with supportive practice environments and active implementation efforts.


Indicator

An evaluation and synthesis of:

  1. Primary care research supported by AHRQ.
  2. An assessment of the current state of the science and future directions for primary care research.

Results

Progress Report—The State of the Science in Primary Care Research: An Assessment of Recent AHRQ Contributions and Future Opportunities.

In a report published in 1996, a committee of the Institute of Medicine defined primary care as "the provision of integrated, accessible health care services by clinicians who are accountable for addressing a large majority of personal health care needs, developing a sustained partnership with patients, and practicing in the context of family and community." At the same time, the committee acknowledged that the "paucity of primary care research and development leaves primary care insufficiently prepared to confront the challenges and opportunities inherent in the committee's definition."

In 1999, the Center for Primary Care Research (CPCR) within AHRQ began the task of classifying recent contributions to the primary care research base and identifying, as recommended by the IOM committee, areas of primary care research that warrant high-priority attention. This brief paper reports on our progress to date.

A prerequisite to formulating an agenda for primary care research is knowledge of the current status of the science base supporting primary care, including the gaps in that base. With this in mind, we undertook a review of the major primary care research findings published in peer-reviewed journals during the preceding 5 years (1994 through 1998), with the intention of then identifying the portion of that published research that had been supported by AHRQ. The goals of this effort were to:

  1. Develop a framework, or typology, that captures the major primary care research categories reported on over a recent five-year period.
  2. Identify areas within this typology in which the primary care research base appears underdeveloped.
  3. Characterize AHRQ's specific contribution to that research base (beginning five years after the establishment of the agency in 1989), as well as the key areas of primary care research that warrant future federal funding.

Our first task was to identify the major primary care research articles published from 1994-98. Since earlier work with the National Library of Medicine made it clear that this body of literature could not be adequately recovered through the usual electronic searches (Medical Subject Headings [MeSH], etc.), we found it necessary to go directly to the journals in which the majority of primary care research in the United States is published.

To identify those journals, we asked a group of primary care researchers to list the journals to which they submitted their most important research; we also asked leaders of professional primary care organizations to list the journals they most frequently consulted for scientific information to guide their daily clinical or administrative work.

Based on this information, nine journals were identified as the repositories of major primary care research published in the United States:

  • Annals of Internal Medicine.
  • Archives of Pediatrics and Adolescent Medicine.
  • Health Services Research.
  • Journal of the American Medical Association.
  • Journal of Family Practice.
  • Journal of General Internal Medicine.
  • Medical Care.
  • New England Journal of Medicine.
  • Pediatrics.

The next step was to identify the articles published in these journals which reported on studies conducted in the United States that can be considered primary care research. Toward this end, we established the following criteria: to be considered in the study, an article had to report on:

  1. An empirical evaluation (editorials, reviews or opinion pieces were excluded).
  2. Research conducted in the United States.
  3. Research conducted within a primary care setting.

We reviewed a total of 5,850 research articles published between January 1994 and December 1998 in the nine journals listed above. After applying the listed criteria, we determined that 915 of these articles (15.6 percent) fulfilled our criteria for primary care research. The percentage of primary care research included in each journal was fairly consistent from year to year but varied dramatically from journal to journal. For example, over 55 percent of the articles published in the Journal of Family Practice fulfilled our criteria, while only 2 percent of articles published in the New England Journal of Medicine could be considered primary care research.

The 915 primary care articles were individually classified into six main categories of research. The percentage of articles that fit into each category is as follows: (a) epidemiological studies, 17 percent; (b) descriptive clinical studies, 41 percent; (c) interventions/trials, 12 percent; (d) studies of the organization of services, 24 percent; (e) evaluations of workforce or other policy issues, 3 percent; and (f) development of methods or measures, 1 percent.

The largest category of articles (descriptive clinical studies) was further sub-classified. The 375 articles in this category were sub-classified as follows: (a) studies on communication or counseling, 8 percent; (b) research on values/ethics/preferences, 15 percent; (c) preventive care, 18 percent; (d) methods of diagnosis, 21 percent; (e) treatment issues, 25 percent; (f) cost-effectiveness studies, 2 percent; (g) studies of performance/quality of care, 8 percent.

In addition, all 915 primary care articles were classified according to the research design/method used by the investigator. These categories (and percentages) were as follows: (a) cross-sectional design, 51 percent; (b) prospective cohort design, 9 percent; (c) retrospective study/chart review, 13 percent; (d) controlled trial, 11 percent; (e) secondary data analysis, 13 percent; (f) meta-analysis/decision or cost-effectiveness analysis, 2 percent.
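
For illustration only, tallies like those reported above can be reproduced with a short script. The following sketch is not part of the study; the category labels and entries shown are hypothetical stand-ins for the study's coding data.

    from collections import Counter

    # Hypothetical coding data: one main-category label per included article.
    # In the study itself, 915 of 5,850 screened articles met the criteria and
    # were assigned to one of six main categories of primary care research.
    article_categories = [
        "descriptive clinical studies",
        "studies of the organization of services",
        "epidemiological studies",
        # ... one label per classified article ...
    ]

    def category_percentages(labels):
        """Return each category's share of the classified articles, in percent."""
        counts = Counter(labels)
        total = len(labels)
        return {category: round(100 * n / total, 1) for category, n in counts.items()}

    screened = 5850   # research articles reviewed in the nine journals, 1994-1998
    included = 915    # articles fulfilling the primary care research criteria
    print(f"Inclusion rate: {100 * included / screened:.1f} percent")  # about 15.6
    print(category_percentages(article_categories))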

Only 3.5 percent of published primary care studies were conducted within primary care practice-based research networks.

While much work remains to be done on identifying (and verifying) the number and percentage of published research studies supported by funding from AHRQ, preliminary data indicate that less than 20 percent of the published studies acknowledge AHRQ as a source of funding.

Further classification and sub-categorization of the published primary care research remains to be done. However, we are able at this point to make the following tentative conclusions regarding the state of primary care research:

  1. Though journals considered most receptive to primary care research included a significant proportion of primary care research articles, only a small percentage of all published articles fulfilled our criteria for primary care research.
  2. Approximately 60 percent of the published primary care research we reviewed focused principally on clinical issues (epidemiology, clinical care, or interventions); less than 30 percent examined issues related to primary care health services research.
  3. Within the body of research dealing with clinical issues, there was a rich diversity of studies. Notable was the small percentage of studies that considered cost-effectiveness issues in primary care.
  4. Cross-sectional designs (e.g., mailed surveys, in-office questionnaires or interviews) predominated among the methods used in the recent past by primary care researchers, followed by retrospective studies/chart reviews.
  5. Practice-based research networks have yet to contribute in any major way (in terms of quantity of studies) to the body of published primary care research.

The final typology of primary care research will be useful in determining future directions in primary care research. The recent publication of a request for formal proposals for primary care practice-based research networks (PBRNs) will further focus the primary care research agenda on several priority areas, including informatics and health care disparities. Upcoming expert meetings on rural health care, screening for alcoholism, and end-of-life care will also provide useful goal-setting for the primary care research agenda. The final typology of primary care research and the planned expert meetings will further delineate CPCR's future role as the principal source of funding for primary care research in the Department of Health and Human Services.

Indicator

AHRQ's State data strategy will be redesigned based on consultations with State policymakers, researchers, hospital associations, and others about their past use of data from the Healthcare Cost and Utilization Project (HCUP), as well as additional data needs.

Results

The Healthcare Cost and Utilization Project (HCUP) is a long-standing public-private partnership to build a multi-state data system. Throughout the Fiscal Year 1999 redesign effort, the HCUP team sought and received input from key stakeholders and other sources, including State HCUP partners, hospital associations and other private data organizations, policymakers, and researchers.

A key forum for input occurred at the annual HCUP State Partners meeting in May 1999, where 19 of the 22 partner States participated along with representatives from other public and private organizations. All participants examined the current status of the HCUP project and gave feedback on suggested improvements and future directions for the project.

Based on input received, the following redesign efforts have been put in place for the HCUP project:

  • During 1999, the number of HCUP State partners grew from 19 to 22 states. New state partners were selected for geographic diversity, population concentration, representation of important population subgroups (e.g., racial and ethnic minorities), and immediate availability of data.
  • In 1999, the HCUP inpatient hospital data effort expanded to include other settings. Hospital-based ambulatory surgery data was collected from nine states on a pilot basis, along with emergency department data from one state. Data from these new sites is being evaluated for data quality and policy relevance.
  • HCUPnet is now available for public access on the Agency's Web site. HCUPnet allows users to tailor an online query of HCUP's Nationwide Inpatient Sample (NIS), the largest all-payer inpatient database in the U.S. HCUPnet is ideal for developing national estimates and analyzing national trends, including trends for hospitalizations that can only be analyzed with large sample databases (e.g., care patterns for rare conditions, frequency and distribution of uncommon procedures such as transplantation).
    In less than 2 months, the site received over 2,100 hits, an average of 51 per day. The average number of requests per visit (i.e., the number of screens a user views) is 10.7, indicating that users are exploring the site in depth rather than leaving after a single page. Thirty percent of visits are from users with .com organizations, 23 percent from .net, 9 percent from .edu, 6 percent from .org, and 2 percent from .gov or .mil. (An illustrative tally of such usage statistics is sketched after this list.)
  • Eleven of the 22 Statewide Inpatient Databases (SID) are now available from a single point of access, under the auspices of AHRQ. Prior to September 1999, the only means to access SID data was to approach each HCUP partner State individually, determine whether the data organization released its SID, obtain information about the State-specific application process, and successfully complete that process. This method was time-consuming for researchers because each State had different application requirements.
    The Central Distributor allows researchers a more efficient method to gain access to HCUP-formatted data from several States since a single application process is used for all states. AHRQ is currently assisting the data organizations in the release of the 1995 and 1996 SID. AHRQ continues to work with the remaining states with the goal of making the SID universally available from a single point of access.
  • A feasibility study is underway to explore construction of a dataset specifically aimed at children's studies, in response to the growing interest of policymakers and researchers in studying pediatric hospitalizations. Children comprise about 16 percent of the HCUP Nationwide Inpatient Sample (which has 5.6 million observations); however, many pediatric conditions are relatively uncommon, which makes analysis difficult despite the large sample size of the NIS. The new children's database is in the early stages of development. The Agency is consulting with potential users (e.g., pediatric researchers, CDC staff involved in birth defects studies) to best design the database to allow more reliable estimates for uncommon conditions and procedures.
  • Efforts are now underway to create a new database for minority studies called the Nationwide Inpatient Sample for Minority Studies (M-NIS). This dataset would enable the hospitalization experience of racial/ethnic groups to be studied, and in particular would facilitate disparities analysis. This dataset will be based on data from hospitals in the 16 HCUP states that provide data on race/ethnicity.
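
As a purely illustrative sketch, and not a description of how AHRQ actually compiles HCUPnet statistics, usage figures of the kind cited in the list above could be derived from a simple tally of web-server visit records; the record format and values below are hypothetical.

    from collections import Counter

    # Hypothetical visit records: (visitor host name, pages requested during the visit).
    visits = [
        ("workstation.example.edu", 14),
        ("analyst.example.com", 9),
        ("planner.example.gov", 12),
        # ... one entry per visit ...
    ]

    days_observed = 41  # roughly "less than 2 months"
    total_requests = sum(pages for _, pages in visits)

    print(f"Average requests per day: {total_requests / days_observed:.0f}")
    print(f"Average requests per visit: {total_requests / len(visits):.1f}")

    # Share of visits by top-level domain (.com, .net, .edu, .org, .gov, .mil).
    tld_counts = Counter(host.rsplit(".", 1)[-1] for host, _ in visits)
    for tld, count in tld_counts.most_common():
        print(f".{tld}: {100 * count / len(visits):.0f} percent of visits")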

In addition to routine contact with HCUP partners, representatives from HCUP actively participated as faculty for a diverse assortment of professional conferences, giving seminars on the HCUP project and eliciting input on efforts to improve the usefulness of the database:

  • User Liaison Program (ULP) on Managed Care, Medicine and Public Health: Building Collaborations that Work (meeting for State policymakers), September 1999.
  • Conference on Health Statistics, National Center for Health Statistics (NCHS), August 1999.
  • National Meeting, Society for General Internal Medicine (SGIM), May 1999.
  • NIH-Funded Conference on Funding, Evaluating, and Assessing Sources of Health Data, May 1999.
  • Annual Meeting, Association for Health Services Research (AHSR), June 1999.
  • Building Bridges Research Conference IV (meeting of managed care researchers), April 1999.
  • User Liaison Program (ULP) on Making Evidence-based Decisions: Technology Assessment for Coverage and Disease Management, July 1998.

Indicator

Results of the evaluation of the Consumer Assessment of Health Plans (CAHPS®) study will be used to improve the usability and usefulness of the tool. Findings are expected to show whether:

  • The survey-based information from CAHPS® helps consumers make better health care decisions.
  • The information increases consumer confidence when choosing a health care plan.
  • CAHPS® is used by public and private organizations.

Results

Results from the CAHPS® demonstration sites will be available over a period of time as data collection, analysis, and interpretation are completed at each site. Additionally, grantees are working collaboratively to summarize results across sites. The plan is to publish these results in the scientific literature. Preliminary findings indicate that:

  • Consumers say that quality is an important consideration in their choice of plan.
  • Quality affects their choice of plan.
  • Consumers have a favorable reaction to the CAHPS® reports.
  • Consumers use CAHPS® data when choosing a plan.

Indicator

Evaluation studies will be developed and initiated in Fiscal Year 1999 on:

  1. The quality and usefulness of the evidence reports and technology assessments produced by the Evidence-based Practice Centers.
  2. The impact of the use of these products on the health care system.

Results

The final evaluation report will be received in February 2000.


Objective 4.2: Evaluate major dissemination mechanisms.

Indicator

AHRQ Clearinghouse customer satisfaction rated at 98 percent. (Baseline: Overall experience in ordering from Clearinghouse—96.4 percent in the first half of Fiscal Year 1997.)

Results

Clearinghouse Customer Service Survey, 1999


Question                                                        Answer
Was your question answered within a reasonable time?            Yes, 99.7 percent
Was your call handled in a polite and helpful manner?           Yes, 99.6 percent
Did you get the information or assistance that you requested?   Yes, 97.2 percent
If you used our automated answer system, were the directions    (90.5 percent said they never
easy to follow?                                                  used the system before.)

How would you rate the overall quality of service, using a scale of 1 to 5, from lowest quality to highest quality?

  Five:   1,248
  Four:     399
  Three:     68
  Two:        9
  One:       13

Total number of calls for the survey: 4,603
Total number of callers transferred to the survey: 2,091
Total number of callers who completed the survey: 1,737
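
The report does not state how the single satisfaction percentage cited in the indicator is computed from these responses. Purely as an illustration, the rating counts above could be tallied as follows, using a "rated 4 or 5" share as one possible convention (an assumption, not the Clearinghouse's actual metric).

    # Tally of the 1-to-5 overall-quality ratings reported in the survey table above.
    ratings = {5: 1248, 4: 399, 3: 68, 2: 9, 1: 13}

    completed = sum(ratings.values())          # 1,737 completed surveys
    top_two = ratings[5] + ratings[4]          # callers rating quality 4 or 5

    print(f"Completed surveys: {completed}")
    print(f"Rated 4 or 5: {100 * top_two / completed:.1f} percent")   # about 94.8
    average = sum(score * count for score, count in ratings.items()) / completed
    print(f"Average rating: {average:.2f}")    # about 4.65 on the 1-to-5 scale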

This survey was cleared under OMB 0937-0201, entitled "Survey of AHCPR Publications Clearinghouse."

Indicator

Customer satisfaction data on AHRQ consumer publications (useful/relevant) was rated at 90 percent. (Baseline: 81.1 percent from 1997 survey).

Results

Satisfaction was rated at 81.3 percent. The main reasons that customers were not satisfied were:

  1. They ordered it but didn't read it.
  2. Someone else ordered it for them.
  3. The publication was too general or not specific to the person's condition.

GPRA Goal 4—Fiscal Year 2000 and 2001 Indicators

Objective | Fiscal Year 2000 Indicator | Fiscal Year 2001 Indicator

Fiscal Year 2001 Objective 4.1: Evaluate the impact of AHRQ sponsored products in advancing methods to measure and improve health care outcomes and quality.

AHRQ's HCUP Quality Indicators (QIs) will be redesigned based on consultations with State policymakers, researchers, hospital associations, and others about their past use of the QIs. By the end of Fiscal Year 2000, a new set of quality indicators will be defined and feedback obtained from a new set of HCUP QI users. In addition, AHRQ will provide access to recent national-level QI information via both the Internet and through published reports, with special focus on disseminating information to hospital users and organizations with responsibility for hospital quality reporting.

Use of evidence reports and technology assessments to create quality improvement tools in at least 10 organizations. (Baseline under development.)

For at least four evidence reports or technology assessments per year, work with partners to measure how the reports or assessments were used and what impact they had on clinical decisionmaking and patient care. (Baseline under development.)

At least 3 examples of how research informed changes in policies or practices in other Federal agencies. (Baseline under development.)

AHRQ will report on the extent to which CONQUEST assists those who are charged with carrying out quality measurement and improvement activities and the extent to which it helps advance the state of the art in clinical performance measurement. (Baseline will be established by the evaluation study.)

CAHPS® has assisted the Health Care Financing Administration (HCFA) in informing Medicare beneficiaries about their health care choices. The use and impact of this information is determined by surveying a sample of these beneficiaries. (Baseline under development.)

At least one quality measure from Q-span (or instances where AHRQ research contributes to the development of measures) is used in the Health Plan Employer Data and Information Set (HEDIS) by the National Committee for Quality Assurance (NCQA), in measurement activities of the Joint Commission on Accreditation of Healthcare Organizations (JCAHO), or by others who monitor health care quality. (Baseline in Fiscal Year 1998—one quality measure adopted and one instance of AHRQ-sponsored research contributing to adoption of measures.)

Evidence-based Practice Centers

Use of evidence reports and technology assessments to create quality improvement tools in at least 15 organizations. Budget: Commitment Base.

For at least four evidence reports or technology assessments per year, work with partners to measure how the reports or assessments were used and what impact they had on clinical decisionmaking and patient care. Budget: Commitment Base.

Findings from at least 3 evidence reports or technology assessments will affect State or Federal health policy decisions. Budget: Commitment Base.

Use of evidence reports or technology assessments and access to the NGC site informed organizational decisionmaking in at least 4 cases and resulted in changes in health care procedures or health outcomes. Budget: Commitment Base.

Research: At least 3 examples of how research informed changes in policies or practices in other Federal agencies. Budget: Commitment Base.

Quality Measures: Achievable Benchmarks of Care are used for quality improvement activities by Peer Review Organizations. Budget: Commitment Base.

Use of dental performance measures by dental service and insurance organizations. Budget: Commitment Base.

HCUP quality indicators incorporated into government, quasi-government (JCAHO), and hospital efforts to improve the quality of care. Budget: Commitment Base.

National Guideline Clearinghouse™

At least 10 users of the National Guideline Clearinghouse will use the site to inform clinical care decisions. Budget: Commitment Base.

Guideline development or quality improvement efforts by users will be facilitated through use of NGC in at least five cases. Budget: Commitment Base.

NGC information will be used to inform health policy decisions in at least two cases. Budget: Commitment Base.

Improvements in clinical care will result from utilization of NGC information in at least three cases. Budget: Commitment Base.

Training Programs: Two-thirds of former pre- and postdoctoral institutional award trainees are active in the conduct or administration of health services research. Evaluation results to date show:

  • 76 percent (of respondents) embark on a research or research administration career upon completion of training.
  • 57 percent are actively involved in a research grant or contract.
  • 75 percent have had at least one publication.

Budget: Commitment Base.

Fiscal Year 2000 Objective 4.2: Evaluate the impact of AHRQ sponsored products in advancing methods to measure and improve health care quality.

Use of MEPS data in 1 percent of research applications received by AHRQ. (20/400, or 5 percent, in Fiscal Year 1999. Because of the budget increase, AHRQ expects to receive a significant increase in the number of applications. The indicator was changed based on these changing circumstances.)

MEPS products started to be available in Fiscal Year 1998, with more to be available in Fiscal Year 1999. AHRQ is publishing program announcements indicating interest in receiving grant applications involving the use of MEPS data. The first research proposals using MEPS data are expected in Fiscal Year 1999.

Distribution of MEPS data sets to at least 2,500 requestors.

Baseline in Fiscal Year 1998—916 data sets downloaded from the Web site; 1,000 CDs distributed at conferences and other venues.

Feedback from recipients of MEPS data indicating that the data were timely, useful, and of high significance. Baseline under development.

At least five examples of how research using MEPS has been used to inform decisions by Federal, state, and private sector policymakers. Baseline under development.

Use of MEPS data in AHRQ research applications will increase by 10 percent over the number received in the Fiscal Year 1999 baseline period. Budget: Commitment Base.

Feedback from MEPS workshop participants indicating that the workshops were useful and timely. Budget: Commitment Base.

At least five examples of how research using MEPS has been used to inform decisions by Federal, State and private sector policymakers. Baseline not yet available. Budget: Commitment Base.


GPRA Goal 5: Support Department-wide Initiative to Improve Health Care Quality through Leadership and Research. (HCQO)

The President mandated the establishment of the Quality Interagency Coordination Task Force (QuIC) as a vehicle for promoting collaboration among the Federal Agencies with health care responsibilities to improve the quality of care in America. Secretaries Shalala and Herman are co-leading this activity, but asked the AHRQ Director to serve as operating chair. The QuIC is working to improve patient and consumer information, quality measurement systems, the workforce's ability to deliver high quality care, and the information systems needed to support the analysis of the care provided.

The recommendations for assuring and advancing the quality of health care released by the President's Advisory Commission on Consumer Protection and Quality in the Health Care Industry have contributed significantly to the development of quality-related research being proposed by AHRQ.


Priorities for the QuIC:

  1. Improving patient and consumer information.
  2. Providing key opportunities for clinical quality improvement.
  3. Improving measures of quality.
  4. Developing the work force to provide quality.
  5. Improving information systems.

Strategy

The work the Agency is doing to support this initiative is woven into the three priority areas that are proposed in the Fiscal Year 2001 budget. These objectives represent aspects of other programs that will directly contribute to the goals of the Initiative to Improve Health Care Quality.

Previous Successes

Quality Indicator (QI) Taxonomy Meeting: The Agency for Healthcare Research and Quality and the Health Care Financing Administration jointly sponsored a meeting under the auspices of the QuIC to develop a QI taxonomy. The meeting included participants from a number of the Federal agencies represented on the QuIC including AHRQ, HCFA, CDC, DOD, HRSA, VA, the Office of the Assistant Secretary for Planning and Evaluation (OS/ASPE), and the Coast Guard. Also present was the Medical Review Organization (under contract to HCFA).

The meeting was the first step in development of a taxonomy of quality indicators that could be used by Federal agencies in a variety of projects including the advancement of the research agenda of various agencies within the Department of Health and Human Services and other Federal agencies and the development of HCFA Peer Review Organization Sixth Scope of Work. The draft documents developed from the meeting will be refined in a report and a published paper and made available to all Federal agencies and other interested parties.

National Quality of Care Assessment: The Secretary's Quality initiative and the Agency's Reauthorization call for the Agency to lead efforts to measure the current quality of health care in the Nation. A preliminary assessment of the currently available measures and data shows significant gaps. For example, we are currently unable to provide nationally representative data about the quality of care for trauma or many other life-threatening events, we have no nationally representative data on the frequency with which errors occur, and we have no national data on patients' experiences with the care they receive. In the initiative, we will identify what gaps need to be filled and will engage in research projects to fill them. These process measures will track our progress in closing those gaps.

Funding of a grant with Louisiana State University to support research into the development of a tool that provides a common language and basis for comparing patient preferences and quality measures: The grant will be used to develop and test methodology related to MEPS data on families' perceptions of the quality of their usual sources of care (Q-USC) and the degree to which their children express behavioral and emotional problems.

The Health Care Informatics Standards Activities of Selected Federal Agencies (A Compendium): The Agency for Healthcare Research and Quality has produced two reports to compile the health care informatics standards activities that have been voluntarily reported by selected Federal agencies. The initiative was originally undertaken to assist:

  1. The Secretary of Health and Human Services in making health data standards choices for administrative simplification (mandated under PL 104-191).
  2. The Department of Health and Human Services Data Council's oversight of health data standards.
  3. The White House in meeting the goals of the Administration to promote the widespread use of the National Information Infrastructure (NII) in health care.

The report also provides information to assist HHS in responding to the request of Vice President Gore (March 1995) to improve the coordination of Federal activities on health care data standards development.

Collaborative Opportunities: These projects would provide tools that can be used by both Government and private-sector entities and also involve possible collaboration with private-sector groups. Activities include identifying projects that other agencies are funding and that need co-funding support, identifying projects for which AHRQ grants can be sought to advance or expand existing projects with other agencies, and pursuing possible collaborative efforts with NLM, DOD (several components), the VA, and the Government Computerized Patient Record workgroups.

Types of Indicators

Process and output measures are used to document steps being taken in the quality initiative that aims to coordinate and increase the Federal government's focus on improving health care quality. The steps presented in this plan have been developed by an intra-governmental task force and reflect major milestones in the effort. Because this is a relatively new initiative, many indicators reflect initial efforts on which future, outcome-oriented steps will be based, including critical gaps in knowledge.

Use of Results by AHRQ

The QuIC provides AHRQ with opportunities to further two major Agency goals:

  1. In working with the Federal agencies that provide and/or purchase health care for millions of Americans, AHRQ is learning what major users of health services research on quality, evidence-based medicine and other topics need. This provides AHRQ with an invaluable source of real-time user input and directly influences the Agency's research agenda and product development.
  2. The QuIC provides AHRQ with unparalleled opportunities to advance its Translating Research Into Practice agenda. The Agency is able to inform the Federal health care community about the existence of research and products that currently are in the portfolio and are relevant to the issues the community is wrestling with.

Data Issues

The results for these indicators are largely completed work products and success in meeting project milestones. As the Director of AHRQ is the QuIC operational chair, the AHRQ Coordinator for Quality Activities is assigned to monitor the progress of the various workgroups and maintains all pertinent data. The majority of the group's work products are made available to the public upon completion. Beginning in February 2000, the QuIC Web site will be operational at http://www.quic.gov.

GPRA Goal 5—Fiscal Year 1999 Results

Objective 5.1: Provide leadership for the Executive Branch's Quality Interagency Coordination Task Force (QuIC)

Indicator

Collaborative work groups are established under the QuIC to undertake projects with direct application to improving quality of care.

Results

QuIC Workgroups were established in May 1998. Projects were initiated in August 1998 and are still ongoing in three areas: Efforts to improve current patient care practices, efforts to create quality improvement tools, and efforts to help inform Americans about health care.

Indicator

In addition to the work on specific projects chosen by the QuIC, communication is facilitated on common issues such as:

  1. Implementation of the Bill of Rights and Responsibilities from the President's Commission on Consumer Protection and Quality in the Health Care Industry.
  2. Organization or management strategies to improve quality of care.

Results

In terms of communication on common issues, the participating agencies have:

  • Submitted an update on their activities to implement the Consumer Bill of Rights (January 7, 1999).
  • Worked together to determine how best to collaborate with the National Forum on Quality Measurement and Reporting.
  • Are working together on papers and presentations on quality issues.

Objective 5.2: Conduct research to expand the toolbox of measures and risk adjustment methods available to help measure the current status of quality in the Nation.

Indicator

Inventory of measures and risk adjustment methods currently in use by Federal Agencies will be developed.

Results

The inventory of measures and risk adjustment methods was developed and reported in March 1999. It has led to comparisons of similar measures to try to identify which measures are simpler to use and yield sufficiently detailed data to support analyses. The inventory also resulted in identification of common areas of need for measures, discussions of how to develop the measures jointly, and collaboration on identifying measures that are sufficiently robust to be used for the National Quality Report.

Indicator

Assessment of measures and risk adjustment methods needed by Federal Agencies will be conducted.

Results

The assessment of measures and risk adjustment methods was initiated in April 1999 and is still ongoing.


Objective 5.3: Inform health care organizational leaders and others how to design quality into their systems.

Indicator

Review research conducted that identifies appropriate ways of redesigning health care delivery systems to reduce errors.

Results

The review of research was completed in August 1999. An initiative to reduce errors, based on the synthesis of the research, will be undertaken beginning in early February 2000.


Objective 5.4: Improve understanding of how to ensure that research affects clinical practice as appropriate.

Indicator

Research on effective dissemination of information to decisionmakers including patients, clinicians, organizational leaders, purchasers, and public policymakers conducted.

Results

AHRQ research on diabetes and depression was presented in August 1999 and is being used in two projects to improve patient care practices in these areas. Generally, the QuIC is working on methods for ensuring that relevant research from AHRQ, NIH, CDC, SAMHSA, and other organizations is in the hands of the DoD/VA teams that are trying to establish practice guidelines based on the best available clinical information. These guidelines are implemented through automated reminder systems, policy directives, performance measures, and other techniques. They directly affect the care received by DoD and VA beneficiaries, so it is imperative that they be based on the best possible evidence. The QuIC has facilitated the identification of appropriate experts to include in the DoD/VA guideline development processes addressing clinical issues that the DoD and VA have identified as critically important to them.

GPRA Goal 5—Fiscal Year 2000 and 2001 Indicators

Objective | Fiscal Year 2000 Indicator | Fiscal Year 2001 Indicator
Objective 5.1: Conduct research to help measure the current status of health care quality in the Nation.

Data sources identified that will contribute information as part of the mosaic picture of quality of care in the Nation.

Develop and begin to test some questions to be added to the existing data collection activities to provide a better picture of quality.

Develop a framework for the National Healthcare Quality Report.

QI Taxonomy Meeting held under the auspices of the QuIC. Budget: Commitment Base.

Number of grants and contracts funded in Fiscal Year 2001 that will help to fill gaps in the information available to assess the national quality of care, or will help to expand the use of current measures to provide a broader or richer picture of quality. Budget: Improving the Quality of Health Care Delivery Systems; Reducing the Burden of Worker Illness; The National Healthcare Quality Report.

Objective 5.2: Facilitate use of quality information to improve health care in the Nation.

Development of at least one tool that can be used by large group purchasers in assisting their beneficiaries to choose the health care plan, provider, or hospital that best meets their needs.

Number of grants to assess quality improvement strategies. Budget: Commitment Base.

Adoption, by one or more users, of Agency-sponsored research and tools to facilitate consumer/purchaser/decisionmaker use of information about quality. Budget: Commitment Base.

Objective 5.3: Improve quality measurement. Sponsor research to fill the existing gaps in needed measures. Identification of collaborators for research projects on electronic medical records integrated with guidelines (e.g., from the Guideline Clearinghouse) or QI indicators (e.g., CONQUEST, QI Taxonomy project, HCUP measures). Budget: Commitment Base.
Objective 5.4: Improve understanding of how to ensure that research affects clinical practice as appropriate. (Discontinued for Fiscal Years 2000 and 2001.)


 
