Evaluation of AHRQ's Partnerships for Quality Program

This information is for reference purposes only. It was current when produced and may now be outdated. Archive material is no longer maintained, and some links may not work.


Chapter II. Evaluation Framework, Methods, and Data Sources

A. Evaluation Framework

To guide the evaluation, we developed a conceptual framework that identifies key participants, the way they are linked, and the critical questions of interest from each participant's perspective. Figure II.1 presents this framework.

The framework highlights that PFQ's success depends on the successful interaction of four core participating groups whose contributions are essential to improving quality of care: 1) AHRQ, 2) the lead grantee organizations, 3) the relevant collaborators and targets for each grantee's efforts, and 4) the coordinating activities put in place by AHRQ to foster overall program goals and link PFQ to AHRQ's broader quality agenda and objectives.

The second dimension of the framework involves a series of relevant tasks, decisions, and communications that each actor/program component must successfully execute if PFQ is to achieve its goals.  Specifically:

  • AHRQ needs an infrastructure to support the program and ensure that it is well-linked to the agency's overall goals. AHRQ must establish effective project officer guidance and oversight of each grantee, along with effective overall program management and linkages to other AHRQ activities.  Grants management needs to support the program, and PFQ staff must be able to access needed resources (financial and otherwise) on a timely basis.
  • Lead grantee organizations are the link between AHRQ and those in the field whose involvement is pivotal to quality improvement.  The chosen organizations need to be well-situated to influence their constituencies and must demonstrate access to the appropriate collaborators and communication channels, as well as the existence of working agreements—all of which are prerequisites for change.  But focus is critical to change. Though the specific focus will vary across grantees, each grantee needs an effective focus relevant to those it seeks to influence and the focus must be suited to making concrete improvements in quality. In addition, activities need to evolve over time to generate increasing impacts appropriate to the project's span and its goals.
  • Collaborators, target organizations and providers are the places where care is delivered, and are the core stakeholders. Their involvement is essential to individual grantees' strategies for improving care quality. Improvements in quality cannot occur unless targets "buy in" to the grantee's goals and are provided with the motivation and support to achieve them. To achieve AHRQ's goal for PFQ, these changes in practice or in purchaser decision making need to be sustained and ultimately diffused more broadly both in and across individual organizations.
  • Coordinating activities are those efforts carried out by AHRQ or others aimed at helping grantees learn from one another and linking PFQ's work to the broader quality agenda. They include the PFQ Web site (run by the National Institutes of Health and used to foster electronic communication), the PFQ database, and AHRQCoPs and its subcommittees. To be effective, coordinating activities need to be well-structured, well-regarded and well-supported by those whose involvement and participation is critical, and well-targeted to support substantive contributions to PFQ goals.


B. Overview of Evaluation Design

Our approach to the evaluation involved a combination of document review, interviews, and limited observation of selected AHRQCoPs meetings. The intent was to use these sources to capture information on how each component was executed and what factors facilitated or impeded work. The evaluation is largely qualitative in nature. However, to the extent grantee progress reports and self-assessments include concrete measures of the "reach" and impact of their efforts on process or outcome measures, we include them in this report.

Table II.1 summarizes the overall evaluation design.  It shows the four key questions or areas of interest described in Chapter I, the key measures that are relevant to answering them as derived from the evaluation framework, the data sources that were used to create the data needed on each measure, and how the analysis was conducted. 


C. Sources of Information

1. Program and Grantee Documents

At the start of the evaluation, MPR worked with AHRQ staff to gather documentation about the program and about each grantee. 

Program Documents. We could not obtain documents describing PFQ's history, purpose, and design. While we had access to the RFA, we were unable to review the original documents detailing the idea behind the program, such as e-mails or internal memos summarizing the discussions that occurred during development of the RFA about the program's purpose, focus, and targeted participants.

Leadership indicated that because the program was taking AHRQ in a new direction of translating research into practice, the processes for reviewing/scoring applicants and selecting grantees required new methods that diverged from the traditional AHRQ methods. Unfortunately, we were unable to obtain documentation that may have explained how these processes differed from the agency's traditional methods (for example, list of technical reviewers, technical review scores for applicants, AHRQ's executive management meeting (EMM) notes). What we were able to learn about the genesis of the program and grantee selection primarily came from interviews with AHRQ staff, discussed in Chapter I. AHRQ's Office of Grants Management generated a spreadsheet of total funding given to each grantee over PFQ's four years.

Grantee Documents. We had greater success accumulating materials on individual grantees, including original applications, annual renewal applications, technical reviewer comments (when available), quarterly progress reports, funding recommendations, and funding awards for all 20 of the PFQ grants. MPR staff went on-site to AHRQ's Office of Grants Management, which housed grantee documents, to sort through and copy relevant materials from grantee files. Not all files were complete, either because PIs did not submit all the quarterly progress reports or because grantee POs did not forward copies to Grants Management for filing. To conserve resources for this unbudgeted task (the evaluation RFP had indicated AHRQ would provide materials), MPR staff read materials for all 21 grants initially funded and copied the documents that seemed most relevant, such as those listed above. As a result, some materials attached in appendices, such as survey tools that supplemented the progress reports, were not copied.

To provide a concise overview of each project's focus, progress, and results, we drafted summaries of each grant project using the documents available to us.  We supplemented the summaries with information from interviews with grantee PIs and partners and materials provided after the interviews, such as progress reports, project data/outcomes, articles, and presentations. We provided PIs the opportunity to review and comment on our draft summaries before finalizing them for this report.  See Appendix B for the final summaries of all PFQ projects, containing information on project goals, activities, partners and partnership functioning, results, major products, and potential for sustainability or follow-on projects.

Program Tools.  AHRQ gave MPR access to the PFQ Web site that had information on grantee projects, subcommittee notes and tools, and an events calendar. The Web site also contained a checklist for the database that grantees used to enter information about their projects. MPR staff reviewed the PFQ database information to extract information on grantee partners, tools, and target populations as entered in June 2004, shortly after the database was created.  But MPR could not use the database to track grantee progress, since few grantees updated the information.

MPR also had access to other parts of the PFQ Web site, which was used as a tool for communication among grantees as well as a central storage area for work related to AHRQCoPs. Since grantees found e-mail or telephone calls to be more convenient as a method of communication, the Web site was not widely used, though there are several documents on AHRQCoPs work products, such as an evaluation framework and implementation assessment tool, and meeting minutes from the AHRQCoPs' semi-annual conferences.

2. Interviews

We interviewed AHRQ staff and individuals associated with each grantee to support this evaluation. Notes from the interviews were coded by major topic and entered into Atlas.ti, qualitative data analysis software, which we used to analyze themes across grants and interviewees.

  • AHRQ Staff. We interviewed 17 AHRQ staff: 4 current and former staff involved in PFQ program development and grant selection, about the program's history and goals; 9 current project officers and 1 former project officer overseeing grants, about their roles and their views of grantee and program success; 2 staff members from the Office of Grants Management, about managing the grants; and 1 representative from the Office of Communications and Knowledge Transfer, about program and grantee plans for information dissemination.5 Interviews ranged from 30 to 60 minutes and were conducted in fall 2005, early in the evaluation. We conducted a longer interview with the program director, who also served as a program project officer. Most interviews were in person at the AHRQ offices; the rest were by telephone. Topics for each type of interview are shown in Table II.2. Two MPR staff members participated in each interview: the project director and an analyst who took notes and documented the interview for use in the evaluation.
  • Grantees and Affiliates. We conducted in-depth telephone interviews with 19 of the 20 grant principal investigators.  For the remaining project, we spoke with primary project staff who were knowledgeable about the grant work. Most of the grantee PI interviews lasted 90 to 120 minutes. In addition to speaking with the PI, we spoke with people who were considered partners or collaborators for the grantee projects. 

The number of partner interviews scheduled for each project was determined after reviewing documents and holding interviews with PIs, whom we consulted on which partners were important for us to contact. For projects in which the activities were primarily research or the partners were not involved to a significant degree, only the PI and one or two other people were interviewed. For more elaborate projects with diverse types of partner organizations, we interviewed three to five partners per project. Most interviews with partners lasted 30 to 60 minutes.

The purpose of the PI interviews was to obtain additional details on grant-related activities and on partnership structure and functioning that would complement the information in grantee reports. The interviews with PIs and partners covered the same general topics: grant history and rationale, the evolution of project goals and activities, project accomplishments, partnership functioning, AHRQ support, and perceived sustainability of project activities. However, the PI interviews covered the topics in more depth and were used to gather factual information on the project's progress as well as the PI's perceptions of the grant experience. The partner interviews covered the topics in less depth and were used primarily to collect the partners' perceptions of the grant experience. Table II.3 presents a list of topics.

In total, we conducted 76 interviews, including 19 grantee PI interviews and 57 partner interviews.  Given the number of grants, we decided to conduct the interviews in waves, with earlier interviews focused on grants that had been completed earliest so there might be results to discuss.  At the time this report was written, 12 grantee projects had been completed, 7 had received no-cost extensions, and one had requested a no-cost extension.6


D. Key Constraints and Limitations

The evaluation was constrained by a number of factors.  These included:

  • A Late Evaluation Start.  While the program began in October 2002, the evaluation did not begin until October 2005.  As discussed previously, the late start meant that our ability to understand the origins of the program was limited, as many key decisions were not documented and the facts were elusive.  We also were unable to observe the evolution of AHRQCoPs directly since all but two meetings occurred before the evaluation began.
  • Limited Primary Data Collection. Our evaluation relied on grantees' own evaluations of their success, and each grantee designed its evaluation differently, capturing different information. In many cases, evaluations were not complete when our evaluation report was due, and some investigators were more willing than others to share early findings with us.
  • Limited Documentation. While grantees were required to file quarterly and annual reports, they varied in both the completeness and the timeliness of their submissions. The reports also were not always forwarded to the grants office and placed in the grantee's official file.
  • Grantee Diversity. The diversity of grantees and their foci of interest made the evaluation challenging. Individual grantees not only focused on different substantive areas of translation, but also differed greatly in how they defined success and in the strategies they pursued to achieve it. This meant that the appropriate metrics for evaluating each grantee's results were not the same.
  • Timing. AHRQ wanted to get formative feedback from PFQ as early as possible and structured the evaluation to provide results soon after the formal end of the program. This timing, together with the sheer number of grantees, meant that many interviews were conducted well before grantees finished their work. Though we were able to ask grantees for updates on their experience in early October 2006, this still was too soon for some to have finished their evaluations. Ultimately, of the 20 grantees, 12 (8 of those with clinical improvement goals, and 4 of those producing bioterrorism preparedness studies) were able to provide some preliminary results or outcomes in time to include in this report. Most of the other eight projects had information about their reach into target providers, lessons about the implementation process, or some indication of the likelihood of sustainability or further diffusion of their approach.

5. We attempted but were unable to secure an interview with a former staff member who oversaw the technical review process to gain additional insight into how the process differed from AHRQ's traditional methods.

6. Information provided by an AHRQ Grants Management Office report, created October 23, 2006.  If there was a discrepancy between information provided by the PI and the report, we assumed the Grants Management report had the most updated information.


