Archive: U.S. Department of Health and Human Services (www.hhs.gov)
Archive: Agency for Healthcare Research and Quality (www.ahrq.gov)
Assessment of the Medical Reserve Corps Program

This information is for reference purposes only. It was current when produced and may now be outdated. Archive material is no longer maintained, and some links may not work. Persons with disabilities having difficulty accessing this information should contact us at: https://info.ahrq.gov. Let us know the nature of the problem, the Web address of what you want, and your contact information.

Please go to www.ahrq.gov for current information.

Methods

In this chapter, we describe the methods RTI used to complete the MRC Program assessment, including the key informant interviews and case studies. We describe our evaluation approach, the criteria for selection of key informants, the criteria for selecting MRC units for the case studies, our data collection protocols, and methods for data management and analysis.

2.1 Evaluation Approach

Our technical evaluation approach (Figure 2-1) is based on Patton's utilization-focused evaluation (UFE) (Patton, 1997), which begins with the premise that an evaluation must first address the needs of its end users to maximize its utility and application. UFE does not advocate a particular set of models, methods, or theories; rather, it is a process by which the evaluator works with the end users to ensure that the evaluation meets high standards for utility, feasibility, propriety, and accuracy. Thus, engaging stakeholders early and throughout the evaluation is critical to the success of an evaluation based on UFE principles. Accordingly, RTI convened an Evaluation Workgroup comprising a diverse group of MRC stakeholders to solicit their input and guidance at the onset of the project.

Figure 2-1. Framework for Program Evaluation

The figure is a circle made up of six numbered steps with an arrow pointing from one step to the next. The steps are: 1. Identify and engage the intended users; 2. Refine the evaluation plans; 3. Develop data collection instruments; 4. Collect evaluation data; 5. Analyze evaluation data; 6. Interpret, share, and disseminate evaluation findings. An arrow points from step 6 back to step 1 to complete the cycle. In the center of the circle is a text box that reads, 'Standards: Utility, Feasibility, Propriety, Accuracy.'

A more detailed discussion of the stakeholder engagement and data collection components is provided later in this report. In Appendix A, we present the evaluation questions and measures/indicators that assess MRC goals and the data that are relevant to the federal key informant interviews. In Appendix B, we present the evaluation questions for the case studies.

2.2 MRC Evaluation Workgroup

Early in the planning phase of the evaluation, RTI convened an Evaluation Workgroup of five key MRC stakeholders. The members of the Evaluation Workgroup are presented in Appendix C. The first Evaluation Workgroup meeting took place on April 6, 2006, in Washington, D.C., with several people attending via teleconference. The objectives of the initial meeting were to assess the relevance and feasibility of the evaluation plan, to identify key issues and topics to be addressed by the evaluation, and to identify key federal-level informants. Subsequent to that meeting, we provided Workgroup members with the opportunity to give feedback on the data collection instruments.

2.3 Federal-Level Key Informants

At the initial meeting, the Evaluation Workgroup recommended a number of agencies, groups, and institutions to be included in the federal key informant interviews. RTI then worked with individual workgroup members and the MRC Program office to identify the appropriate individuals to serve as key informants. We focused on those persons who had worked most closely with the MRC Program office during the demonstration period and would therefore be the most informative for the interviews. We were able to contact or set up an interview with all but one key informant. A list of the agencies and organizations represented in the interviews can be found in Appendix D. Although not all of these informants are employed in federal agencies, they are considered federal-level key informants for the purposes of this evaluation because of their relationships with the federal MRC Program Office.

RTI conducted 30- to 60-minute key informant interviews with 11 federal-level key informants from July to September 2006. The focus of these interviews was to assess interagency coordination and sharing of information, examine the challenges and successes of the MRC Program Office, and obtain a fuller understanding of the programmatic and contextual factors that have shaped MRC and that may impact its future. Interviews were conducted both in person and by phone using a semistructured discussion guide. The interviews were tape recorded with the permission of the key informants. RTI transcribed the interviews, using the recording and/or notes from each interview.

2.4 Case Studies

We conducted case studies of six MRC units that received funding from the MRC Program Office between 2002 and 2005 (the demonstration period). From among the funded units, we selectively recruited six based on characteristics that we hypothesized would influence their performance and experiences. To obtain a broad range of experiences, units were selected based on whether the unit had been activated for an emergency, the MRC unit size, and the type of housing institution (e.g., local/State health department, hospital). The sampling scheme shown in Table 2-1 was developed to represent as completely as possible the diversity of MRC units.

To select units for the case studies, RTI reviewed summarized progress reports for all of the federally funded MRC units during the demonstration period. These progress report summaries for 2003, 2004, and 2005 were provided by the MRC Program Office. Selected data on the chosen units were cross-checked with the basic unit information provided by unit coordinators and posted on the MRC Web site (http://www.medicalreservecorps.gov/FindMRC.asp). Data on the MRC units' organizational homes and size (number of volunteers) were obtained solely from the MRC Web site, because historical data for these categories were not usually available in the progress reports. RTI submitted the identities of the six initially selected MRC units to the Agency for Healthcare Research and Quality (AHRQ) for approval (to ensure that the selected units were not already participating in another ongoing evaluation). Of the initial six units, two were replaced because of known potential conflicts.

Of the four remaining units and two replacement units, several were unable to participate in the case studies because of various issues, including a current emergency response that made it impossible to complete interviews within the project timeline, an ongoing reorganization within the MRC unit and its housing institution, and an inability to contact the individual who had served as unit coordinator during the demonstration period. A brief description of each selected unit is presented in Appendix E.
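
For illustration only, the quota logic behind the sampling scheme in Table 2-1 can be sketched in a few lines of Python. The actual selection was performed manually from progress report summaries and the MRC Web site; all data, field names, and the greedy selection routine here are hypothetical, and only two of the four strata (deployment status and unit size) are modeled for brevity.

```python
# Hypothetical sketch of the purposive sampling scheme; the real selection
# was done by hand. Field names and candidate data are invented.

def size_category(volunteers):
    """Bin a unit by volunteer count, per the report's size thresholds."""
    if volunteers < 90:
        return "small"
    if volunteers <= 300:
        return "medium"
    return "large"

# Target counts for six units across two of the four strata.
targets = {
    ("deployed", True): 3, ("deployed", False): 3,
    ("size", "small"): 2, ("size", "medium"): 2, ("size", "large"): 2,
}

def select_units(candidates, n=6):
    """Greedily pick up to n units whose strata still have unmet quotas."""
    counts = {key: 0 for key in targets}
    chosen = []
    for unit in candidates:
        keys = [("deployed", unit["deployed"]),
                ("size", size_category(unit["volunteers"]))]
        if len(chosen) < n and all(counts[k] < targets[k] for k in keys):
            chosen.append(unit)
            for k in keys:
                counts[k] += 1
    return chosen  # may return fewer than n if candidates run out
```

A greedy fill like this can fall short of the quotas for some candidate pools, which mirrors the report's own experience: the footnote to Table 2-1 notes that the intended strata did not fully match the units' actual characteristics.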

Within each selected MRC unit, RTI planned to conduct semistructured interviews with five to seven individuals. Informants included the MRC unit coordinator, as well as a selection of volunteers representing different professions (e.g., physicians, nurses, counselors) and various local partner agencies (e.g., hospitals, health departments, universities). Unit coordinators were asked to identify volunteers and key informants from partner institutions for interviews. Regional MRC coordinators are appointed by the MRC Program Office and serve as liaisons between the Program Office and the individual units; the regional coordinator responsible for each selected unit was interviewed. Additionally, some States have appointed their own MRC State coordinators to provide assistance to individual units and to liaise with the Program Office. Two units had State MRC coordinators in place during the demonstration period, and they were interviewed in addition to the regional coordinators. Furthermore, two unit coordinators identified only one individual from a partner institution who had worked with the MRC during the demonstration period, and one MRC unit did not identify any volunteers for interviews. Thus, the total number of key informant interviews for the case studies was 34 instead of the planned 36 (Table 2-2). Interviews were conducted between February 27, 2007, and April 4, 2007.

The protocol for coordinating and implementing the key informant interviews was as follows:

  • A structured interview guide, based on the federal key informant interview guide and tailored to the position and role of the key informant, was drafted and submitted to AHRQ and the Office of the Surgeon General (OSG) for review.
  • Detailed progress reports for the selected MRC units were provided by the MRC Program Office to RTI for review.
  • The MRC Program Office sent a letter via E-mail to the unit coordinator and regional coordinator of each selected unit, introducing the evaluation and RTI.
  • A preliminary E-mail was then sent to the selected MRC unit coordinator, describing the case study and encouraging participation. MRC unit coordinators were asked to respond if they agreed to participate and to provide possible interview dates and contact information for volunteers and partners.
  • Once contact information was received, RTI followed up with key informants to schedule 30- to 60-minute telephone interviews and obtain any relevant documents. Because most of the key informants were outside of the Washington, D.C. metropolitan area and central North Carolina, all interviews were conducted via telephone.
  • A senior project member or research assistant conducted the key informant interviews. All interviews were recorded with the permission of the key informant to check and verify interview notes. These tapes will be destroyed at the conclusion of the evaluation.
  • Interview notes were cleaned and edited and returned to each key informant for verification.

2.5 Data Collection Instrument

A semistructured interview guide was developed based on input received from the Evaluation Workgroup and the requirements stipulated in the original request for proposals (RFP). A draft of the interview guide for the federal key informant interviews was submitted to the OSG and the Evaluation Workgroup for review and comment. The comments RTI received were incorporated into the final version of the interview guide, which can be found in Appendix F. The interview guides for the case studies were based on the federal key informant interview guide and were modified to include topics relevant to the local units. Different interview guides were created for the unit coordinator, State/regional coordinators, partners, and volunteers, and can be found in Appendix G.

2.6 Data Coding and Analysis

Key informant interview data were coded with NVivo, a qualitative data analysis program that allowed us to code and produce summaries of all relevant themes. We developed codes that corresponded roughly to questions in the interview guides. Questions that produced similar responses were collapsed into single codes; conversely, we created new codes for responses that emerged independently of the questions posed. Codes were analyzed across all interviews, with the primary intent of identifying commonalities, but we also made note of outlier opinions and ideas. All responses were coded and reviewed by two analysts.
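
The coding workflow described above was carried out in NVivo. Purely as an illustration, the cross-interview step of tallying codes to find commonalities while flagging outliers could be sketched as follows; the code names, informant labels, and outlier threshold are all invented:

```python
# Hypothetical sketch of cross-interview code analysis; the actual work
# was done in NVivo. Informants, codes, and threshold are invented.
from collections import Counter

# Each informant maps to the set of thematic codes applied to their interview.
interviews = {
    "informant_01": {"coordination", "funding", "training"},
    "informant_02": {"coordination", "funding"},
    "informant_03": {"coordination", "liability"},
}

def summarize_codes(coded, outlier_max=1):
    """Count how many interviews mention each code; codes mentioned by
    outlier_max or fewer informants are flagged as outlier opinions."""
    counts = Counter(code for codes in coded.values() for code in codes)
    common = {c: n for c, n in counts.items() if n > outlier_max}
    outliers = {c: n for c, n in counts.items() if n <= outlier_max}
    return common, outliers
```

The split into common themes and outliers reflects the analytic intent stated above: identify commonalities first, but keep isolated opinions visible rather than discarding them.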

Table 2-1. MRC Site Selection Criteria

Site Selection Criteria                  Category                        No. of MRC Units*
Emergency deployment                     Deployed                        3
                                         Not deployed                    3
Organizational home                      Health department               1
                                         Hospital                        1
                                         Emergency services              2
                                         Other                           2
MRC unit size                            Small (<90 volunteers)          2
                                         Medium (90-300 volunteers)      2
                                         Large (>300 volunteers)         2
Non-emergency public health activities   Performed                       3
                                         Not performed                   3

*Numbers reflect intended selections. Interviews revealed that progress reports/Web site data were not completely accurate for the demonstration period. Differences were as follows: only one MRC unit was based in emergency services and three had organizational homes in the "other" category; one MRC unit was small and three were medium; and five units performed at least one non-emergency public health activity.

Table 2-2. Number and Type of Case Study Key Informants

Key Informant Role Number Interviewed
MRC State coordinator 2
MRC regional coordinator 6
MRC unit coordinator 6
MRC volunteer 10
MRC partner institution 10
Total 34
