Chapter 2. Methods
Assessing the Evidence for Context-Sensitive Effectiveness and Safety
This project was a collaboration between the project team and an interdisciplinary group of patient safety and methods experts, the Technical Expert Panel (TEP). Each key step of the project required preparatory work by the project team, followed by consideration and discussion by the TEP; the project team then synthesized the TEP's discussion and decisions. The figures on the following pages give an overview of the methods, which are described more fully in Appendix A, Part 1.
The five goals of the project were to:
- Form an interdisciplinary panel of experts to assist with all phases of the project.
- Identify a diverse and representative set of patient safety practices (PSPs) to serve as initial subjects. As noted in the RFP, the PSPs are intended "to help iteratively develop criteria for rigorous and systematic assessment of the context-sensitive effectiveness and safety of PSPs. They should be in actual use, promising in terms of underlying logic models for achieving effectiveness, safety, and generalizability, address high impact and diverse patient safety problems, and represent the contexts in which patient safety is an important concern."
- Identify research and evaluation models, methods, and designs to rigorously evaluate the identified patient safety practices. As the RFP directed, "in considering research designs and methods," the project should "identify or develop approaches that measure contexts and implementation processes in PSP interventions and suggest how collection of contextual and process data needed for assessing the generalizability of the PSP can be combined with designs that are strong on internal and construct validity. Pay close attention to assessing both the positive and negative impacts of PSPs. Pay close attention to identifying appropriate measures of aspects of the PSP."
- Develop a set of criteria, including criteria for strength of evidence, to be used for assessing future studies and reports. Criteria are necessary to guide both (a) future assessments of evidence and safety relative to the effectiveness, implementation, and adoption of the identified types of PSPs; and (b) systematic reviews of patient safety evidence.
- Identify specific needs for future development of theories, constructs, and research/evaluation designs and methods to further strengthen evaluations of PSPs and criteria for systematic review.
As shown in Figure 2, to select the "diverse and representative patient safety practices" (goal 2 above), the project team used the literature, expert input, and information from other sources (such as the project officer and the RFP) to develop a list of candidate PSPs. The TEP then voted on this list, and the project team used the results to select four PSPs based on criteria such as setting and regulated use, presented in more detail later. Questions remained about the need for a possible fifth PSP, which was also put to the TEP in an e-mail vote. This process led to the final set of five diverse and representative PSPs, which the TEP affirmed.
As shown in Figure 3, regarding evaluation questions, the project team (again drawing on the literature, expert input, and its own experience in quality improvement and patient safety research) developed a draft monograph proposing three basic types of evaluation questions. The TEP reviewed this monograph and discussed it at the July 17, 2009, TEP teleconference; a revised set of evaluation questions was then prepared reflecting the TEP's input.
As shown in Figure 4, regarding study designs, the project team used the existing literature and input from experts, including key methods experts on the TEP, to develop a framework of study designs linked to evaluation questions and contexts. Study design remained a topic of discussion at the July 17, 2009, TEP teleconference and at the November 4-5, 2009, face-to-face TEP meeting. The results informed the report chapter on study design, presented in Appendix I, as well as the criteria for evaluating the body of evidence. One important outcome of this process was the TEP's recognition that prior arguments framing the issue as "randomized controlled trials" versus "observational study designs" obscured important elements of assessment that should be included in any well-done evaluation; another was the TEP's agreement on which of those assessment elements are most critical.
As shown in Figure 5, regarding the selection of contexts, the project team again used the existing literature, theory, and expert input to develop a candidate list of contexts potentially important for assessment in this project. The list was shortened based on TEP input from an Internet survey and discussion at the July 17, 2009, TEP teleconference. The project team then conducted a literature review of the shortened list, assessing the evidence for the influence of these contexts on implementation effectiveness or outcomes. This information helped guide a TEP discussion at the November 4-5, 2009, meeting. Subsequently, the TEP completed a revised Internet survey, resulting in the final list of contexts.
As shown in Figure 6, regarding the selection of criteria for assessing context sensitivity, the project team took the shortened list of contexts and reviewed available methods of measuring the key contexts that pose measurement challenges (teamwork, leadership, patient safety culture, and organizational complexity). The TEP then used this literature review, together with the review of evidence developed in Figure 5, to select criteria for measuring these contexts, through discussion at the November 4-5, 2009, TEP meeting and a subsequent Internet survey.
Finally, to identify specific needs for future development and research, we first surveyed the project team. We then received feedback from the project officer as well as the project team before surveying the entire TEP after the November 4-5, 2009, meeting.