Chapter 9. Description of Ideal Evaluation Methods: Selecting Key Domains of Context
Assessing the Evidence for Context-Sensitive Effectiveness and Safety
As previously noted, we lack a universally agreed-upon definition of what constitutes "context." Context can be conceptualized as anything from a discrete number of known constructs (e.g., organizational complexity, patient safety culture) all the way to everything that is currently unexplained or unknown about why a patient safety practice (PSP) implementation succeeds or fails. In our discussions with the technical expert panel (TEP), we constantly found ourselves asking, when considering a particular construct, "Is this context, or is it part of the intervention?" Consequently, we determined that trying to reach agreement on the boundaries of context would not be as fruitful as concentrating on a limited group of constructs that all agreed were important and could be considered contextual variables. Hence, this report offers no overarching definition of what context "is," but rather a determination and discussion of variables believed to be important in understanding PSP implementation and effectiveness that currently do not receive the attention they deserve.
To begin this process, we used existing published papers regarding our five representative PSPs, along with our own knowledge, to compile a long list of potential influences on PSP effectiveness that might be considered context. We next surveyed the TEP in June 2009, asking members to rate the importance of each contextual feature for each of the five PSPs. Based on the results of the survey, we developed a scheme for classifying and selecting PSP contexts for the next phase of the project. We attempted to take into account factors such as mutability (from the organization's perspective), whether the variables were tactical (i.e., tactics that might be used to enhance implementation success), how readily they could be measured, and the evidence base supporting their importance. With input from the TEP, which included an Internet survey and a long teleconference discussion, we shortened the long list to a set of "high priority" contextual variables, which we then organized into four domains:
- External factors. These were each rated separately and rated high in the Internet survey, and they are related to one another. None is mutable from the organization's perspective, although they could be mutable by policymakers.
- Structural organizational characteristics, such as size, complexity, and financial status or strength. These are not mutable and might have a bidirectional effect on PSP implementation: increasing size and complexity may facilitate some PSP implementations while making others more difficult.
- Teamwork, leadership, and patient safety culture. These were each rated separately and rated high in the Internet survey. Although it is unclear how independent these constructs are, each addresses, to some degree, the social or cultural aspects of a PSP implementation context.
- Management/implementation tools, including training resources, internal organizational incentives, audit and feedback, and collaboration with QI consultants. These are all factors that researchers can influence while implementing the intervention.
After reviewing the available literature regarding these contexts and our five representative PSPs (see Appendix A, Part 1, section on contexts), we discussed the contextual variables in more detail with the TEP. As a result of this process, we divided the assessment of contexts into those "important for describing context" and those "important in assessing the effect of context on implementation success." The former category was judged to be important so that health care organizations could better assess the applicability of a PSP implementation to their own institution. We then conducted a second Internet-based survey of the entire TEP to determine which of the 32 contexts the TEP thought had a high priority for data collection. We asked the TEP to consider this question for each of the five PSPs, both when assessing the effect of context on that PSP implementation and when describing context in papers. The results of that survey are summarized in Table 5, which shows the contexts that respondents voted as "high priority" for assessing the effect of context or for describing the context (full results of the survey are in Appendix D).
Tools for Measuring Key Domains of Context
One of the goals of the project is to suggest ways to measure contexts. Many of these contextual features have not posed a measurement problem in prior studies (e.g., size, location, academic status, regulatory requirement, use of audit and feedback). Other contexts do pose a measurement challenge, such as teamwork, leadership, patient safety culture, and organizational complexity. We concentrated on these four in our efforts to determine ways to measure context. To help guide a discussion of how these contexts might be measured, we conducted an extensive search of the health care peer-reviewed and "gray" literatures for measures. The measures we found for teamwork, leadership, and patient safety culture were sufficiently on target to include in subsequent activities. However, we did not find many measures of organizational complexity, even after expanding our search to the organization and business literature, so we could not assess the relative strengths of measures of this context. Development of organizational complexity measures relevant for PSP evaluation remains an area for future work. All measures we found for the four contexts are listed in Appendix E.
Given that there are multiple measures, and no one measure is superior in all respects, expert judgment is needed to help select the measures that might be most appropriate to use. Accordingly, in late November 2009, we surveyed the TEP using another Internet-based survey to determine their opinions on some of the measures we found for teamwork, patient safety culture, and leadership (we did not survey the TEP on the measures of organizational complexity because there were too few). As in our prior survey about contexts, some TEP members abstained on the grounds that they were not expert in this area. We also heard from several TEP members that they did not think the field was sufficiently advanced to recommend specific measures. For example, one TEP member said, "I think this is beyond the scope (of what we can do)," and another said, "this is futile, there are too many to choose from, and the choice of instrument would depend on the nature of the work being done." Other panelists, while acknowledging that the evidence is too thin to support any one particular instrument, argued that providing an expert opinion-based recommendation was still useful: evaluators and researchers must make choices about which instruments to use, and the guidance of experts is better than no guidance at all. One expert put it this way: "Expert opinion is often the best we have at a moment in time, and making use of expert opinion is not, in any logical sense, tantamount to accepting or endorsing its validity. Nor does it preclude further and different work."
In the end, we received between 11 and 14 responses (depending on the context), out of 22 possible participants, to our questions about which measures to use. This means that no measure could have received the 15 votes necessary for us to consider it the TEP recommendation. Furthermore, even when assessed as a proportion of actual respondents, no measure received endorsement above the 75 percent threshold that would have constituted sufficient agreement for a recommendation from those who did respond.
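The two agreement criteria described above can be illustrated with a short sketch. The threshold values (15 of 22 panelists; 75 percent of actual respondents) are taken from the text, but the function and the vote tallies below are purely illustrative, not part of the report's methodology:

```python
# Illustrative sketch only: checks a hypothetical vote tally against the two
# agreement criteria described in the text.
TEP_SIZE = 22                # total panel members
ABSOLUTE_THRESHOLD = 15      # votes required for a panel-wide recommendation
PROPORTION_THRESHOLD = 0.75  # agreement required among actual respondents

def meets_thresholds(votes: int, respondents: int) -> dict:
    """Report which of the two agreement criteria a measure's votes satisfy."""
    return {
        "absolute": votes >= ABSOLUTE_THRESHOLD,
        "proportional": respondents > 0 and votes / respondents > PROPORTION_THRESHOLD,
    }

# With only 11-14 respondents, even a unanimous vote cannot reach 15:
print(meets_thresholds(votes=14, respondents=14))
# -> {'absolute': False, 'proportional': True}

# A measure with 10 of 14 votes also misses the 75 percent bar (10/14 ~ 71%):
print(meets_thresholds(votes=10, respondents=14))
# -> {'absolute': False, 'proportional': False}
```

As the first case shows, the response rate alone ruled out a panel-wide recommendation, which is why the proportional criterion was also examined.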
For these reasons, we conclude that the evidence base is too thin, and agreement among experts insufficient, to make strong recommendations about which measures are preferred for assessing patient safety culture, teamwork, and leadership; this suggests the need for ongoing dialogue among researchers. However, for patient safety culture, the most support was given to the various AHRQ surveys relevant to this topic, plus the Safety Climate Scale1 and the related Safety Climate Survey.2 For teamwork, the most support was given to the ICU Nurse-Physician Questionnaire;3 no other measure received more than half the votes of respondents. Finally, for leadership, the measures receiving the most support were the ICU Nurse-Physician Questionnaire,3 the Leadership Practices Inventory,4 and the Practice Environment Scale.5 No other measure received more than half the votes of respondents. The full results of the survey are presented in Appendix F.
References for Chapter 9
1. Pronovost P, Weast B, et al. Evaluation of the culture of safety: Survey of clinicians and managers in an academic medical center. Qual Saf Health Care 2003; 12:405-10.
2. Kho M, Carbone J, et al. Safety Climate Survey [developed by Sexton et al.]: Reliability of results from a multicenter ICU survey. Qual Saf Health Care 2005; 14:273-8.
3. Shortell SM, Rousseau DM, Gillies RR, et al. Organizational assessment in intensive care units (ICUs): Construct development, reliability, and validity of the ICU nurse-physician questionnaire. Med Care 1991; 29(8):709-26.
4. Tourangeau AE, McGilton K. Measuring leadership practices of nurses using the Leadership Practices Inventory. Nurs Res 2004; 53(3):182-9.
5. Lake ET. Development of the practice environment scale of the nursing work index. Res Nurs Health 2002; 25(3):176-88.