Chapter 1. Introduction

Assessing the Evidence for Context-Sensitive Effectiveness and Safety

Patient safety research is a fairly young field that received substantial investment in the United States after the Institute of Medicine's 1999 landmark report To Err Is Human sounded the alarm and resonated with the public, who heard the salient sound bite that one "jumbo jet" of patients dies each day from a medical error. In rapid response, the lead Federal agency for health care quality research, the Agency for Healthcare Research and Quality (AHRQ), commissioned an evidence-based practice center (EPC) team of researchers to develop an evidence report of "patient safety practices" (PSPs). The resulting report, Making Health Care Safer: A Critical Analysis of Patient Safety Practices, identified 79 practices, ranging from targeted clinical interventions (e.g., the use of antibiotic-impregnated catheters to prevent urinary tract infections), to clinical procedural enhancements (e.g., visualizing central line placement to avoid inadvertent lung puncture), to broad system changes (e.g., promotion of a culture of safety and teamwork to reduce a range of possible failures in patient safety).1 Because the evidence for the effectiveness of these PSPs was scant, according to the established evidentiary review lens available to the EPC at that time (which was in use by the global consortium of systematic review experts, the Cochrane Collaboration and its Effective Practice and Organisation of Care (EPOC) group),2 the complex, systems-oriented PSPs did not rise to the top of the list of PSPs recommended for further implementation.

These EPC recommendations, while explicitly based on only one potential approach to distilling the evidence on practices, stimulated an important debate about whether the evidentiary lens needed adjustment for application to patient safety practices. The issues on either side of the debate are well presented in two "Controversies" articles published in JAMA in 2002,3,4 and taken up again more recently in other publications.5-10 The interest in determining the approach to evidence evaluation for patient safety was also highlighted at the Second National Summit on Patient Safety Research sponsored by the Quality Inter-Agency Coordination Task Force (QuIC) in November 2003.11 The panel reinforced that it is often not possible or practical to evaluate implementation performance using randomized controlled trials (RCTs), and as a result, concluded that other types of research designs should be considered. In addition, the panel recommended that AHRQ develop standards for patient safety research, as well as for synthesis of a body of evidence on a given PSP, based on a range of suitable research designs and analytic methods. Over the ensuing years, further efforts by national and international organizations (e.g., the National Quality Forum (NQF), the Joint Commission's International Center for Patient Safety) have focused on approaches to identifying, prioritizing, and recommending further development and dissemination of PSPs or "Patient Safety Solutions."12,13

AHRQ initiated the current project to respond to the debate on what constitutes evidence in patient safety by engaging in a structured process with experts from a broad range of pertinent fields, including human factors, organizational behavior, management sciences, public health, evaluation sciences, implementation sciences, biostatistics, clinical medicine, evidence-based medicine, and patient safety. The overarching charge to the research team and expert panel was to assist AHRQ in developing "criteria for assessing the evidence base for the context-sensitive effectiveness and safety of patient safety practices," or how the contexts (within which a patient safety practice is implemented) can affect the effectiveness of that implementation. This charge emanates from a well-articulated rationale described by AHRQ per the Request for Proposal (RFP) that guides this project, and it is summarized by the project team in Figure 1.

The diagram's upper part displays a PSP, its context, and potential results from localized testing or full-scale rollout. The lower half of the diagram stylizes the key components of evaluation and how they need to fit together in evidence synthesis. The middle line represents the needs of patient safety stakeholders for criteria to assess which patient safety practices work and in what context. Essentially, this project aims to strengthen the line between the top and bottom halves of this diagram. Each part of the diagram is described further in the following sections to provide a brief rationale for the project.

Diagram Component: Patient Safety Practices

In the RFP that led to this project, AHRQ defined patient safety practices as "interventions, strategies, or approaches intended to prevent or mitigate unintended consequences of the delivery of health care and to improve the safety of health care for patients. PSPs may include clinical interventions, systems' organizational and behavioral interventions, and various combinations of these."

As implied by this definition, PSPs often include components that are constructed differently at different points in time or in different settings. Figure 1 shows a generic PSP, with small empty boxes as placeholders to describe the PSP's components. The definition also highlights the diversity of PSPs and the potential for combining them to develop new PSPs.

Diagram Component: Context

The oval around the PSP in Figure 1 represents the organizational, behavioral, and broader environmental context in which the PSP is embedded. Numerous leaders in patient safety research have articulated the importance of context. In a forthcoming review for the World Health Organization, John Øvretveit and colleagues state that an intervention's effectiveness and safety may vary according to context because of implementation differences, the need for adaptation of implementation, and interactions between contextual factors and the intervention, which result in modification to the intervention over time (Personal communication). Some PSPs address specific evidence-based therapies, while other PSPs are more abstract or diffuse, such as "training clinicians in teamwork." Local factors (such as staffing considerations) may require changes in order to make the PSP implementable given the local or wider context. Thus, interventions that appear to be the same or carry the same label may in fact be quite different when implemented in various places and timeframes; and such differences may account for different outcomes. These considerations support a requirement that studies provide precise descriptions of the evaluated intervention, along with relevant features of the intervention context, including implementation processes.

For many complex interventions, there is a paucity of information about context and its interplay with the PSP. For example, the 2006 AHRQ EPC report on health information technology (health IT) by Shekelle and colleagues found that the interventions studied included not just the technical aspects of the computer and software, but also the human factors, the project management, and the organizational process change; and that these contextual factors were not adequately described, making it difficult for others to apply the study results to actual health care settings.14 In another AHRQ EPC project on care coordination interventions to improve health care quality and patient safety, McDonald and colleagues noted the need for context-flexible evaluations tied to theory, as well as to the actual needs of quality improvement implementers.15 The authors called for more detailed descriptions of both the interventions and the contexts in which they were tested to make any conclusions about outcomes more readily interpretable to those choosing potential intervention strategies for their particular circumstances. Thus, EPC investigators have also recognized the importance of context.

There is no standard definition of "context." It may include detailed information about processes of implementation, as well as barriers and facilitators related to the organizational and policy environment in which a PSP is implemented. These factors have been shown to be critically important to understanding the success or failure of a PSP. For example, Pronovost and colleagues discussed the importance of considering local context while maintaining standardized measures and evidence in their effort to reduce bloodstream infections in Michigan.16 They found that it was both efficient and effective to standardize the technical aspects of quality improvement programs while encouraging local modification of how the evidence is put into practice. Similarly, a recent evaluation of the World Health Organization (WHO) surgical checklist found an overall reduction in adverse events, yet this result was not consistent at all sites. Evaluation of implementation effectiveness, and of barriers and facilitators, will be important in attempts to disseminate the WHO surgical checklist across the world.17

Diagram Component: Results

Use of a PSP in a particular context may result in positive and negative outcomes (including unintended harms), shown as effectiveness and harms, respectively, in the box on the right-hand side of Figure 1. In addition to these critical outcomes, other potentially important effects include those related either to implementation (e.g., uptake, cost, and ease of initial implementation) or to widespread adoption and spread of a PSP. Figure 1 is a simplification, but nevertheless, it posits that the effectiveness, safety, and other outcomes of a PSP may be affected by its specific components; where, when, and how the PSP is implemented; with whom and for what purposes the PSP is used; and features of the external environment or larger context.

Diagram Component: Criteria and Knowing What Works

The middle dotted line in Figure 1 sets up the overarching objective of the current project. To inform stakeholders interested in improving patient safety about what works in which contexts, AHRQ has called for context-sensitive criteria to assess PSPs. Therefore, the goal of developing criteria and guidance on evaluations of PSPs is to understand the relationships between PSPs and their intended and unintended results in particular contexts and configurations. Specifically, the agency suggests that:

Establishing more appropriate criteria for evidence reviews of patient safety practices can be expected to have three closely related effects. First, the criteria should broaden the scope of patient safety practices that can be assessed for effectiveness and safety based on scientific evidence. Second, the availability of the criteria will strengthen research studies that are assessing those practices. Third, if developed in a way that is usable to implementers of patient safety practices beyond researchers (e.g., individual clinicians, health policymakers, and patient advocates), criteria can be applied in situations where PSPs should be evaluated for individual and institutional learning without regard to publication in peer-reviewed journals (per the RFP for this project).

Diagram Component: Evaluation and Synthesis

For context-sensitive evaluation of PSPs, evidence synthesis promises to assemble information from individual studies, ultimately determining how to draw together information for each of the four puzzle pieces (Figure 1):

  • Constructs about the PSP, its components, context factors, outcomes, and ways to measure these constructs accurately.
  • Logic model or conceptual framework about the expected relationships among these constructs.
  • Internal validity to assess the PSP results in a particular setting.
  • External validity to assess the likelihood of being able to garner the same results in another setting.

A number of individual studies, with a broad range of research and evaluation designs, may be needed to satisfactorily answer the many questions of interest to the patient safety field for a given PSP. Initial key questions for evaluation and synthesis put forth by AHRQ include those focused on effectiveness, implementation, and adoption or spread.

In summary, this project aims to advance the patient safety field by using targeted literature reviews and structured expert panel consultation. The report presents an initial conceptual framework, initial criteria, and a path toward developing a comprehensive set of rigorous evaluation criteria for assessing and guiding studies on PSPs and their contexts. Based on the framework and criteria, we identify the types of research and evaluation models and methods that experts judge to be most useful for advancing the field of patient safety. We develop specific criteria for assessing the rigor of individual studies; we also lay out methods and criteria for synthesizing sets of studies to assess the overall body of evidence related to specific PSPs and their contexts. Finally, we identify issues and questions for future analysis of and research on PSP methodology.

The litmus test for the project will ultimately come from those on the frontlines of patient safety improvement efforts. What information will help those who are accountable for their health system or the Nation's performance in terms of health care quality and patient safety? What methodology guidance will enable those who are conducting systematic evidence reviews to address key questions about PSPs? What material in the report will ease the process of primary knowledge generation for researchers and evaluators of PSP interventions? What take-home messages will support research funders' ability to continue to move the field forward to its ultimate goal of making health care substantially safer for the public? These questions shape the reporting of our approach and recommendations in the subsequent chapters.

References for Chapter 1

  1. Shojania K, Duncan B, McDonald K, Wachter R, eds. Making health care safer: A critical analysis of patient safety practices. Evidence Report/Technology Assessment No. 43 (AHRQ Publication No. 01-E058). Rockville, MD: Agency for Healthcare Research and Quality; 2001.
  2. Bero LA, Grilli R, Grimshaw JM, et al. Closing the gap between research and practice: An overview of systematic reviews of interventions to promote the implementation of research findings: The Cochrane Effective Practice and Organization of Care Review Group. Br Med J 1998; 317:465-8.
  3. Leape L, Berwick D, Bates D. What practices will most improve safety? Evidence-based medicine meets patient safety. JAMA 2002; 288(4):501-7.
  4. Shojania K, Duncan B, McDonald K, Wachter R. Safe but sound: Patient safety meets evidence-based medicine. JAMA 2002; 288(4):508-13.
  5. Auerbach A, Landefeld C, Shojania K. The tension between needing to improve care and knowing how to do it. N Engl J Med 2007; 357(6):608-13.
  6. Berwick D. The science of improvement. JAMA 2008; 299(10):1182-4.
  7. Landefeld CS, Shojania KG, Auerbach AD. Should we use large scale healthcare interventions without clear evidence that benefits outweigh costs and harms? No. Br Med J 2008; 336(7656):1277.
  8. Crump B. Should we use large scale healthcare interventions without clear evidence that benefits outweigh costs and harms? Yes. Br Med J 2008; 336(7656):1276.
  9. Pronovost P, Wachter R. Proposed standards for quality improvement research and publication: One step forward and two steps back. Qual Saf Health Care 2006; 15:152-3.
  10. Davidoff F. Heterogeneity is not always noise: Lessons from improvement. JAMA 2009; 302(23):2580-6.
  11. Quality Interagency Coordination Task Force (QuIC). Second National Summit on Patient Safety Research; November 2003. Accessed August 4, 2008.
  12. Safe practices for better healthcare: 2008 update. Washington, DC: National Quality Forum; 2008.
  13. Patient Safety Solutions—Development Process. Washington, DC: The Joint Commission; 2008.
  14. Shekelle PG, Morton SC, Keeler EB. Costs and benefits of health information technology. Evidence Report/Technology Assessment No. 132. (Prepared by the Southern California Evidence-based Practice Center under Contract No. 290-02-0003.) AHRQ Publication No.06-E006. Rockville, MD: Agency for Healthcare Research and Quality; 2006.
  15. McDonald KM, Sundaram V, Bravata DM, et al. Care coordination. In Shojania KG, McDonald KM, Wachter RM, Owens DK, eds. Closing the Quality Gap: A Critical Analysis of Quality Improvement Strategies, Volume 7. Technical Review 9 (Prepared by the Stanford University-UCSF Evidence-based Practice Center under contract 290-02-0017). AHRQ Publication No. 04(07)-0051-7. Rockville, MD: Agency for Healthcare Research and Quality; 2007.
  16. Pronovost P, Needham D, Berenholtz S, et al. An intervention to decrease catheter-related bloodstream infections in the ICU. N Engl J Med 2006; 355(26):2725-32.
  17. Haynes AB, Weiser TG, Berry WR, et al. A surgical safety checklist to reduce morbidity and mortality in a global population. N Engl J Med 2009; 360(5):491-9.
Page last reviewed December 2010
Internet Citation: Chapter 1. Introduction: Assessing the Evidence for Context-Sensitive Effectiveness and Safety. December 2010. Agency for Healthcare Research and Quality, Rockville, MD.