Chapter 2. Drawing on Safety Practices from Outside Healthcare
The medical profession's previous inattention to medical error, along with other publicized deficiencies (such as a notable lag in adopting sophisticated information technologies), has invited unfavorable comparisons between healthcare and other complex industries.1-5 The first of the two recent Institute of Medicine (IOM) reports on the quality of healthcare in America, To Err is Human: Building a Safer Health System,3 states that "healthcare is a decade or more behind other high-risk industries in its attention to ensuring basic safety." Consequently, one of the goals of this project was to search these other industries for evidence-based safety strategies that might be applied to healthcare.
The relatively short timeline of this project necessitated a focused approach to the search for potentially applicable patient safety practices in non-healthcare writings. Fortunately, many relevant practices have received at least some analysis or empirical study in the healthcare literature. As a practical solution, we present original articles from outside healthcare as foundational and background material, rather than as a primary source of evidence. Specific topics and practices reviewed in Making Healthcare Safer that clearly derive from fields outside healthcare include:
- Incident reporting (Chapter 4).
- Root cause analysis (Chapter 5).
- Computerized physician order entry and decision support as a means of reducing medication errors (Chapter 6).
- Automated medication dispensing systems (Chapter 11).
- Bar coding technology to avoid misidentification errors (Subchapter 43.1).
- Aviation-style preoperative checklists for anesthesia equipment (Chapter 23 and Subchapter 41.3).
- Promoting a "culture of safety" (Chapter 40).
- Crew resource management, an approach to teamwork training and crisis response modeled after training methods in aviation (Chapter 44).
- Simulators (of patients or clinical scenarios) as a training tool (Chapter 45).
- Human factors theory in the design of medical devices and alarms (Chapter 41).
Many readers may still wonder at the relative paucity of safety practices drawn from non-healthcare sources. While the headline-grabbing assessments of medicine's safety have been criticized by researchers and likely overstate the hazard to patients,6-8 it is undeniable that some industries, most notably commercial aviation, have safety records far superior to that of healthcare. One issue we faced in compiling this evidence-based review was the extent to which specific practices could be identified as playing a direct and measurable role in this achievement. Interestingly, the same issue—ascertaining a causative variable—arose in reviewing the literature on anesthesia, likely the one field of medicine with a safety record that rivals aviation's (see also Chapter 56).
As outlined in Chapter 24, significant complications attributable to anesthesia9-12 have decreased to the point that major morbidity and mortality are now too rare to serve as practical endpoints for measuring the quality of anesthesia care, even in large multicenter studies.13,14 In attempting to account for this decrease, however, it is very difficult to find evidence supporting a causative role for even the most plausible candidates, such as widely utilized intraoperative monitoring standards.15 In other words, while the field of anesthesia has clearly made tremendous strides in improving patient safety over the past 50 years, it is hard to discern a particular, isolated practice that accounts for the clear and dramatic secular change in its safety. At one level, a pragmatist might argue, "who cares, as long as it's safe," but this tenuous causality makes it more challenging to apply the lessons of anesthesia (or, for that matter, aviation) to the rest of healthcare.
Some might argue that, rather than pinpointing specific practices to embrace from other industries, healthcare institutions should emulate organizational models that promote safety in complex, high-risk industries that manage to operate with high reliability.16-22 Analyses of detailed and interesting case studies have fueled a school of thought known as high reliability theory, whose proponents suggest a number of organizational features that likely reduce the risk of "organizational accidents" and other hazards. A cogently argued alternative position, often called normal accident theory, questions not only these prescriptions for organizational change, but fundamentally challenges the idea of high reliability in certain kinds of complex, "tightly coupled" organizations.23,24 These competing schools of thought offer interesting and valuable insights into the ways that organizational strategies foster safety, while cautioning about the ever-present threat of new sources of error that come with increasingly complex human and technical organizations. Unfortunately, this rich literature does not permit ready synthesis within the framework of evidence-based medicine, even using the less stringent standards we adopted in evaluating non-medical literature (see Chapters 1 and 3).
Even the more engineering-oriented of the disciplines with potential relevance to patient safety yielded a surprising lack of empirical evaluation of safety practices. For instance, numerous techniques for "human error identification" and "error mode prediction" purport to anticipate important errors and develop preventive measures prospectively.25-27 Their basic approach consists of breaking down the task of interest into component processes, and then assigning a measure of the likelihood of failure to each process. Many of the techniques mentioned in the literature have received little detailed description25,26 and few have received any formal validation (e.g., by comparing predicted failure modes with observed errors).28,29 Even setting aside demands for validation, the impact of applying these techniques has not been assessed. Total quality management and continuous quality improvement techniques were championed as important tools for change in healthcare based on their presumed success in other industries, but evaluations of their impact on healthcare have revealed little evidence of success.30-33
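The basic approach described above can be made concrete with a minimal sketch. The decomposition and aggregation below illustrate the general logic of human reliability quantification only; the task steps and probabilities are hypothetical assumptions for illustration, not values drawn from THERP, HEART, JHEDI, or any other validated technique.

```python
# Sketch of the generic "human error identification" workflow:
# decompose a task into component processes, assign each a
# probability of failure, and aggregate into an overall estimate.

def task_failure_probability(step_probs):
    """Probability that at least one step fails, under the
    (strong) assumption that steps fail independently:
    P(task fails) = 1 - product(1 - p_i)."""
    p_all_succeed = 1.0
    for p in step_probs:
        p_all_succeed *= (1.0 - p)
    return 1.0 - p_all_succeed

# Hypothetical decomposition of a medication-administration task;
# all step names and probabilities are illustrative assumptions.
steps = {
    "read order": 0.001,
    "retrieve drug": 0.002,
    "verify patient identity": 0.003,
    "administer dose": 0.001,
}

overall = task_failure_probability(steps.values())
print(f"Estimated task failure probability: {overall:.4f}")
```

The independence assumption is itself one of the points of contention in this literature: real errors often share common causes (fatigue, interruptions, poor labeling), which is exactly why validation against observed errors matters.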
In the end, we are left with our feet firmly planted in the middle of competing paradigms. One argues that an evidence-based, scientific approach has served healthcare well and should not be relaxed simply because a popular practice from a "safer" industry sounds attractive. The other counters that medicine's slavish devotion to the scientific and epidemiologic method has placed us in a patient safety straitjacket, unable to consider the value of practices developed in other fields because of our myopic traditions and "reality."
We see the merits in both arguments. Healthcare clearly has much to learn from other industries. Just as physicians must learn the "basic sciences" of immunology and molecular biology, providers and leaders interested in making healthcare safer must learn the "basic sciences" of organizational theory and human factors engineering. Moreover, the "cases" presented on rounds should include, in addition to classical clinical descriptions, the tragedy of the Challenger and the successes of Motorola. On the other hand, an unquestioning embrace of dozens of promising practices from other fields is likely to be wasteful, distracting, and potentially dangerous. We are drawn to a dictum from the Cold War era: "Trust, but verify."
1. Leape LL. Error in medicine. JAMA 1994;272:1851-1857.
2. Chassin MR. Is health care ready for Six Sigma quality? Milbank Q 1998;76:565-591,510.
3. Kohn L, Corrigan J, Donaldson M, editors. To Err Is Human: Building a Safer Health System. Washington, DC: Committee on Quality of Health Care in America, Institute of Medicine. National Academy Press; 2000.
4. Barach P, Small SD. Reporting and preventing medical mishaps: lessons from non-medical near miss reporting systems. BMJ 2000;320:759-763.
5. Helmreich RL. On error management: lessons from aviation. BMJ 2000;320:781-785.
6. Brennan TA. The Institute of Medicine report on medical errors—could it do harm? N Engl J Med 2000;342:1123-1125.
7. McDonald CJ, Weiner M, Hui SL. Deaths due to medical errors are exaggerated in Institute of Medicine report. JAMA 2000;284:93-95.
8. Sox HC Jr, Woloshin S. How many deaths are due to medical error? Getting the number right. Eff Clin Pract 2000;3:277-283.
9. Phillips OC, Capizzi LS. Anesthesia mortality. Clin Anesth 1974;10:220-244.
10. Deaths during general anesthesia: technology-related, due to human error, or unavoidable? An ECRI technology assessment. J Health Care Technol 1985;1:155-175.
11. Keenan RL, Boyan CP. Cardiac arrest due to anesthesia. A study of incidence and causes. JAMA 1985;253:2373-2377.
12. Eichhorn JH. Prevention of intraoperative anesthesia accidents and related severe injury through safety monitoring. Anesthesiology 1989;70:572-577.
13. Cohen MM, Duncan PG, Pope WD, Biehl D, Tweed WA, MacWilliam L, et al. The Canadian four-centre study of anaesthetic outcomes: II. Can outcomes be used to assess the quality of anaesthesia care? Can J Anaesth 1992;39:430-439.
14. Moller JT, Johannessen NW, Espersen K, Ravlo O, Pedersen BD, Jensen PF, et al. Randomized evaluation of pulse oximetry in 20,802 patients: II. Perioperative events and postoperative complications. Anesthesiology 1993;78:445-453.
15. Orkin FK. Practice standards: the Midas touch or the emperor's new clothes? Anesthesiology 1989;70:567-571.
16. Reason J. Human error: models and management. BMJ 2000;320:768-770.
17. Weick KE. Organizational culture as a source of high reliability. California Management Review 1987;29:112-127.
18. Roberts KH. Some characteristics of high reliability organizations. Berkeley, CA: Produced and distributed by Center for Research in Management, University of California, Berkeley Business School; 1988.
19. LaPorte TR. The United States air traffic control system: increasing reliability in the midst of rapid growth. In: Mayntz R, Hughes TP, editors. The Development of Large Technical Systems. Boulder, CO: Westview Press; 1988.
20. Roberts KH. Managing High Reliability Organizations. California Management Review 1990;32:101-113.
21. Roberts KH, Libuser C. From Bhopal to banking: Organizational design can mitigate risk. Organizational Dynamics 1993;21:15-26.
22. LaPorte TR, Consolini P. Theoretical and operational challenges of "high-reliability organizations": Air-traffic control and aircraft carriers. International Journal of Public Administration 1998;21:847-852.
23. Perrow C. Normal accidents: Living with High-Risk Technologies. With a New Afterword and a Postscript on the Y2K Problem. Princeton, NJ: Princeton University Press; 1999.
24. Sagan SD. The Limits of Safety: Organizations, Accidents and Nuclear Weapons. Princeton, NJ: Princeton University Press; 1993.
25. Kirwan B. Human error identification techniques for risk assessment of high risk systems—Part 1: Review and evaluation of techniques. Appl Ergon 1998;29:157-177.
26. Kirwan B. Human error identification techniques for risk assessment of high risk systems—Part 2: towards a framework approach. Appl Ergon 1998;29:299-318.
27. Hollnagel E, Kaarstad M, Lee HC. Error mode prediction. Ergonomics 1999;42:1457-1471.
28. Kirwan B, Kennedy R, Taylor-Adams S, Lambert B. The validation of three human reliability quantification techniques—THERP, HEART and JHEDI: Part II-Results of validation exercise. Appl Ergon 1997;28:17-25.
29. Stanton NA, Stevenage SV. Learning to predict human error: issues of acceptability, reliability and validity. Ergonomics 1998;41:1737-1756.
30. Blumenthal D, Kilo CM. A report card on continuous quality improvement. Milbank Q 1998;76:625-48,511.
31. Gerowitz MB. Do TQM interventions change management culture? Findings and implications. Qual Manag Health Care 1998;6:1-11.
32. Shortell SM, Bennett CL, Byck GR. Assessing the impact of continuous quality improvement on clinical practice: what it will take to accelerate progress. Milbank Q 1998;76:593-624,510.
33. Shortell SM, Jones RH, Rademaker AW, Gillies RR, Dranove DS, Hughes EF, et al. Assessing the impact of total quality management and organizational culture on multiple outcomes of care for coronary artery bypass graft surgery patients. Med Care 2000;38:207-217.