
Mistake-Proofing the Design of Health Care Processes

Chapter 2. Relationships to Existing Patient Safety Efforts and Tools


After the publication of the Institute of Medicine (IOM) report, To Err is Human,1 patient safety deficits moved to the forefront of public attention. The goal of the report was a challenging one: to reduce medical errors by half in 5 years. Those 5 years have passed, and substantial effort has been invested in reducing errors in health care. This chapter focuses on how many of these efforts relate to mistake-proofing and how new tools can contribute to improved patient safety.

Mistake-proofing has been used effectively in other industries and has been adopted in medicine as an artifact of common sense applied to processes. More can be done. In many cases, mistake-proofing will fit into a variety of existing efforts to improve patient safety. In other cases, it provides an effective alternative direction to seek improvement in the face of ineffective actions.


Relationships to Existing Patient Safety Efforts

The relationship of mistake-proofing to current patient safety efforts is shown in Table 2.1. Many of the efforts to improve patient safety are important enablers of mistake-proofing: they create a foundation for, or aid in, mistake-proofing implementation. Others are areas of opportunity in which existing patient safety efforts create resources for identifying likely mistake-proofing projects. Some efforts address the same problems as mistake-proofing. While these techniques are listed as competing, there is no requirement for mutual exclusivity. Multiple approaches are not only possible, they are recommended. In cases in which some competing approaches are onerous or ineffective, mistake-proofing can reduce their scope and burden so that they are used only where they are needed most.

Table 2.1 includes some overlapping concepts; both "creating a just culture" and "enhancing attentiveness," for example, can be seen as subsets of safety culture. Each of the relationships in Table 2.1 is also discussed in the next several pages.


Safety Culture

Safety culture is a set of attitudes, values, perceptions, norms, and behaviors that tend to reduce the likelihood of unsafe acts, and which encourage thorough disclosure of, and learning from, adverse events.3 Safety culture also includes norms of high reliability organizations, as described by Weick and Sutcliffe:4

  1. Preoccupation with failure.
  2. Reluctance to simplify interpretations.
  3. Sensitivity to operations.
  4. Commitment to resilience.
  5. Deference to expertise.

Just Culturea

Just culture refers to a working environment that is conducive to "blame-free" reporting but also one in which accountability is not lost.5 Blame-free reporting ensures that those who make mistakes are encouraged to reveal them without fear of retribution or punishment. A policy of not blaming individuals is very important to enable and facilitate event reporting which in turn, enables mistake-proofing.

The concern with completely blame-free reporting is that egregious acts, in which punishment would be appropriate, would go unpunished. Just culture divides behavior into three types: normal, risk-taking, and reckless. Of these, only reckless behavior is punished.

a. For more information on Just Culture, see Patient Safety and the just culture: A primer for health care executives by David Marx, J.D. (2001), on the Web at

Event Reporting

Event reporting refers to actions undertaken to obtain information about medical events and near-misses. The reporting reveals the type and severity of events and the frequency with which they occur. Event reports provide insight into the relative priority of events and errors, thereby enabling the mistake-proofing of processes. Consequently, the most serious events can be prioritized and acted upon more quickly.

Root Cause Analysis

Root cause analysis (RCA) is a set of methodologies for determining at least one cause of an event that can be controlled or altered so that the event will not recur in the same situation. These methodologies reveal the cause-and-effect relationships that exist in a system. RCA is an important enabler of mistake-proofing, since mistake-proofing cannot be accomplished without a clear knowledge of the cause-and-effect relationships in the process.

Care should be taken when RCA is used to formulate corrective actions, since it may only consider one instance or circumstance of failure. Other circumstances could also have led to the failure. Other failure analysis tools, such as fault tree analysis, consider all known causes and not just a single instance. Anticipatory failure determination6 (AFD™) facilitates inventing new circumstances that would lead to failure given existing resources.

Corrective Action Systems

Corrective action systems are formal systems of policies and procedures to ensure that adverse events are analyzed and that preventive measures are implemented to prevent their recurrence. Normally, the occurrence of an event triggers a requirement to respond with counter-measures within a certain period of time. Mistake-proofing is an effective form of counter-measure. It is often inexpensive and can be implemented rapidly.

It is also important to look at all possible outcomes and counter-measures, not just those observed. Sometimes, mistake-proofing by taking corrective action is only part of the solution. For example, removing metal butter knives from the dinner trays of those flying in first class effectively eliminates knives from aircraft, but does not remove any of the other resources available for fashioning weapons out of materials available on commercial airplanes.7 This is mistake-proofing but not a fully effective counter-measure.

Corrective action systems can also serve as a resource to identify likely mistake-proofing projects. Extensive discussion and consultation in a variety of industries, including health care, reveal that corrective actions are often variations on the following themes:

  1. An admonition to workers to "be more careful" or "pay attention."
  2. A refresher course to "retrain" experienced workers.
  3. A change in the instructions, standard operating procedures, or other documentation.

All of these are essentially attempts to change "knowledge in the head."8


Chappell9 states that "You're not going to become world class through just training, you have to improve the system so that the easy way to do a job is also the safe, right way. The potential for human error can be dramatically reduced."

Mistake-proofing is an attempt to do what Norman8 recommends, put "knowledge in the world." Consequently, corrective actions that involve changing "knowledge in the head" can also be seen as opportunities to implement mistake-proofing devices. These devices address the cause of the event by putting "knowledge in the world."

Not all corrective actions deserve the same amount of attention, and not all should be allotted the same amount of time for formulating a response. Determining which corrective actions should be allowed more time is difficult because events occur sequentially, one at a time. Responding to outcomes that are not serious, common, or difficult to detect should not consume much time. For events that are serious, common, or difficult to detect, additional time should be spent on a careful analysis of critical corrective actions.

Specific Foci

Substantial efforts to improve patient safety have been focused on specific events such as falls, medication errors, use of anesthesia, transfusions, and communication. These specific foci provide areas of opportunity for the implementation of mistake-proofing.


Simulation

There have been many discussions in health care circles concerning the application of methods developed in the aviation industry to improve patient safety. In aviation, simulation is used to train pilots and flight crews.

Logically enough, simulators have also begun to be employed in medicine. In addition to training, simulation can provide insights into likely errors and serve as a catalyst for the exploration of the psychological or causal mechanisms of errors. After likely errors are identified and understood, simulators can provide a venue for the experimentation and validation of new mistake-proofing devices.


Technology

Technological solutions to patient safety problems have generated substantial interest. Bar coding and computerized physician order entry (CPOE) systems, in particular, are being widely implemented. Both of these technologies are, in fact, forms of mistake-proofing, despite their tendency to be more expensive and complex than the mistake-proofing characterized in Table 1.6.

Facility Design

The study of facility design complements mistake-proofing and sometimes is mistake-proofing (Figure 2.1). Adjacency, proper handrails and affordances, standardization, and the use of Failure Modes and Effects Analysis (FMEA) as a precursor are similar to mistake-proofing. Ensuring non-compatible connectors and pin-indexed medical gases is mistake-proofing.

Revising Standard Operating Procedures

When adverse events occur, it is not uncommon for standard operating procedures (SOPs) to be revised in an effort to change the instructions that employees refer to when providing care. This approach can either improve or impair patient safety, depending on the nature of the change and the length of the SOP. If SOPs become simpler and help reduce the cognitive load on workers, it is a very positive step. If the corrective responses to adverse events are to lengthen the SOPs with additional process steps, then efforts to improve patient safety may actually result in an increase in the number of errors.

Evidence from the nuclear industry suggests that changing SOPs improves human performance up to a point but then becomes counterproductive. Chiu and Frick10 studied the human error rate at the San Onofre Nuclear Power Generation Facility since it began operation. They found that after a certain point, increasing procedure length or adding procedures resulted in an increase in the number of errors instead of reducing them as intended. Their findings are shown in Figure 2.2. Their facility is operating on the right side of the minimum, in the region labeled B. Consequently, they state that they "view with a jaundiced eye an incident investigation that calls only for more rules (i.e., procedure changes or additions), and we seek to simplify procedures and eliminate rules whenever possible."

While there is no comparable study in health care, prudence suggests that increases in the complexity of standard operating procedures should be considered carefully to ensure that the benefits of the additional instructions exceed the problems generated by the added complexity. Simplifying processes and providing clever work aids complement mistake-proofing and in some cases may be mistake-proofing. When organizations eliminate process steps, they also eliminate the errors that could have resulted from those steps.

Attention Management

Substantial resources are invested in ensuring that workers, generally, and medical personnel, particularly, are alert and attentive as they perform their work. Attention management programs range from motivational posters in the halls and "time-outs" for safety, to team-building "huddles" (Figure 2.3). Eye-scanning technology determines if workers have had enough sleep during their off hours to be effective during working hours.11

When work becomes routine and is accomplished on "autopilot" (skill-based12), mistake-proofing can often reduce the amount of attentiveness required to accurately execute detailed procedures. The employee performing these procedures is then free to focus on higher level thinking. Mistake-proofing will not eliminate the need for attentiveness, but it does allow attentiveness to be used more effectively to complete tasks that require deliberate thought.

Crew Resource Management

Crew resource management (CRM) is a method of training team members to "consistently use sound judgment, make quality decisions, and access all required resources, under stressful conditions in a time-constrained environment."13 It grew out of aviation disasters where each member of the crew was problem-solving, and no one was actually flying the plane. This outcome has been common enough that it has its own acronym: CFIT—Controlled Flight Into Terrain.

Mistake-proofing often takes the form of reducing ambiguity in the work environment, making critical information stand out against a noisy background, reducing the need for attention to detail, and reducing cognitive content (for details on cognitive content, go to Chapter 4). Each of these benefits complements CRM and frees the crew's cognitive resources to attend to more pressing matters.


FMEA and FMECA

FMEA and FMECA are "virtually the same,"14 except for a few subtleties that have been more or less lost in practice; both are hereafter referred to simply as FMEA. These two related tools enable teams to analyze all of the ways a particular component or process can fail, predict what the consequences of that failure would be, and prioritize remedial change actions.

FMEA and FMECA are form- or worksheet-based approaches. Since forms are easily manipulated to meet users' needs, rarely are two forms exactly the same.15-18 Regardless of which version of FMEA is selected, certain aspects of the analysis will be included.

The FMEA process begins with the creation of a graphical description of the sequence of tasks being analyzed, referred to as a process map. Several books are devoted exclusively to process mapping.19-23 The team lists on the FMEA form all the failures that could occur at each task. The scope of this step must be managed carefully to keep it from becoming tremendously onerous; often, only a small subset of tasks is considered at one time. After failures have been identified, the potential effects of each failure are specified, and the severity of each is assessed. Potential causes are identified. The team then assesses the likelihood of each occurrence and the probability of detecting the cause before harm is done. The severity, the likelihood of occurrence, and the detectability of each cause are combined into a priority ranking.

A common method is to rank severity (sev), likelihood of occurrence (occ), and detectability (det) on a 10-point scale and then multiply them together. The product is often called the risk priority number (RPN). An example of these RPN calculations is shown in Figure 2.4. With FMECA, the risk priority numbers of each failure mode's causes are summed to create a mode criticality number. Failure causes (or failure modes for FMECA) are then prioritized, and preventive actions are taken. In Figure 2.4, the cause "strip of labels with multiple patient names mixed" is the highest priority cause; "order entry error" is the lowest. Authors writing about FMEA provide little indication of what actions should be taken. The logic of FMEA implies, however, that after the prevention effort the RPN should reflect a failure that is less severe, less likely to occur, or more easily detected. A detailed discussion is included in Chapter 3.
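
As a concrete sketch of the ranking just described, the short Python fragment below computes RPNs for three failure causes. The first and last cause names echo Figure 2.4; the middle cause and all of the 1-10 ratings are illustrative assumptions, not values taken from the book.

```python
causes = [
    # (cause, severity, occurrence, detectability) -- hypothetical ratings
    ("Strip of labels with multiple patient names mixed", 9, 6, 7),
    ("Wristband smudged and misread", 7, 4, 5),
    ("Order entry error", 6, 3, 2),
]

def rpn(sev, occ, det):
    """Risk priority number: the product of the three 1-10 ratings."""
    return sev * occ * det

# Rank the causes from highest to lowest priority.
ranked = sorted(causes, key=lambda c: rpn(*c[1:]), reverse=True)
for cause, sev, occ, det in ranked:
    print(f"RPN {rpn(sev, occ, det):4d}  {cause}")
```

With these assumed ratings, the mixed label strip ranks first (RPN 378) and the order entry error last (RPN 36), matching the ordering described for Figure 2.4.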


Fault Trees

FMEA is a bottom-up approach in the sense that it starts at the component or task level to identify failures in the system. Fault trees are a top-down approach. A fault tree starts with an event and determines all the component (or task) failures that could contribute to that event.

A fault tree is a graphical representation of the relationships that directly cause or contribute to an event or failure. Figure 2.5 shows a generic fault tree. The top of the tree indicates the failure mode, the "top event." At the bottom of the tree are causes, or "basic failures." These causes can be combined as individual, independent causes using an "OR" symbol. They can be combined using an "AND" symbol if causes must co-exist for the event to occur. The tree can have as many levels as needed to describe all the known causes of the event.

These failures can be analyzed to determine the sets of basic failures that can cause the top event to occur, called cut sets. A minimal cut set is the smallest combination of basic failures that produces the top event: it leads to the top event if, and only if, all events in the set occur. This concept will be employed in Chapter 3 to assess the performance of mistake-proofing device designs. The minimal cut sets are shown with dashed lines in Figure 2.5.

Fault trees also allow one to assess the probability that the top event will occur by first estimating the probability that each basic failure will occur. In Figure 2.5, the probabilities of the basic failures are combined to calculate the probability of the top event. The probability of basic failures 1 and 2 occurring within a fixed period of time is 20 percent each. The probability of basic failure 3 occurring within that same period is only 4 percent. However, since both basic failures 1 and 2 must occur before the top event results, the joint probability is also 4 percent. Basic failure 3 is far less likely to occur than either basic failure 1 or 2. However, since it can cause the top event by itself, the top event is equally likely to be caused by minimal cut set 1 or 2.
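
The arithmetic in this paragraph can be sketched in a few lines of Python. The gate functions and probabilities below mirror the generic tree of Figure 2.5 (basic failures 1 and 2 at 20 percent each, basic failure 3 at 4 percent) and assume the basic failures are statistically independent.

```python
from functools import reduce

def and_gate(probs):
    """All inputs must occur: multiply the probabilities together."""
    return reduce(lambda a, b: a * b, probs)

def or_gate(probs):
    """Any input suffices: complement of the chance that none occur."""
    return 1 - reduce(lambda acc, p: acc * (1 - p), probs, 1)

p_cut_set_1 = and_gate([0.20, 0.20])   # basic failures 1 AND 2
p_cut_set_2 = 0.04                     # basic failure 3 alone
p_top = or_gate([p_cut_set_1, p_cut_set_2])

print(f"P(cut set 1) = {p_cut_set_1:.4f}")  # 0.0400
print(f"P(top event) = {p_top:.4f}")        # 0.0784
```

The two minimal cut sets come out equally likely (4 percent each), as the text observes, even though basic failure 3 is far less likely than basic failure 1 or 2 individually.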

Two changes can be made to the tree to reduce the probability of the top event:

  1. Reduce the probability of basic failures.
  2. Increase redundancy in the system.

That is, design the system so that more basic failures are required before a top event occurs. If one nurse makes an error and another nurse double checks it, then two basic failures must occur. One is not enough to cause the top event.


The ability to express the interrelationship among contributory causes of events using AND and OR symbols provides a more precise description than is usually found in the "potential cause" column of an FMEA. Potential causes of an FMEA are usually described using only the conjunction OR. It is the fault tree's ability to link causes with AND, in particular, that makes it more effective in describing causes. Gano2 suggests that events usually occur due to a combination of actions and conditions; therefore, fault trees may prove very worthwhile. FMEA and fault trees are not mutually exclusive. A fault tree can provide significant insights into truly understanding potential failure causes in FMEA.


Knowing What Errors Occur, and Why, Is Not Enough

FMEA and fault trees are useful in understanding the range of possible failures and their causes. The other tools—safety culture, just culture, event reporting, and root cause analysis—lead to a situation in which the information needed to conduct these analyses is available. These tools, on their own, may be enough to facilitate the design changes needed to reduce medical errors. Only fault tree analysis, however, comes with explicit prescriptions about what actions to take to improve the system.

These prescriptions, which will be discussed further in Chapter 3, are: increase component reliability or increase redundancy. Fault trees are also less widely known or used than other existing tools. FMEA is far more widely used, in part because it is a popular method of meeting the Joint Commission's (JCAHO) requirement to perform proactive risk assessment.

FMEA calls for action, but most versions of it do not provide explicit prescriptive information about what action to take; only JCAHO explicitly prescribes redesigning the process. With the exception of the less-utilized fault tree analysis, the tools used in patient safety improvement efforts are currently focused on determining what events and errors occur and what causes them. They are silent about how to fix the problem or prevent the cause of failure from recurring. Even JCAHO,24 which explicitly identifies redesign as the preferred approach for increasing patient safety, provides little direction about how to accomplish it. JCAHO provides three questions that must be answered at the "redesign the process" step:

  1. How can we change the process to prevent this failure mode from occurring?
  2. What design/redesign strategies and tools should we use? How do we evaluate their likely success?
  3. Who should be involved in the design/redesign process?

These are the crucial questions and, like Fermat's Last Theorem,b are left as an exercise for the reader. A recurring theme in the quality improvement literature is that we are good at identifying problems but not so good at devising methods to solve them. Numerous tools are available to define, measure, and analyze quality problems and to control processes. Six Sigma, a popular quality improvement framework, has an improvement cycle with five problem-solving steps: define, measure, analyze, improve, and control. The tools available to actually conceive of what the improvement should be are limited. In a 191-page quality management quick reference guide,25 only 12 pages were devoted to tools for actually improving the process (Figure 2.6). Worse, those pages are devoted to managing the process of implementing the improvement, not to determining what the improvement should be.

Determining what the improvement should be is an inventive problem that will require some creativity. Tools to facilitate the inventive solution to determining how to design devices that will mistake-proof the process are introduced in the next section and presented in detail in Chapter 3.

b. Pierre de Fermat (1601-1665) was a French lawyer and number theorist known for his last theorem, which was discussed for hundreds of years until it was solved in 1995 by the mathematician Andrew John Wiles (1953- ), who had been working on the theorem since 1963. The Last Theorem states that x^n + y^n = z^n has no non-zero integer solutions for x, y, and z when n > 2.


Using the Tools Together

Figure 2.7 shows a flowchart of how patient safety tools can be used together with other management tools to reduce human error and create mistake-proofing devices.

Enabling Tools

The box to the left in Figure 2.7 contains enabling tools that provide a foundation for designing effective mistake-proofing devices. The design tools in the center box require detailed information about the process and a thorough understanding of cause-and-effect relationships as inputs to be analyzed. The enabling tools provide these inputs.

Process mapping defines the current process. A process is "a collection of interrelated work tasks, initiated in response to an event, achieving a specific result for the customer and other stakeholders of the process."16 Thinking of health care as a process and then mapping that process is a critical step in improving the process.

Process mapping is also an early step in performing FMEA. "Graphically describing the process" is Step 3 in healthcare failure modes and effects analysis (HFMEA)™.12 Flow charting, one style of process mapping, is utilized in Steps 1 and 2 in JCAHO's recommended FMECA process.16 A detailed understanding of the process also provides insights into where specific errors might be detected and how likely that detection is to occur.

Having a just culture that is fair and open will foster event and near-miss reporting. Reporting provides insights into what events occur, how often they occur, and the outcome's level of seriousness when they occur.

Information about the frequency and severity of adverse events facilitates the prioritization of process improvement efforts. Knowing a failure occurred should trigger an event investigation and subsequent root cause analysis. Root cause analysis determines what cause-and-effect relationships lead to events in the process. There is an implicit expectation that the cause-and-effect relationships of a process are understood in FMEA. The potential causes of an event must be listed for each failure mode. Fault tree analysis also assumes an understanding of cause and effect. Fault trees go beyond FMEA by stating the relationships among multiple causes that would lead to the event taking place.

Visual systems create an environment where mistake-proofing can be used more effectively (Chapter 1). Visual cues indicating what action to take are more obvious when distractions are removed, and standardization provides points of reference to enable employees to detect and prevent errors.

Design Tools

The central box in Figure 2.7 contains tools that facilitate the design of mistake-proofing devices. The tools are listed and employed in a sequential manner. FMEA is first. No additional design tools are needed if, after conducting an FMEA and brainstorming for possible solutions, an adequate number of possible solutions is generated. The next step is to select and implement the best solution available.

There is no reason to think that the first solution arrived at will be the best overall solution. Teams should determine the optimal number of solutions to be developed before deciding on the best one, shown as "n" in Figure 2.7. Pella Window™ engineers26 reported that they develop and test seven solutions before making a decision. One step in their decision-making process is to fabricate cardboard and scrap-wood prototypes of equipment that can be tested and compared by workers and engineers.

A similar approach was used by St. Joseph's Hospital in West Bend, WI. The team focused on creating a patient safety-centered design for their new building.27 To facilitate the design process, they tore out two rooms of the existing hospital and mocked up one new room so that staff members could walk through it, visualize working in it, and identify improvements. The St. Joseph's room is shown in Figures 2.8, 2.9, and 2.10. Figure 2.10 shows a page of comments taped to the wall. This page is concerned only with the bathroom light fixture. Staff members filled several sheets as they explored the mock-up room.

St. Joseph's Hospital relied heavily on FMEA. The mock-up room helped them to identify failure modes and think through creative new solutions. The new facility opened in August 2005.27

Teams can employ a similar approach on a smaller scale for most mistake-proofing device implementation. As mentioned in Chapter 1, Hirano28 suggests that if a device has a greater than 50-percent chance of success, teams should stop analyzing the situation and immediately attempt a mock-up of the solution. Some refer to this approach as "trystorming." Trystorming extends brainstorming by quickly creating mock-ups that can be rapidly and thoroughly evaluated. Given many mistake-proofing devices' low implementation cost and simplicity, it is logical to fabricate an early mock-up before continuing with analysis.

A fault tree is used to model the situation further in cases where FMEA does not yield a sufficient number of potential solutions. Fault trees add information that may not appear in FMEA. The use of AND and OR nodes, the concept of minimal cut sets, and the use of probabilistic information in fault tree analysis enable a more accurate assessment of the impact of potential mistake-proofing devices. More brainstorming is called for after the completion of the fault tree analysis. Teams should proceed to selection and implementation only after generating a sufficient number of solutions.

If, after employing FMEA and fault tree analysis, teams still do not generate enough potential solutions, the next logical step is to employ multiple fault trees, a technique discussed in detail in Chapter 3. Multiple fault tree analysis aids in converting the problem from one of how to stop failures from happening into one of how to make failures happen. The question here is, "Which failure would be more benign, and how can we generate that failure?" Fault trees that were initially used to analyze undesirable failures are used here to explore resources and generate benign failures.

If the number of design changes resulting in benign failures is still not sufficient, the next step is to employ creativity techniques, invention facilitation techniques, or software. A variety of techniques, methodologies, and software could be used here. One promising approach, TRIZ, has its genesis in the work of Genrich Altschuller,29-31 who created an inventive algorithm called the "Theory of Inventive Problem Solving"; its Russian acronym is transliterated as TRIZ. The TRIZ algorithm helps groups find new ways to approach a problem: they formulate the specific problem in general terms and then identify past approaches, originally drawn from a Russian patent database, that have succeeded on problems of that general type. TRIZ is complex and requires extensive reading and/or training, although TRIZ software eases the learning process.

If teams still need more potential solutions, they might consider designing a process that embeds cues about how to do the work correctly.c Norman's concepts from Table 1.6—natural mappings, affordances, visibility, feedback, and constraints—are used here.

It would be unrealistic to assume that all problems lend themselves to a solution. If every attempt fails, teams may have to give up, at least in the short run. Before giving up, though, teams should consider a change in focus: explore subsystem or super-system changes that might provide an alternative problem that is more easily solved. Can the process step be moved to a more advantageous area or combined with another step? What would need to change in order for this task to be entirely unneeded and eliminated?

Selecting a Solution

Let us assume that a team is not forced to give up, and that the process described above yielded a cornucopia of possible solution approaches. There are now many directions in which to embark in the search for improvement, especially when employing TRIZ software. The team is now confronted with a delightful dilemma: how to determine which solutions are the most promising. Godfrey et al.32 provide an answer: the solution priority number (SPN). The SPN concept is very similar to FMEA's risk priority number (RPN). The SPN is the product of a solution's effectiveness, cost, and ease of implementation, as shown in Table 2.2. The best solutions will have high SPN scores; 12, 18, and 27 are the highest possible scores. (Because the SPN is the product of integer ratings, no intermediate scores, such as 13, 19, or 26, are possible.) These high-scoring solutions will be very effective, cost very little, and be exceptionally easy to implement.
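
The SPN properties stated above are easy to verify by brute force. The Python sketch below assumes, as the maximum score of 27 implies, that effectiveness, cost, and ease of implementation are each rated 1, 2, or 3; it enumerates all 27 rating combinations and tallies the attainable products.

```python
from collections import Counter
from itertools import product

# Count how many rating combinations produce each SPN value.
spn_counts = Counter(
    eff * cost * ease
    for eff, cost, ease in product([1, 2, 3], repeat=3)
)

print(sorted(spn_counts))       # all attainable SPN scores
print(sorted(spn_counts)[-3:])  # the three highest: [12, 18, 27]
```

The enumeration confirms that only ten distinct scores are attainable and that values such as 13, 19, and 26 never occur, consistent with the combinations listed in Table 2.3.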

A high SPN (Table 2.3) is an indication that a solution is promising. It does not obviate the need for careful consideration of device design issues. Human factors like process usability and time constraints placed on workers still must be considered. Devices must not negatively affect the usability of a process or slow the process noticeably, particularly when resources such as nurse staffing levels are constrained. Staff will find ways to accomplish their responsibilities, even if it means disabling devices (Chapter 4).

c. Embedded cues about how to use the process should be placed throughout facilities, regardless of which mistake-proofing efforts are undertaken. Mutual exclusivity of tools or approaches is not warranted or advisable. Cues are often less effective in stopping errors. They can still be quite effective, however, in avoiding them.

Table 2.3. Possible SPN scores and combinations

Possible SPN score   Number of combinations resulting in that score
 1                    1
 2                    3
 3                    3
 4                    3
 6                    6
 8                    1
 9                    3
12                    3
18                    3
27                    1
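The arithmetic behind Table 2.3 can be checked with a short enumeration. The sketch below assumes each of the three SPN factors (effectiveness, cost, ease of implementation) is rated on a 1-to-3 integer scale; that scale is not spelled out in this excerpt, but it is consistent with the chapter's statement that 12, 18, and 27 are the highest possible scores and that values such as 13, 19, and 26 cannot occur.

```python
from itertools import product
from collections import Counter

# Assumed rating scale for each factor (1 = worst, 3 = best).
RATINGS = (1, 2, 3)

def spn(effectiveness: int, cost: int, ease: int) -> int:
    """Solution priority number: the product of the three ratings."""
    return effectiveness * cost * ease

# Count how many rating combinations yield each possible SPN,
# reproducing the structure of Table 2.3.
counts = Counter(spn(*combo) for combo in product(RATINGS, repeat=3))

for score in sorted(counts):
    print(f"SPN {score:2d}: {counts[score]} combination(s)")
```

Running the enumeration shows that only ten distinct SPN values can occur (1, 2, 3, 4, 6, 8, 9, 12, 18, and 27), which is why a team can treat any score of 12 or above as a strong candidate.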

Return to Contents


Mistake-proofing does not obviate the need for many of the tools currently in use in patient safety environments; it uses the insights these tools generate to aid in the design of safer systems and processes.

Regrettably, even with these tools at teams' disposal, determining what design change to make is not as well defined as Figure 2.7 might suggest. Creativity, at its core, is not a linear process. The tools help teams make sense of a situation, determine what needs to be done, and decide how to do it. The actual solution may still require a leap of creativity, a flash of inspiration. The intent of Figure 2.7 and the tools it contains is to reduce the size of that leap.

Return to Contents


References
1. Kohn LT, Corrigan JM, Donaldson MS, eds. Institute of Medicine. To err is human: building a safer health system. Washington, DC: National Academies Press; 2000.

2. Gano DL. Apollo root cause analysis—A new way of thinking. Yakima, WA: Apollonian Publications; 1999.

3. Reason JT. Managing the risks of organizational accidents. Aldershot, UK: Ashgate; 1997.

4. Weick KE, Sutcliffe KM. Managing the unexpected: assuring high performance in an age of complexity. San Francisco: Jossey-Bass; 2001.

5. Marx D. Patient safety and the "just culture": A primer for health care executives. Accessed March 2004.

6. Kaplan S, Visnepolschi S, Zlotin B, Zusman A. New tools for failure and risk analysis: anticipatory failure determination™ (AFD™) and the theory of scenario structuring. Southfield, MI: Ideation International; 1999.

7. Marx D. Personal communication. 2002.

8. Norman DA. The design of everyday things. New York: Doubleday; 1989.

9. Chappell L. The Poka-yoke solution. Automotive News Insights 1996; (August 5) 24i.

10. Chiu C, Frick W. Minimizing the human error rate by understanding the relationship between skill- and rule-based errors. Proceedings of the 16th Reactor Operations International Topical Meeting. Long Island, NY: 1993; 274-8.

11. Hotchkiss E. Pulse medical instruments (PMI). Accessed March 2004.

12. Reason J. Human error. New York: Cambridge University Press; 1990.

13. Kern T. Culture, environment & CRM. Controlling pilot error series. New York: McGraw-Hill; 2001.

14. Bahr NJ. System safety engineering and risk assessment: a practical approach. Philadelphia: Taylor & Francis; 1997.

15. DeRosier J, Stalhandske E, Bagian JP, Nudell T. Using health care failure mode and effect analysis: The VA National Center for Patient Safety's prospective risk analysis system. Jt Comm J Qual Improv 2002; 28:248-67, 209.

16. Joint Commission on Accreditation of Healthcare Organizations. Search entry: FMECA (Failure Modes, Effect, and Criticality Analysis). Accessed October 2004.

17. Krasker GD. Failure modes and effects analysis: building safety into everyday practice. Marblehead, MA: HCPro; 2004.

18. AIAG Workgroup. FMEA-3: Potential failure modes and effects analysis, 3rd ed. Southfield, MI: Automotive Industry Action Group; 2002.

19. Sharp A, McDermott P. Workflow modeling: tools for process improvement and application development. Boston: Artech House; 2001.

20. Frazier J. Swimlane process mapping: coordinating work across functional groups and information systems. Walnut Creek, CA: Frazier Technologies, Inc; 2001.

21. Jacka JM, Keller P. Business process mapping: improving customer satisfaction. New York: John Wiley & Sons; 2001.

22. Tapping D, Shuker T, Luyster T. Value stream management. New York: Productivity Press; 2002.

23. Hunt D. Process mapping: how to reengineer your business processes. New York: John Wiley & Sons; 1996.

24. Joint Commission on Accreditation of Healthcare Organizations. Patient safety: essentials for health care, 2nd ed. Oakbrook Terrace, IL: Joint Commission Resources; 2004.

25. Rath & Strong. Six-sigma pocket guide. Lexington, MA: Rath & Strong Management Consultants; 2002.

26. Grout J. Unpublished conversation with engineer. Pella Windows plant tour. Mistake-Proofing Forum. Productivity Inc.: Des Moines, IA; November 3, 2000.

27. Reiling JG, Knutzen BL, Allen TK, et al. Enhancing the traditional hospital design process: a focus on patient safety. Jt Comm J Qual Safety 2004; 30(3):115-24.

28. Hirano H. Overview. In: Shimbun Factory Magazine. Poka-yoke: improving product quality by preventing defects. Portland, OR: Productivity Press; 1988.

29. Kaplan S. An introduction to TRIZ: the Russian theory of inventive problem solving. Southfield, MI: Ideation International; 1996.

30. Altshuller G. The innovation algorithm: TRIZ systematic innovation and technical creativity. Trans. Shulyak L, Rodman S. Worcester, MA: Technical Innovation Center; 1999.

31. Rantanen K, Domb E. Simplified TRIZ: new problem-solving applications for engineers and manufacturing professionals. Boca Raton, FL: CRC Press; 2002.

32. Godfrey AB, Clapp T, Nakajo T, et al. Application of healthcare-focused error proofing: principles and solution directions for reducing human errors. Proceedings, ASQ World Conference on Quality and Improvement. Seattle, WA: American Society for Quality; May 2005.

Return to Contents


Page last reviewed May 2007
Internet Citation: Chapter 2. Relationships to Existing Patient Safety Efforts and Tools. Mistake-Proofing the Design of Health Care Processes. May 2007. Agency for Healthcare Research and Quality, Rockville, MD.