



Quality Interagency Coordination (QuIC) Task Force
Written Statement

National Summit on Medical Errors and Patient Safety Research

Panel 2: Broad-based Systems Approaches

Testimony of David Woods, Past President, Human Factors and Ergonomics Society


The first National Summit on Medical Errors and Patient Safety Research was held on September 11, 2000, in Washington, DC. Sponsored by the Quality Interagency Coordination Task Force (QuIC), the Summit's goal was to review the information needs of individuals involved in reducing medical errors and improving patient safety. More importantly, the Summit set a coordinated and usable research agenda for the future to answer these identified needs.

Individuals were selected by the Agency for Healthcare Research and Quality (AHRQ) to testify at the summit as members of the witness panels. Each submitted written statements for the record before the event, documenting key issues that they confront with regard to patient safety as well as questions to be researched. Other applicants were invited to submit written statements.



Contents: Introduction / Systems Issues / High Reliability Organizations / Conclusion / References

Introduction

Throughout the patient safety movement, health care leaders have consistently referred to the potential value of Human Factors research on human performance and system failure (Leape et al., 1998). The patient safety movement has been based on three ideas, derived from results of research on human expertise (Feltovich et al., 1997), collaborative work (Rasmussen et al., 1991), and high reliability organizations (Rochlin, 1999), built up through investments by other industries:

  • Adopt a systems approach to understand how breakdowns can occur and how to support decisions in the increasingly complex worlds of health care.
  • Move beyond a culture of blame to create an open flow of information and learning about vulnerabilities to failure.
  • Build partnerships across all stakeholders in health care to set aside differences and to make progress on a common overarching goal.

Creating a durable, informative, and useful partnership between health care and those disciplines with core expertise in areas of human performance is critical to advancing patient safety. Through substantive partnerships, we can look ahead to sidestep false trails and to anticipate new paths to failure in the changing world of health care.

Parallel Perspectives

"Human error in medicine, and the adverse events which may follow, are problems of psychology and engineering not of medicine."

John Senders, 1993

Health care organizations and the public are wondering how we can reduce injuries to patients that occur in the process of care. From one perspective, preventing adverse events such as medication misadministrations, delayed diagnoses, or wrong site surgeries is a matter of specific medical issues in specific health care settings. From another perspective, adverse events concern different variables that affect human performance in lawful, predictable ways.

The research base about human performance has been built up from studies of how practitioners interact to handle situations in many different contexts, such as aviation, industrial process control, and space operations. The results capture empirical regularities and provide explanatory concepts and models. These results allow us to go behind the unique features and surface variability of many different settings to see common underlying patterns.

To understand and predict human performance in health care, as in any complex setting, we need to make use of the different languages that describe human performance. To understand patterns in human judgment, one needs to understand concepts such as bounded rationality, knowledge calibration, heuristics, and oversimplification fallacies (Feltovich et al., 1997). To understand patterns in communication and cooperative work, one needs to understand concepts such as supervisory control, common ground, and open versus closed work spaces (Clark and Brennan, 1991; Galegher et al., 1990; Greenbaum and Kyng, 1991; Rasmussen et al., 1991). To understand patterns in human-computer cooperation, one needs to understand concepts such as the representation effect, object displays, inattentional blindness, mental models, data overload, and mode error (Norman, 1993; LaBerge, 1995; Rensink et al., 1997; Zhang, 1997; Vicente, 1999). These concepts are as foreign to health care practitioners as medical concepts in cardiology or respiratory therapy are to the human factors community.

The same pattern in communication and cooperative work—the effectiveness of cross-checking—plays out in many different health care settings. The same health care issue, medication misadministration, involves very different kinds of human performance issues depending on the context (e.g., internet pharmacies, patient self-managed treatment, administration through computerized infusion devices, or interactions through computer based communication in an order entry system). When we understand the kinds of processes in human performance that play out in the health care setting or situation of interest, we can use past knowledge to guide the development and testing of new ways to support very high levels of human performance.
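As a rough illustration of why cross-checking is so effective, the short calculation below uses invented error and detection rates to show how an independent second check drives down the residual error rate, and how a check that merely mirrors the first person's reasoning does much less. The numbers are assumptions for the example, not findings from this testimony or the cited literature.

```python
# Illustrative arithmetic only; the error and detection rates below are
# assumed for the example, not taken from any study.

p_error = 0.01               # chance the first practitioner makes a slip
p_catch_independent = 0.90   # chance an independent cross-check catches it

residual_independent = p_error * (1 - p_catch_independent)

# If the second check merely re-reads the first person's conclusion
# (a correlated rather than independent check), its catch rate drops.
p_catch_correlated = 0.30
residual_correlated = p_error * (1 - p_catch_correlated)

print(f"residual error rate with independent cross-check: {residual_independent:.4f}")
print(f"residual error rate with correlated cross-check:  {residual_correlated:.4f}")
```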

This is a process of going back and forth between two different perspectives in a partnership where each party steps outside of its own area of expertise to learn by looking at the situation through the other's perspective. This means research on patient safety is a kind of interdisciplinary synthesis that requires combining technical knowledge in specific health care areas, technical knowledge of general results/concepts about the various aspects of human performance that play out in that area, and a view of practitioners carrying out this kind of work in context.

Human Factors researchers and professionals have been and continue to be eager to join in such partnerships to make progress on patient safety. We are willing to invest the energy to learn from medical colleagues about the pragmatics and technical knowledge in diverse health care settings. Similarly, we hope that our medical colleagues are ready to invest energy in learning about the technical knowledge that captures regularities in the various aspects of human performance that come into play. Indeed, partnerships of this kind have been and continue to be the engine of progress in work on patient safety to date (see Bogner, 1994; programs such as the Annenberg meetings, Hendee, 1999; and partnerships at various research labs around the U.S. and the world, e.g., Nyssen and De Keyser, 1998; Howard et al., 1992; Howard et al., 1997; Cook and Woods, 1996).

Startling Results from the Science

One of the great values of science is that during the process of discovery, conventional beliefs are questioned by putting them in empirical jeopardy. When scientists formulate new ideas and look at the world anew through these conceptual looking glasses, the results often startle us. As a result, we can innovate new approaches to accomplish our goals.

This process has been going on for over 20 years in the "New Look" at the factors behind the label human error (Reason, 1997; Woods et al., 1994; Rasmussen, 1990; Rasmussen, 1999). Driven by surprising failures in different industries, researchers from different disciplinary backgrounds began to re-examine how systems failed and how people in their various roles contributed to both success and failure. The results often deviated from conventional assumptions in startling ways.

The research found that doing things safely, in the course of meeting other goals, is always part of operational practice. Because people in their different roles are aware of potential paths to failure, they develop failure-sensitive strategies to forestall these possibilities. When failures occurred against this background of usual success, researchers found multiple contributors, each necessary but only jointly sufficient, and a process of drift toward failure as planned defenses eroded in the face of production pressures and change. The research revealed systematic, predictable organizational factors at work, not simply erratic individuals. The research also showed that to understand episodes of failure, one had to first understand usual success: how people in their various roles learn and adapt to create safety in a world fraught with hazards, tradeoffs, and multiple goals (Cook et al., 2000).

Researchers have studied organizations that manage potentially hazardous technical operations remarkably successfully, and the empirical results also have been quite surprising (Rochlin, 1999). Success was not related to how these organizations avoided risks or reduced errors; rather, these high reliability organizations created safety by anticipating and planning for unexpected events and future surprises. These organizations did not take past success as a reason for confidence. Instead they continued to invest in anticipating the changing potential for failure because of the deeply held understanding that their knowledge base was fragile in the face of the hazards inherent in their work and the changes omnipresent in their environment. Safety for these organizations was not a commodity, but a value that required continuing reinforcement and investment. The learning activities at the heart of this process depended on open flow of information about the changing face of the potential for failure. "High reliability" organizations valued such information flow, used multiple methods to generate this information, and then used this information to guide constructive changes without waiting for accidents to occur.

Perhaps most startling in this work is the result that the source of failure was not those others who are less careful or motivated than we are. Instead, the process of investing in safety begins with each of us being willing to question our beliefs to learn surprising things about how we can contribute to the potential for failure in a changing and limited resource world.


Systems Issues for Research to Improve Safety

Search for Underlying Patterns to Gain Leverage

From past work, progress has come from going beyond the surface descriptions (the phenotypes of failures) to discover underlying patterns of systemic factors (genotypical patterns). Genotypes are patterns about how people, teams, and organizations coordinate activities, information, and problem solving to cope with the complexities of problems that arise (Hollnagel, 1993).

The surface characteristics of a near miss or adverse event are unique to a particular setting and people. Genotypical patterns re-appear in many specific situations. Research in Human Factors has revealed a wealth of genotypical patterns. For example:

  • Garden path problems and the potential to fixate on one point of view or hypothesis in problem solving (De Keyser and Woods, 1990).
  • Missing side effects of an action or change to a plan in highly coupled systems (Rasmussen, 1986).
  • Hindsight bias from knowledge of outcome.
  • Alarm overload and high false alarm rates leading to missed or ignored warnings (Stanton, 1991).
  • Mode errors in computerized devices with multiple modes and poor feedback about device state (a minimal sketch of this pattern follows this list).
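To make the last pattern concrete, here is a minimal, hypothetical sketch of how a mode error arises when the same key presses are interpreted differently depending on a hidden device mode. The device, modes, and values are invented for illustration and are not drawn from any actual infusion pump.

```python
# A hypothetical sketch of a mode error: identical keystrokes land in
# different settings depending on a mode the display does not show.

class InfusionPumpSketch:
    def __init__(self):
        self.mode = "rate_ml_per_hr"   # current entry mode, not shown on the display
        self.settings = {"rate_ml_per_hr": 0.0, "volume_ml": 0.0}

    def press_mode_key(self):
        # Toggles the entry mode silently -- the classic setup for a mode error.
        self.mode = "volume_ml" if self.mode == "rate_ml_per_hr" else "rate_ml_per_hr"

    def enter_number(self, value: float):
        # The keystrokes are identical; only the hidden mode decides whether
        # this becomes an infusion rate or a total volume.
        self.settings[self.mode] = value

pump = InfusionPumpSketch()
pump.press_mode_key()        # user does not notice the mode change
pump.enter_number(100.0)     # intended as 100 mL/hr, lands as 100 mL total volume
print(pump.settings)         # {'rate_ml_per_hr': 0.0, 'volume_ml': 100.0}
```

Making the active mode, and the consequence of the current entry, visible at the point of entry is the kind of feedback this pattern points toward.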

A great deal of leverage for improvements is gained by identifying the genotypical patterns at work in a particular situation of interest. For example, we can sample the kinds of difficult situations that can occur in a health care setting and recognize the presence of garden path problems (e.g., in anesthetic management; Gaba, 1987). We may review a corpus of near misses and note that in several cases a practitioner became fixated on one view of the situation (Cook et al., 1989). Or we may analyze how people handle simulated problems and see the potential for fixating in certain situations (e.g., as has occurred in crisis training via anesthesia simulators; Howard et al., 1992).

Previous work on aiding human and team situation assessment can now seed and guide the development of interventions. To overcome fixation in a garden path problem, one can bring to bear techniques that may break up frozen mindsets (such as new kinds of pattern-based displays or new team structures that help broaden the issues under consideration; De Keyser and Woods, 1990).

Each of the genotypes listed above was identified and studied in aerospace and process control settings, but they all also play out in multiple health care settings:

  • Fixation as a danger in anesthetic management during surgery.
  • Missed warnings due to high nuisance and false alarm rates in intensive care units (the base-rate arithmetic behind this pattern is sketched after this list).
  • Mode errors in computerized infusion devices.
  • Hindsight bias in incident review teams.
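One way to see why high false alarm rates lead to missed or ignored warnings is a short base-rate calculation. The numbers below are assumptions chosen only to illustrate the effect; they are not measurements from any intensive care unit.

```python
# Hypothetical numbers chosen only to illustrate the base-rate effect
# behind alarm overload.

base_rate = 0.01         # fraction of monitored intervals with a true event
sensitivity = 0.99       # alarm fires when a true event is present
false_alarm_rate = 0.10  # alarm fires during an uneventful interval

p_alarm = sensitivity * base_rate + false_alarm_rate * (1 - base_rate)
p_true_given_alarm = (sensitivity * base_rate) / p_alarm

print(f"probability an alarm signals a real event: {p_true_given_alarm:.2f}")
# ~0.09: roughly nine out of ten alarms are false alarms, which is why
# practitioners come to discount them.
```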

This list is short and only exemplifies some of the results available to jump start research in health care settings.

Research on patient safety should be using and expanding the set of genotypical patterns related to breakdowns in human performance that occur in health care settings. Research should focus on developing and testing interventions to reduce these problems. In many cases, previous work has identified the interventions needed (e.g., for mode errors), which need to be transferred to particular medical devices and contexts of use. In other cases, seed ideas exist that need to be further developed.

Tame Complexity

In the final analysis, the enemy of safety is complexity. In nuclear power and aviation, we have learned at great cost that often it is the underlying complexity of operations that contributes to the human performance problems. Simplifying the operation of the system can do wonders to improve its reliability, by making it possible for the humans in the system to operate effectively. Often, we have found that proposals to improve systems founder when they increase the complexity of practice. Adding new complexity to already complex systems rarely helps and can often make things worse. This applies to system improvements justified on grounds of safety as well.

The search for simplicity of operation has a severe catch, however. The very nature of health care delivery and the environment in which it exists includes, creates, or exacerbates many forms of complexity. Success and progress occur through monitoring, managing, taming, and coping with the changing forms of complexity.

This has proven true particularly with respect to efforts to introduce new forms and levels of computerization. Improper computerization can simply exacerbate or create new forms of complexity to plague operations. The situation is complicated by the fact that new technology often has benefits at the same time that it creates new vulnerabilities.

Again, the science startles us. To help people in their various roles create safety, research needs to:

  • Search out the sources of complexity.
  • Understand the strategies people, teams and organizations use to cope with complexity.
  • Devise better ways to help people cope with complexity to achieve success.

This is one of the most basic lessons to come out of the New Look research on error.

Adopt Methods for User Centered Design of Information Technology

Virtually every Human Factors practitioner and researcher is appalled when examining the typical human interface of computer information systems and computerized devices in health care. What we take for granted as the least common denominator in user centered design and testing of computer systems in other high risk industries (and even in commercial software development houses that produce desktop educational and games software) seems to be far too rare in medical devices and computer systems. The devices are too complex and require too much training to use given typical workload pressures (e.g., Cook et al., 1992; Obradovich and Woods, 1996; Lin et al., 1998).

Computer displays, interfaces, and devices in health care exhibit "classic" human-computer interaction deficiencies. By "classic" we mean that we see these design "errors" in many devices in many settings of use, that these design problems are well understood (e.g., they appear in our textbooks and popular writings, e.g., Norman and Draper, 1986; Norman, 1988), and that the means to avoid these problems are readily available.

We are concerned that the calls for more use of integrated computerized information systems to reduce error could introduce new and predictable forms of error unless there is a significant investment in use-centered design (Winograd and Woods, 1998).

The concepts and methods for use centered design are available and are being used every day in software houses (Nielsen, 1993; Carroll and Rosson, 1992; Flach and Dominguez, 1995). Usability testing should be a standard, not an exceptional, part of product development practices. Health care delivery organizations also need to understand how they can use these techniques in their own testing processes and as informed consumers of computer information systems.
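As a rough sketch of what even a basic usability test yields, the fragment below aggregates the kind of task-level measures (completion, time on task, error counts) such testing collects. The task name and numbers are invented for illustration and do not come from any actual evaluation.

```python
# A minimal sketch of basic usability test measures; data are invented.
from statistics import mean

# Each record: (participant, task, completed, seconds, errors)
sessions = [
    ("P1", "program_infusion_rate", True,  95, 1),
    ("P2", "program_infusion_rate", False, 240, 4),
    ("P3", "program_infusion_rate", True,  130, 2),
]

completed = [s for s in sessions if s[2]]
completion_rate = len(completed) / len(sessions)

print(f"task completion rate: {completion_rate:.0%}")
print(f"mean time on task (completed attempts only): {mean(s[3] for s in completed):.0f} s")
print(f"mean errors per attempt: {mean(s[4] for s in sessions):.1f}")
```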

Building the partnerships, creating demonstration projects, and disseminating the techniques for health care organizations is a significant and rewarding investment to ensure we receive the benefits of computer technology while avoiding designs that induce new errors.

But there is much more to human-computer interaction than adopting basic techniques like usability testing. Much of the work in Human Factors concerns how to use the potential of computers to enhance expertise and performance. We will consider only a few of these issues here.

Study Human Expertise to Develop the Basis for Computerization

The key to skillful as opposed to clumsy use of technological possibilities lies in understanding both the factors that lead to expert performance and the factors that challenge expert performance (Feltovich, Ford, and Hoffman, 1997). Once we understand the factors that contribute to expertise and to breakdown, we then will understand how to use the powers of the computer to enhance expertise. This is an example of a more general rule—to understand failure and success, begin by understanding what makes some problems difficult.

The areas of research on human performance in medicine explored in the monograph A Tale of Two Stories (Cook, Woods and Miller, 1998) illustrate this process. In these cases, progress depended on investigations that identified the factors that made certain situations more difficult to handle and then explored the individual and team strategies used to handle these situations. As the researchers began to understand what made certain kinds of problems difficult, how expert strategies were tailored to these demands, and how other strategies were poor or brittle, new concepts were identified to support and broaden the application of successful strategies. In each of these cases, the introduction of new technology helped create new dilemmas and difficult judgments. On the other hand, once the basis for human expertise and the threats to that expertise had been studied, new technology was an important means to achieve enhanced performance.

We can achieve substantial gains by understanding the factors that lead to expert performance and the factors that challenge expert performance. This provides the basis to change the system, for example, through new computer support systems and other ways to enhance expertise in practice.

Make Machine Advisors and Automation Team Players

New levels of automation have had many effects in operational settings. There have been positive effects from both an economic and a safety point of view. Unfortunately, operational experience, research investigations, incidents, and occasionally accidents have shown that new and surprising problems have arisen as well. Computer agents can be brittle and only able to handle a portion of the situations that could arise. Breakdowns in the interaction between operators and computer-based automated systems can also contribute to near misses and failures in these complex work environments.

Over the years, Human Factors investigators have studied many of the "natural experiments" in human-automation cooperation—observing the consequences in cases where an organization or industry shifted levels and kinds of automation. One notable example has been the many studies of the surprising consequences of new levels and types of automation on the flight deck in commercial transport aircraft (Billings, 1996).

The overarching result from the research is that for automation concerned with information processing and decision making to be successful, the key requirement is to design for fluent, coordinated interaction between the human and machine elements of the system. In other words, automated and intelligent systems must be designed to be "team players" (Malin et al., 1991; Roth et al., 1997). When automated systems increase autonomy or authority of machines without new tools to support cooperation with people, we find automation surprises contributing to incidents and accidents (Sarter, Woods and Billings, 1997).

Human Factors research has abstracted many patterns and lessons about how to make automated systems team players. One example is that increased automation requires new forms of feedback and display to show human users what the automated agents are doing and what they will do next relative to the state of the process (Norman, 1990). People can be more willing to accept even poor advice when it comes from a computer (e.g., Layton et al., 1994). More successful designs reverse the relationship: instead of having people check the computer, critiquing software can be used relatively unobtrusively to remind, suggest, and broaden the factors considered by the human decision maker, improving performance even in cases where the computer is unable to generate a good solution on its own (Guerlain et al., 1999).
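A minimal sketch of this critiquing style is given below, assuming a hypothetical order-review function: the software comments on the human's proposed plan with non-blocking reminders rather than generating or enforcing a plan of its own. The drug names, rules, and thresholds are invented for illustration and are not clinical guidance.

```python
# Hypothetical sketch of a non-blocking "critiquing" check on a proposed
# medication order; all rules and thresholds are invented for illustration.

def critique_order(order: dict, patient: dict) -> list[str]:
    """Return advisory notes on a proposed order; the human decides, nothing is blocked."""
    notes = []
    if order["drug"] in patient.get("allergies", []):
        notes.append(f"Documented allergy to {order['drug']}; confirm intent.")
    if order["drug"] == "gentamicin" and patient.get("creatinine", 0) > 1.5:
        notes.append("Elevated creatinine on record; consider dose adjustment.")
    if order["dose_mg"] > 10 * order.get("usual_dose_mg", order["dose_mg"]):
        notes.append("Dose is more than 10x a usual dose; possible unit or decimal slip.")
    return notes

proposed = {"drug": "gentamicin", "dose_mg": 400, "usual_dose_mg": 350}
patient = {"allergies": [], "creatinine": 2.1}
for note in critique_order(proposed, patient):
    print("REMINDER:", note)
```

The design choice is that the reminders broaden what the decision maker considers without requiring the software itself to produce a good plan.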

Many of the developments in computer information systems across health care delivery systems (be they intended to enhance safety or productivity) include embedded forms of automation. Using the lessons from past research to guide the design of automated information processing systems will help avoid new paths to failure and increase the benefits to be obtained from these investments in new technology.

Invest in Collaborative Technologies

"When I order a medication, I think the patient gets the medication directly, but there are many other steps, computer systems, and hands that intervene in the process."

Physician

As health care delivery advances and also tries to become more efficient, it has become more distributed over multiple practitioners, groups, locations, and organizations. This change challenges us to coordinate care across these disparate players.

We often think that if these different players are connected through new information technology, effective coordination will follow automatically. The exploding field of computer supported cooperative work (CSCW) tells us that achieving high levels of coordination is a special form of expertise requiring significant investment in experience or practice (Grudin, 1994; Galegher et al., 1990). It also tells us that making cooperative work through a computer effective is a difficult challenge. For example, a critical part of effective collaboration is how it helps broaden deliberations and cross-checks judgments and actions to detect and recover from incipient failures. The design of the communication channel and the information exchanged can degrade the cross-check function or enhance it. A great deal of work is underway to identify which factors are important to support good collaborative work and to guide investment in technology (e.g., Nyssen and Javaux, 1996; Heath and Luff, 1992; Jones, 1995).

The field of CSCW is exploding because of the advances in the technology of connectivity, the Internet and telecommunications in general, and because of the potential benefits of being connected. As a result, designers need information about the basis for high levels of skill at coordinated work.

The new technologies of connectivity will transform the face of health care practices, and CSCW should become a core area of expertise, research and development in health care. Relationships between practitioners will change and new relationships will be introduced. The changes can be designed primarily to support efficiency, or they can be designed primarily to enhance safety. Side effects of these coming changes can also create new paths to failure while they block other paths. Research to understand and direct this wave of change to enhance safety will need to be a critical research priority in health care as it is in other high performance fields.

The development and impact of tele-medicine is one example of this process. Achieving continuity of care in the new world of computer connectivity across health care practitioners and providers is another.

Manage the Side Effects of Change

Health care systems exist in a changing world. The environment, organization, economics, capabilities, technology, and regulatory context all change over time. Even the current window of opportunity to improve patient safety is a mechanism for change. And new waves of change are beginning to swell up and move into play (e.g., tele-medicine). Waves of change are due in part to resource pressures and in part to new capabilities. Uncertainties created by these changes have given rise to the public pressure for improving patient safety.

This backdrop of continuous systemic change ensures that hazards and how they are managed are constantly changing. This is important because many of these changes could easily increase complexities of health care delivery. Again, increasing complexity often challenges safety and rarely produces safety benefits without some other investments.

The general lesson is that as capabilities, tools, organizations and economics change, vulnerabilities to failure change as well—some decay but new forms appear. The state of safety in any system is always dynamic, and stakeholder beliefs about safety and hazard also change. Progress on safety depends on anticipating how these kinds of changes will create new vulnerabilities and paths to failure even as they provide benefits on other scores.

For example, new computerization is often seen as a solution to human performance problems. Instead, consider potential new computerization as another source of change. Examine how this change will affect roles, judgments, coordination, and what makes problems difficult. This information will help reveal side effects of the change that could create new systemic vulnerabilities.

Armed with this knowledge we can address these new vulnerabilities at a time when intervention is less difficult and less expensive (because the system is already in the process of change). In addition, these points of change are opportunities to learn how the system actually functions and sometimes malfunctions.

Another reason to study change is that health care systems are under severe resource and performance pressures from stakeholders. First, change under these circumstances tends to increase coupling, that is, the interconnections between parts and activities, in order to achieve greater efficiency and productivity. However, research has found that increasing coupling also increases operational complexity and increases the difficulty of the problems practitioners can face. Second, when change is undertaken to improve systems under pressure, the benefits of change may be consumed in the form of increased productivity and efficiency and not in the form of a more resilient, robust and therefore safer system.

Thus, one linchpin of future success on safety is the ability to anticipate and assess the impact of change to forestall new paths to failure.

In addition, investments in safety are best timed to coincide with windows of opportunity where change is happening for other goals and reasons as well. A great deal of leverage may result from projects designed to show health care organizations how to take advantage of change points as windows of opportunity where they can re-think processes, work flow, and new modes of collaboration to reduce the potential for breakdowns.

The System of Health Care Delivery Is Changing to Include Patients in New Ways

Patients are becoming involved in their own care (or their family members' care) in new ways. Patients have new access to information so that they can take an active role in treatment decisions. Technology and other factors are shifting care from inpatient settings to home settings where patients become providers of their own treatment (e.g., Obradovich and Woods, 1996; Lin et al., 1998).

Patient self-managed treatment represents a large and growing part of the health care system. Human factors can support patient safety in self-managed treatment. Errors can occur when information and interfaces do not fit the patient's capacities, past experiences or the demands of daily life. Human factors can focus attention on underlying mechanisms, behavioral patterns, and contextual contributions and provide the methods and tools to design the support systems that match patient needs.

Studying Human Performance, Human-Machine Systems, Collaboration, and Organizational Dynamics Requires Methods Unfamiliar to Health Care

Ultimately, the study of human performance is in one sense or another the study of problem solving. Since its origins over one hundred years ago, the study of problem solving, whether by a person, a human-machine system, a distributed team, or an organization, has always been the study of the processes that lead up to outcomes: what is seen as a problem-to-be-solved, how to search for relevant data, anticipating future events, generating hypotheses, evaluating candidate hypotheses, and modifying plans to handle disruptions.

The base data is the process or the story of the particular episode—how multiple factors came together to produce that outcome (Klein et al., 1995; Klein, 1998; Dekker, in press). Patterns abstracted from these processes are aggregated, compared, and contrasted under different conditions—different problem demands (scenarios), different human-human and human-machine teams, different levels of expertise, different external tools.

The fields that study one or another type of problem solving have developed sophisticated methods tailored to meet the uncertainties of studying and modeling these processes. These methods are deeply foreign to medical research communities, but they are the lifeblood of coming to understand human performance in any complex setting, including health care. Ethnography (Hutchins, 1995), interaction analysis (Jordan and Henderson, 1995), protocol analysis (Ericsson and Simon, 1984), critical incident techniques (Flanagan, 1954; Klein, 1998), and work analysis (Vicente, 1999) are just a few of the techniques to be mastered by the student of human problem solving at work.

One critical resource for the study of problem solving is mechanisms to build or obtain access to simulation environments at different scopes and degrees of fidelity. Much of the progress in aviation safety has depended on researchers' access to full-scope training simulators to study issues such as effective human-human and human-machine cooperation (e.g., Layton et al., 1994). This has occurred through research simulators at NASA Ames and Langley Research Centers, through partnerships with pilot training centers (when research and training goals can be synchronized), and through the use of rapid prototyping tools to create simulation test beds. We have already begun to see how the availability of simulator resources in health care (notably for anesthesia and the operating room) can be a catalyst to learning about the factors that lead to effective or ineffective human performance (Howard et al., 1992; Nyssen and De Keyser, 1998; Guerlain et al., 1999).

Using these resources validly to understand human performance depends on special skills such as problem or scenario design (the design of the problems study participants attempt to solve) and on analysis techniques such as interaction and protocol analysis (De Keyser and Samercay, 1998). Health care research organizations will need to create, modify, and use simulation resources to provide critical pieces of evidence in the process of finding effective ways to improve safety for patients.

Technology Evaluation: Beyond V&V

Evaluating changes intended to improve some aspect of human performance is a difficult problem. Human Factors has worked with many industries to assess the impact of technology and other interventions (e.g., training systems) designed to aid human performance. Stakeholders have frequently asked us to give them a simple up or down result—does this particular system or technology help significantly or not? We refer to such studies as verification and validation evaluations or V&V.

Despite the surface appeal of such efforts and the desire to provide definitive answers to guide investments, V&V has proved to be a limited tool in other high risk domains. The short summary of the lessons is that such studies provide too little information, too late in the design process, at too great a cost.

They provide too little information in a variety of ways. There are multiple degrees of freedom in using new technology to design systems, but V&V studies are not able to tell developers how to use those degrees of freedom to create useful and usable systems. The problem in design today is not whether we can build it, but rather what would be useful to build given the wide array of possibilities new technology provides.

Measurement problems loom large because V&V studies usually try to capture overall outcomes. However, the systems are intended to influence aspects of the processes (human expertise, cooperative work, a culture of safety) that are important to outcomes in particular kinds of situations that could arise (Woods et al., 1995). As a result, global outcome measures tend to be insensitive to the operative factors in the processes of interest or wash out differences that are significant in restricted kinds of situations.
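A small, hypothetical calculation illustrates how this washing out happens when an improvement only matters in rare, difficult situations. The rates below are assumptions for the example, not data from any evaluation.

```python
# Illustrative arithmetic only: assumed rates showing how a real improvement
# confined to rare, difficult cases can vanish in a global outcome measure.

difficult_fraction = 0.02            # cases where the new system actually matters
adverse_rate_difficult = 0.20        # adverse event rate in those cases, before
adverse_rate_other = 0.01            # background rate everywhere else
improvement_in_difficult = 0.50      # the system halves harm in difficult cases

before = (difficult_fraction * adverse_rate_difficult
          + (1 - difficult_fraction) * adverse_rate_other)
after = (difficult_fraction * adverse_rate_difficult * (1 - improvement_in_difficult)
         + (1 - difficult_fraction) * adverse_rate_other)

print(f"overall adverse event rate before: {before:.4f}")  # 0.0138
print(f"overall adverse event rate after:  {after:.4f}")   # 0.0118
# A 50% improvement where it matters moves the global rate by only about
# 0.2 percentage points, which is easy to miss without very large samples
# or process measures focused on the difficult situations themselves.
```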

New systems and technology are not unidimensional, but multi-faceted, so that problems of credit assignment become overwhelming. Introducing new technology is not manipulating a single variable, but a change that reverberates throughout a system transforming judgments, roles, relationships, and weightings on different goals (Carroll and Rosson, 1992).1 This process, called the task-artifact cycle, creates the envisioned world problem for research and design (Dekker and Woods, 1999; Hoffman and Woods, 2000): how do the results of studies and analyses that characterize cognitive and cooperative activities in the current field of practice inform or apply to the design process, since the introduction of new technology will transform the nature of practice, what it means to be an expert, and the paths to failure? Health care specialists need only consider the case of laparoscopic surgery to see these processes play out (e.g., Dominguez et al., in press; Cook et al., 1998).

V&V studies occur too late in the design process, especially given their great cost, to provide useful input. By the time that the V&V results are available, the design process has committed to certain design concepts and implementation directions. These sunk costs make it extremely difficult to act on what is learned from evaluation studies late in the process.

The advent of rapid prototyping technology has revolutionized evaluation studies. While late V&V studies still have a role, the emphasis has shifted completely in many different work domains to early, generative techniques such as ethnography, envisioning techniques, and participatory design (Greenbaum and Kyng, 1991; Carroll and Rosson, 1992; Smith et al., 1998; Sanders, 2000). Health care needs to build on this experience and learn the use of these new techniques.


1 This is one way to view Human Factors as a field: the body of work that describes how technology and organizational change transform work in systems.



