
Summary of the Presentations (continued)

Expanding Research and Evaluation Designs to Improve the Science Base for Health Care and Public Health Quality Improvement Symposium

On September 13-15, 2005, AHRQ convened a meeting to examine public health quality improvement interventions.

Wednesday, September 14, 2005

Meeting Purposes and Process Revisited

Thomas Chapel, M.A., M.B.A.
Senior Health Scientist, Office of the Director, Office of Strategy and Innovation, Centers for Disease Control and Prevention
Denise Dougherty, Ph.D.
Senior Advisor, Office of Extramural Research, Education, and Priority Populations,
Agency for Healthcare Research and Quality

Dr. Dougherty identified the goals for the meeting:

  • To review a range of quality improvement interventions (QIIs) and the critical questions that arise in evaluating them, including basic questions of internal validity (does the intervention work in the setting and for the condition in which it was tested?) and external validity (will it work in other settings and conditions?).
  • To identify the strengths, weaknesses, and tradeoffs of alternative designs and methods for evaluating QIIs.
  • To achieve a working consensus about the range of traditional and innovative designs and methods that can be used to answer key QII questions.
  • To identify and suggest strategies to facilitate possible changes in funding mechanisms, review processes, research and publication standards, and research training that could help accelerate the development and spread of reliable QII research methods.

Dr. Dougherty asked the group to think about what it will take to develop more designs, to develop more information about designs, and to make a broader range of designs acceptable to the scientific community. Further, she asked participants to identify changes to broaden the field of evaluation design in terms of review processes, funding mechanisms, publication standards, research training, implementation, and designs and methods themselves.

It is difficult to find a standard definition for QI, especially when considering both health care and public health, and a range of quality improvement (QI) strategies exists. The criteria for the definition chosen by the core planning group are that the QI strategies are implemented in "real-world" settings; are used to expand the delivery, reach, and impact of evidence-based interventions at the population level; and include health care and public health interventions. Some examples are policy, organization, and system changes; practice design; and strategic linkages to community programs and policies. The breakout sessions scheduled throughout the two days of the symposium were designed to enable a variety of professionals with different backgrounds and specialties to discuss the challenges facing QI designs and ways to improve the science base.

The overarching goal is moving forward to improve the science base for quality improvement interventions and evaluation in the spirit of a quote from Richard Grol:

The challenge for the years to come is to design strategies for quality improvement that ... step from anecdotal evidence for those strategies to systematic evaluation in order to distinguish between faith and fact in the field of improving care.


Session I. Case Studies: QII at the Clinical Microsystem Level

QII to Increase Delivery of Clinical Preventive Services in Office-Based Primary Care

Lawrence Fine, M.D., Dr.P.H., Chair/Moderator
Leader, Scientific Research Group on Clinical Prevention and Translation, National Heart, Lung, and Blood Institute, National Institutes of Health

QII in the Practice Partner Research Network: Group Randomized Trials and Other Designs

Steven Ornstein, M.D.
Professor of Family Medicine and Director, Practice Partner Research Network, Medical University of South Carolina

Dr. Ornstein presented two Practice Partner Research Network (PPRNet) studies: the Translating Research Into Practice (TRIP II) study, a group-randomized trial, and the Accelerating TRIP in a Practice-Based Research Network (PBRN) project (A-TRIP), a time-series study currently underway. Dr. Ornstein briefly presented the results and then discussed the strengths and weaknesses of each project and lessons learned that might apply to other QI studies.

PPRNet is a practice-based learning and research organization designed to improve health care first in its member practices and then throughout the United States. PPRNet comprises interested users of the Physician Micro Systems, Inc., Practice Partner Patient Record, an electronic medical record (EMR); consultants and collaborators; and research offices at the Medical University of South Carolina, Charleston, South Carolina. The organization includes 101 practices and 502 clinicians in 37 States.

Data are collected and entered into EMRs at sites that use the PPRNet EMR system. Data are extracted and sent to the vendor, who in turn submits the completed data to Dr. Ornstein's group, which then develops practice reports. The motto of PPRNet is "to blur the distinction between quality improvement and research": to the practices, most of which are not especially interested in research, it acts as a quality improvement organization, while to funders it is a research organization.

Translating Research Into Practice (TRIP) II: Primary and Secondary Prevention of Cardiovascular Disease and Stroke in Small Primary Care Practices. This group-randomized trial was funded by AHRQ and the results were published in Annals of Internal Medicine (2004;141:523-32). The study ran for two years in 20 non-academic family practice and internal medicine practices and included 87,291 patients. The 10 control group practices received quarterly practice-level performance reports on 21 indicators of care. These indicators were based on guidelines from the Joint National Committee on Prevention, Detection, Evaluation, and Treatment of High Blood Pressure VI; the National Cholesterol Education Program Expert Panel on Detection, Evaluation, and Treatment of High Blood Cholesterol in Adults; the American College of Cardiology/American Heart Association; and the American Diabetes Association. Each practice had an EMR, and each organized its approach to improvement on its own.

Practices randomized to the intervention group received quarterly practice performance reports and six to seven practice site visits to obtain descriptive data and to facilitate QI via the use of participatory planning, EMR tools, complexity science approaches, and best practices. They also participated in two network meetings in Charleston to share best practices approaches. Quantitatively, practice-level analyses showed improvement in both intervention and control group practices in the percentage of clinical targets reached, although the intervention group had greater improvement than the controls in 18 of 21 indicators. The patient-level analyses showed that there was improvement in intervention and control subjects, but improvement in the intervention group was greater than that in the control group for only 2 of the 21 measures. The qualitative analyses showed that the most successful practices made quality a priority, involved the entire staff in the effort, redesigned the office, made efforts in patient activation, and used their EMR.

Several challenges had to be faced in this study, particularly in the area of QI "buy-in." Some providers in larger practices did not believe in the study for various reasons, such as not believing the information in the practice performance reports, or having competing demands, or having a perceived lack of self-efficacy. The response of the investigators was to focus on the more amenable members of a practice and have them model change for others. This had varying levels of success.

In addition, they found their initial concept for the practice visits was misguided. They had thought that teaching providers the practice guidelines and how to use their EMRs correctly would be a panacea. However, most physicians already knew the guidelines and wanted to use their EMRs in their own idiosyncratic ways. The research team's response was to refocus the site visits on encouraging QI approaches at the microsystem level, meaning that they helped practices in the way that the practices wanted to be helped.

In planning the study, Dr. Ornstein's group had not recognized the importance of non-provider staff to the organizations studied. As they implemented the study, the team's response was to encourage non-provider staff to play a greater role in participatory planning and implementation, as well as to participate in a second network meeting.

The lessons and conclusions from the TRIP II study were:

  1. Clinicians will volunteer to participate in these activities, particularly those who have EMR systems that facilitate QI reporting and interventions.
  2. Clinicians will participate in interventions that they deem beneficial to their patients.
  3. The EMR is not a panacea, and a more robust quality improvement intervention model is needed.
  4. Simply giving practices the information (academic detailing) and the tool (EMR) is insufficient.
  5. "One size does not fit all": a) intervention approaches and emphases have to be customized at the microsystem level; and b) study sections accustomed to specific protocols that require rigid adherence may need to appreciate that customization is needed.

Accelerating TRIP in a Practice-Based Research Network (PBRN) project (A-TRIP). A-TRIP is an effort to enhance PPRNet practice performance reports to include approximately 80 indicators in 8 discrete clinical areas. This study, an AHRQ-funded expansion of the TRIP II study, is a demonstration project with descriptive and time-series evaluation components. The project involves more than 100 non-academic family practice and internal medicine practices and will include approximately 500,000 patients. The intervention methods are practice performance reports, practice site visits every six months for practices that want them, and participation in network meetings for practices that want to attend. The investigators are examining a broad spectrum of clinical indicators, such as 13 related to diabetes mellitus and 21 related to heart disease and stroke. In their practice performance reports, they use comparisons with national benchmarks as well as statistical process control methodology to let practices know when they make an improvement. The study is ongoing and the results are preliminary, but there has been greater than expected participation in practice site visits and less than expected participation at network meetings. Preliminary data on practice-level improvement are encouraging.
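The statistical process control methodology mentioned above can be illustrated with a minimal p-chart sketch. This is not the A-TRIP reporting code; the function names, baseline rate, and patient counts below are invented for illustration only.

```python
# Hypothetical sketch of statistical process control for a quality indicator:
# a simple p-chart flags quarters whose indicator rate falls outside
# 3-sigma control limits computed from a baseline proportion.
import math

def p_chart_limits(p_baseline, n):
    """3-sigma control limits for a proportion observed on n patients."""
    sigma = math.sqrt(p_baseline * (1 - p_baseline) / n)
    lower = max(0.0, p_baseline - 3 * sigma)
    upper = min(1.0, p_baseline + 3 * sigma)
    return lower, upper

def flag_special_cause(quarterly_rates, p_baseline, n_per_quarter):
    """Return indices of quarters showing special-cause variation."""
    lower, upper = p_chart_limits(p_baseline, n_per_quarter)
    return [i for i, p in enumerate(quarterly_rates)
            if p < lower or p > upper]

# Illustrative numbers: baseline 60% adherence, 200 eligible patients/quarter.
flags = flag_special_cause([0.61, 0.59, 0.72, 0.74], 0.60, 200)
```

Under these made-up numbers, the last two quarters exceed the upper control limit, which is the kind of signal a practice report could use to tell a practice it has made a real improvement rather than exhibiting routine variation.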

A-TRIP challenges and resolutions include:

  1. Participants wanted patient-level reports in addition to practice-level reports to better identify those in need of specific interventions.
  2. Annual practice attrition of approximately 10% will complicate the analyses; recruitment of new practices has been ongoing, and "duration of exposure" will be included as a variable in the analyses.
  3. Lack of any control group may compromise the validity of the findings. The hope is that looking at a broad range of indicators and for a large effect size will be considered sufficient evidence of an effect.

A-TRIP lessons to date include:

  1. Physicians will participate in such a project a) when they receive a tangible benefit, such as free practice reports or continuing medical education (CME) credit; b) when they believe the project is in the best interest of their patients; and c) when they can titrate their level of involvement to suit their needs or level of interest.
  2. Non-provider staff are key to project success. One challenge is that their training is variable; they need proper supervision, focused training, and inclusion as respected team members in QI planning activities.

Dr. Ornstein's recommendations for future work were that:

  1. Practice leaders, either physicians or office managers, need to be developed so as to better incorporate these individuals in QIIs. Research is needed on the best approaches for this.
  2. In addition to clinical outcome measures, studies need to look for the behavioral/organizational changes that take place as a result of different intervention strategies so approaches can be tailored efficiently to the needs of specific practices.

A Multimethod Tailored QII for Sustainable Practice Change

Mary Ruhe, R.N., B.S.
Project Coordinator for the Enhancing Practice Outcome through Communities and Healthcare Systems (EPOCHS) Project, Department of Family Medicine, Research Division, Case Western Reserve University

& Kurt Stange, M.D., Ph.D.
Professor, Family Medicine, Epidemiology and Biostatistics, Oncology, and Sociology, Case Western Reserve University

Ms. Ruhe focused on the Study to Enhance Prevention by Understanding Practice (STEP-UP)14-16 series of interventions, part of a line of inquiry in collaborating practice-based research networks (PBRNs). This group's collaboration is based on the premise that understanding practice is important to successful intervention efforts. Two conditions, pursuing a line of inquiry and having collaboration among PBRNs, offer a fertile ground for conducting QIIs. The line of inquiry started with observational studies, moved to intervention studies, and returned to observational studies. Their studies have been funded by AHRQ, the National Cancer Institute (NCI), National Heart, Lung, and Blood Institute (NHLBI), and the Robert Wood Johnson Foundation.

The theoretical framework for these studies is the competing demands/competing opportunities theory, which states that many worthwhile services compete for time on the agenda of primary care patient visits, and that when primary care clinicians are not performing an activity under scrutiny, they may be doing something else that is more compelling. Creating change in the primary care setting may be enhanced or inhibited by the charge to primary care practices of offering integrated, prioritized care within an ongoing personal relationship with patients. The basic premise of this research is that understanding practice is a process involving learning before, during, and after an intervention.

The STEP-UP study was a group-randomized trial in 77 practices that involved individualized interventions based on the multimethod assessment process (MAP); control practices received a refined, delayed intervention with pre/post evaluation. MAP involves observation of practice operations and patient visits, key informant interviews, and practice genograms, tools that depict the structure of and relationships within primary care practices. Understanding practices includes understanding key stakeholders and their motivations, current approaches to preventive service delivery, and the levers for change and the practice's capacity to change. The intervention was tailored to each practice, and feedback on rates of preventive services delivery was given to each practice.

The rate of improvement was variable across practices, ranging from 31 to 43% improvement in preventive services delivery rates. Improvements were sustained during a 24-month follow-up period. The most substantial variability was in health habit counseling. Thus, the study demonstrated that a tailored QII can have a variable but sustained effectiveness, even in a changing health care environment. However, greater individualization of intervention approaches, based on a greater understanding of practice variation, is needed.

STEP-2 was a refined QII among the STEP-UP control practices that did not show an increase in preventive services delivery. Their stasis was attributed to the practices having been given more choice in where to focus their attention.

The lessons learned from the STEP-UP studies led to the group's newest study, EPOCHS, which stands for Enhancing Practice Outcomes through Communities and Healthcare Systems. This study is a randomized controlled trial (RCT) of 30 primary care practices in three systems and includes engagement of resources from the practices, the health care system, and community organizations.

The bottom line throughout the entire line of inquiry was the need to understand practices as complex adaptive systems, in which complex behavior emerges from relationships among agents, simple rules and recurrent patterns exist, initial conditions are important, and co-evolution of the organization is non-linear. Grounded in this framework, a reflective quality improvement process offers insights into the practice change process. For example, understanding a practice's vision and mission is useful in guiding change. Tension and discomfort are essential and normal during change.

Ms. Ruhe indicated that some of the lessons learned were that:

  1. Change is difficult to predict, but a practice being at the "edge of chaos," meaning stressed but not set in its ways, facilitates practice change.
  2. It is important to tailor facilitation over time to optimize emergent opportunities and malleable moments of readiness to change.
  3. Motivated change agents are important. Once motivation exists, instrumental needs can be addressed.

In summary, efforts to improve practice should be preceded by efforts to understand practice. Primary care practices are complex adaptive systems. The implications of a complexity science perspective are that relationships are critical, learning is more important than knowing, and problems cannot be solved by muscle, but instead require creativity and improvisation.

Comments: Rigor and Relevance

Kurt Stange, M.D., Ph.D.
Professor, Family Medicine, Epidemiology and Biostatistics, Oncology, and Sociology, Case Western Reserve University

Dr. Stange commented on: 1) Dr. Ornstein's and Ms. Ruhe's presentations; 2) how we think about the problem of rigor versus relevance; 3) how we approach working on the issues of quality improvement; and 4) the problem of how we synthesize knowledge.

Although the papers by Ornstein and Ruhe represent the work of two independent research groups, they have a lot of commonalities in their approach, likely because there are common stimuli that have led them to similarities in how they approach their interventions and their evaluations. Both presentations worked with real-world primary care practices that are characteristic of the industry. The business and reimbursement model for primary care practice does not match QI intervention goals. To keep afloat financially, most primary care visits need to be kept to less than 10 minutes and most practices have had to reduce their number of qualified staff to maintain the bottom line. There is a mismatch between the staff available and the complexity of the competing demands for optimizing care.

Both of these studies worked in practice-based research networks. Whereas most quality improvement work aims to reduce variation, the work presented here was done in a manner that is designed to promote desirable variability as well as reduce variation in the delivery of evidence-based services. The desirable variability reflects local adaptation.

The research was done in a peer review and funding environment that emphasizes the RCT and single disease foci. In contrast, the intervention approaches that both presentations described were multifaceted, emphasizing multiple processes and tools, and addressing a diverse set of outcomes.

How should we optimize this package? In light of the competing demands theory described by Ms. Ruhe, it is important to look broadly so that improvement in one area is not occurring at the expense of a deficit in another. We need to think about the commonalities of what the systems were trying to optimize when trying to improve multiple outcomes. Both presentations addressed the practice level, but with a direction to begin to include elements of the health care system level and community level. Both studies individualized shared best practices via outside facilitation and consultation, and also with mechanisms to facilitate shared learning within a practice and across practices using complexity science principles. Both presentations described evaluations with mixed methods designs which included concurrent qualitative evaluation to understand the process of change and to understand individuals' learning processes. This information fed back into the intervention.

Dr. Stange's understanding of the need for the conference is that we feel some angst because our theory and methods and worldview do not match the problems we are addressing. The fundamental problem is that our ways of thinking, our ways of knowing, and our methods are good at isolating a phenomenon from its context. Yet, we believe that context matters. There are four different ways of "knowing" about health and health care17-18:

[Table: the four ways of knowing cross two dimensions, Inner Reality versus Outer Reality, at the individual and collective levels.]
When applied to the health care system, individual outer reality would be the study of disease and treatment, and collective outer reality is about systems, as in health services research.

When we do research we tend to focus on one way of knowing at a time, but we need to remember that other ways of knowing are always there. There also is a need to synthesize these ways of knowing (Figure 2).

One way of honoring these different ways of knowing is to readily acknowledge when we plan interventions and when we interpret the findings that there are different perspectives.

It is important to consider other ways of knowing even when we are working within one way of knowing. One way is to use transdisciplinary approaches in whole systems that involve collaboration. Thomas20-21 described QI as needing both "top down" and "bottom up" leadership. "Top down" leadership involves working with systems and addressing power structures. "Bottom up" leadership means involving the perspective of people on the front lines. To do whole system collaboration in QIIs, Dr. Stange suggested using three different forms of collaboration:

  • Multidisciplinary—Multiple disciplines contribute their individual piece to solving the problem. Multiple experts can do this through an edited book or separate presentations.
  • Interdisciplinary—Interdisciplinary research can focus on a conversation between and among disciplines, with both working together to solve a common problem.
  • Transdisciplinary—Transdisciplinary research is a sustained conversation across and beyond disciplinary boundaries that creates a new, shared language.

It is critical to think about where transdisciplinary teams should be developed. Bringing together research and development would help overcome problems currently faced in translating research into practice. Three models mentioned for research and development and QI are: integrated health care systems, where one can look at the population of enrollees all at once (such as the Health Maintenance Organization Research Network); the National Institutes of Health (NIH) Research Center model; and PBRNs. PBRNs are affiliated practices that are primarily devoted to patient care and that engage front-line wisdom to develop questions, gather data, and interpret and implement the findings. These networks have more generalizable patient populations than the typical settings for research. Community-based participatory research (CBPR) is parallel to PBRNs and has three characteristics: collaboration, mutual education, and acting on results that are relevant to the community.

Fostering a multimethod approach that integrates quantitative and qualitative methods is the way to move the field forward by allowing us to understand the meaning of a study and what its generalizable lessons are. The strength and weakness of quantitative methods is that they isolate a phenomenon from its context. Qualitative methods are good for helping one understand context.

A complexity science perspective makes us think about the relationships among agents in a system. Simple rules can be used to describe the components of complex behavior. Different parts of a system co-evolve, meaning that if we are studying practices, we need to look at the communities in which they exist because the systems will co-evolve. We need to look at where a system started before we intervene.

Greenhalgh22 states that the next generation of intervention research needs to be theory-driven rather than driven by thoughts about how to disseminate a particular package. It also is important to look at ecological context while pursuing a multidisciplinary and participatory approach.

In conclusion, Dr. Stange asked, "How do we get started?" He suggested these approaches:

  • We need to work at multiple levels of a system.
  • We need to consider different ways of knowing.
  • We need to pursue development alongside research in QI.
  • We need to foster shared learning among participants in research.
  • We need to develop participatory relationships that transcend single projects.

Dr. Stange recommended that quality improvement be pursued not as single projects but as lines of inquiry. Thus, we need to integrate quantitative and qualitative research either sequentially or simultaneously. With incremental approaches, we are just part of the problem, enabling the current dysfunctional system. Instead, we should address how we can transform the system.


Discussants noted that the emphasis was on learning how we are going to learn rather than on studying the strength of the interventions. QII research to date suggests that the design of the intervention is where things are lacking. Do the constraints of the study design result in weaker interventions? In response to a comment that there is a lack of science on the intervention side, compared with a lot of theory, but a robust use of science on the evaluation side, Dr. Stange agreed that methods could be restrictive and problematic, but noted that robust interventions build in iterative review cycles. He countered that a great deal of science is in fact used on the intervention side.

Another participant asked whether there would be a model self-sustaining microsystem, and how people would be directed to get there. Dr. Ornstein noted that some practices need to hear information only once and then can implement it, while other practices fail to make changes even after repeated exposure to an intervention.

QII to Increase Timely Delivery of Surfactant to High-Risk Newborns During Hospital Labor and Delivery

David Atkins, M.D., M.P.H. (Chair/Moderator)
Chief Medical Officer, Center for Outcomes and Evidence, Agency for Healthcare Research and Quality

QII Case Study: Surfactant Use in Preterm Infants

Gautham Suresh, M.D., D.M., M.S.
Assistant Professor, Division of Pediatrics, Medical University of South Carolina

& Laura Leviton, Ph.D.
Senior Program Officer, Department of Research and Evaluation, Robert Wood Johnson Foundation

Drs. Suresh and Leviton presented the findings of a cluster-randomized trial on the timing of surfactant use in preterm infants for which the principal investigator was Dr. Jeffrey Horbar.23 This study was funded by AHRQ and the results were published in BMJ.

Prematurity is a common problem among infants, and respiratory distress syndrome (RDS), which results primarily from a lack of endogenous surfactant, is one of its most common morbidities. The development of exogenous surfactant therapy to treat RDS was a significant advance in the field, leading to decreases in mortality and morbidity. Randomized trials have shown that surfactant therapy is best used on a prophylactic basis within the first two hours after birth, before the infant becomes too sick. According to studies by the Vermont Oxford Network, only 19% of premature infants received the first dose of surfactant less than 15 minutes after birth, and 27% received surfactant more than two hours after birth.

The trial was conducted in the Vermont Oxford Network, a network of 500 neonatal intensive care units in North America. The mission of the network is to improve the quality and safety of care for newborn infants and their families through a coordinated program of research on improvement. The network has an ongoing process of data collection, and no additional data collection was needed for the trial. The network provides its units with quarterly and annual outcome reports. A multidisciplinary team of neonatologists, outcomes researchers, statisticians, practice improvement experts, evaluation experts, and behavioral scientists worked together to prepare for the trial. Before the trial, focus groups met to refine and customize the intervention's design. These groups used the PRECEDE framework and included neonatal practitioners not affiliated with the network. The focus groups' input helped refine the intervention.

Out of 355 eligible hospitals, 114 were enrolled and split into 57 intervention and 57 control sites. The network provided good baseline data at the site and individual patient levels. The chief component of the intervention was a 2-day workshop in which multidisciplinary teams (physicians, nurses, and respiratory therapists from each hospital) were taught the principles of evidence-based medicine, the effectiveness of surfactant and the importance of early administration, and the principles of quality improvement. The teams were not told what to do. Instead, they received data about their hospitals' performance in comparison with the body of evidence and data on other hospitals in the network. Teams were told, "you decide what to do" in terms of interventions. The teams were asked to set aims and refine them after conferring with others at their hospitals. Ongoing support was provided through a listserv and periodic conference calls.

Dr. Leviton noted that there is a wealth of information about the ecology of neonatal intensive care units and their logistics, but that the focus here was on outcomes. A cluster randomized design, known in other fields as a "place-based randomized experiment," was used, in which neonatal intensive care units (NICUs) were randomized to treatment or control. The units for measurement and analysis are nested within other units. The researchers chose the infant and NICU levels for direct study because the NICU staff essentially treats infants as a team, so individual practitioners were less relevant for direct study. It was important to study the infants directly because infant-level data reflect practitioners' decisions about individual infants. Referral systems were studied indirectly because many infants were born in referring hospitals and transported to tertiary care centers. Because the researchers expected differences between in-born and out-born infants, they prospectively planned analyses of all infants combined and separate analyses for in-born and out-born infants.
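The cluster (place-based) randomization described above can be sketched as follows. This is an illustrative assignment routine, not the trial's actual procedure; the hospital identifiers and the random seed are made up.

```python
# Illustrative sketch of cluster randomization: whole NICUs, not
# individual infants, are assigned to study arms.
import random

def randomize_clusters(units, seed=0):
    """Split units evenly into intervention and control arms."""
    rng = random.Random(seed)          # seeded for reproducibility
    shuffled = units[:]                # leave the input list intact
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return {"intervention": sorted(shuffled[:half]),
            "control": sorted(shuffled[half:])}

# 114 enrolled hospitals, as in the trial; IDs are hypothetical.
nicus = [f"NICU-{i:03d}" for i in range(114)]
arms = randomize_clusters(nicus, seed=42)
```

The design consequence is that every infant in a given NICU shares that NICU's assignment, which is why outcomes are correlated within clusters and the analysis must account for the nesting.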

The proportion of infants of 23-29 weeks' gestation receiving surfactant in the delivery room was significantly higher in the intervention group (55%) than in the control group (18%). Dr. Leviton contrasted the effect size found here, a 37-percentage-point difference, with those cited in a review of multifaceted QI interventions, which were in the range of a 2-9% difference. Thus, the effect size seen in this study was much greater than is typically seen in multifaceted QI interventions. Treatment subjects were less likely than controls to have surfactant administered more than two hours after birth. However, no differences in infant mortality or pneumothorax were found between treatment and control infants. There could be many reasons for this, but the most plausible is a lack of statistical power. As often happens in prevention trials, the world had changed: by the time this study began, corticosteroid therapy for women in premature labor, which matures the organs of the fetus, had become prevalent and is a possible explanation for some of the findings.
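To see how large the reported difference is, a naive two-proportion comparison can be sketched. The trial's published analysis accounted for clustering by NICU, which this sketch deliberately ignores, and the patient counts below are invented only to match the reported 55% and 18% proportions.

```python
# Naive two-proportion z-statistic illustrating the size of the reported
# difference (55% vs. 18% delivery-room surfactant). A real cluster-trial
# analysis would inflate the standard error for within-NICU correlation.
import math

def two_proportion_z(x1, n1, x2, n2):
    """z-statistic for the difference between two independent proportions."""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Invented counts reproducing the reported proportions.
z = two_proportion_z(550, 1000, 180, 1000)
diff = 550 / 1000 - 180 / 1000  # 0.37, i.e. 37 percentage points
```

Even with the clustering caveat, a gap this wide dwarfs the 2-9% differences cited for typical multifaceted QI interventions, which is the point Dr. Leviton was making.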

"The road not taken" in methods choices precluded other things being done. They chose to "go broad" for a causal test of an organization-level intervention rather than to "go deep" to understand the intervention within organizations. Basically, this was a resource-allocation decision. When deciding whether to go broad or go deep, one must consider whether one is looking for causal information. If you do not have a mature intervention, it does not make sense to perform a causal study. The consequence of "the road not taken" is that the research team did not have enough data to examine the mechanisms by which these improvements occurred.

The team has some attitudinal data for the participants showing a high level of endorsement of the need for a practice change. The researchers would like to better characterize and classify the rapid-cycle improvement efforts that the NICUs undertook, and to learn why the evidence was persuasive to the workshop participants and how they made their commitment to change. They would also like to investigate whether features of the NICU environment are associated with the degree of change in the institutions.

The take-away message from this study is not that "they had everything going for them (meaning a large sample size and a receptive audience) and that is why they could do a cluster randomized trial." Rather, symposium participants should take away that one must make deliberate choices in setting up studies. Researchers should use the most rigorous methods they can, when it is feasible and when a causal question is being asked; one would not want to do a randomized trial prematurely. Two key points from this presentation are:

  • Research networks help and can keep costs down.
  • Strong cases can be made for group randomized designs, and they can provide information about the organizations of interest.


A participant asked 1) why the presentations did not mention the economic implications of their interventions, and 2) how their process evaluation was conceptualized, noting that the idea of mediators and moderators is important when considering the causal pathways by which interventions achieve their effects. Concerning economic implications, Dr. Suresh responded that the grant was for $1.3 million over a period of 3 years. The dilemma for effectiveness was that clinical outcomes did not improve; the main improvements were in process measures, such as the decreased time from birth to administration of surfactant. Dr. Leviton added that Canadian colleagues had to use surfactant judiciously because it is very expensive. Concerning mediators and moderators, the process variables collected included observation of workshop participants and their deliberations, their stated aims, content analysis of recordings of conference calls, and review of listserv discussions of problems. All of this material can be characterized, as can hospital characteristics. A secondary analysis is underway to see whether this information has any explanatory power.

After a basic cost analysis, Dr. Ornstein's group found that costs were lower in the intervention group than in the controls, a difference thought to reflect efficiencies and new processes resulting from the intervention. Although this cost analysis was primitive, it highlighted some interesting findings. Dr. Stange responded that inductive analyses of the qualitative data led to some interesting findings about mediators and moderators. For example, some of the practices that seemed like failures had made substantial improvements, but not in the areas for which outcome measures were recorded.

A participant remarked that it is unclear whether we know which questions need to be answered for the research to be useful in the field. Do participants know enough to show that some QIIs work without knowing what is in the "black box"? He also asked whether we know enough about the substance of the changes and which changes in process have been put in place, or whether we need to learn what processes facilitate change. These are different kinds of questions. Do we need to be conducting research on questions such as how to motivate leadership or how to work with non-physician staff? Dr. Leviton replied that the dilemma described is between resources allocated to addressing a causal question and resources allocated to understanding what is produced when an intervention is implemented. Because resources for research are limited, we must decide how to allocate them between an appropriate causal test and methods aimed at understanding. She regrets that the team could not do more to understand the molar construct, or package of intervention, that was delivered, but she would make the same resource-allocation choice again.

A participant asked if information is available about the spread of the study's findings and if enough is known about what worked to persuade others to examine the results at their respective institutions. Dr. Suresh's group has not studied the spread of the intervention to determine who has applied the findings.

A participant noted that these projects have looked at small microsystems and individual practices that are relatively independent. We need to improve our models of QI involving larger organizations and address how we deal with them.

A participant offered that even when individuals report on the quantitative piece, other things occur that are not being reported and not being discussed because of a lack of training or terminology for some items. The problem for trying to generalize this work is that we are not learning as much as we should from other people because some parts of the process do not get described in publications.

A participant asked whether the conference organizers are interested only in quality improvement efforts conducted by outside investigators, or whether they, and participants, are also interested in developing a model for an organization that wants to make a change on its own. Another way to ask the question is: are participants interested in studying change efforts when they (the researchers) do not control the intervention? One would want to create a model for how organizations go about making such changes and how they set priorities. This would mean conducting natural experiments, which are abundant but typically go unstudied.

Dr. Atkins responded that the group is interested in finding innovative ways to make those types of changes. Francis Chesley, Jr., M.D., Director, OEREP, AHRQ, said the Agency is always interested in taking advantage of and funding natural experiments; however, the current research funding cycle does not allow it to move quickly enough. In other words, the 7-9 month cycle of peer review makes it difficult to get the timing right to examine natural experiments.

Peter Briss, M.D., M.P.H., Chief, Community Guide Branch, Centers for Disease Control and Prevention (CDC), said the planning committee is interested in real-life experiences, and the California tobacco presentation on September 15 will present a longer-term look at a series of natural experiments, with less emphasis on the single-study approach. Joseph Francis, Jr., M.D., M.P.H., Associate Director, Office of Research and Development, Department of Veterans Affairs (VA), indicated that the VA has an intramural research program and is interested in this type of research. The VA has a variety of rapid-response funding mechanisms to take advantage of naturalistic events and other current "hot topics." However, even with rapid-cycle funding, other barriers, such as delays in institutional review board (IRB) approval or the hiring of staff, often preclude fully exploiting the "natural experiments" that occur within health care systems.

Neil Thakur, Ph.D., of the VA, stated that a question of interest is whether doing this type of research better will shorten the time to having interventions ready for broad implementation, or whether it will not save time but will instead make the resulting intervention more effective. Perhaps taking a long time in the research process is a good thing if it leads to a more effective intervention. David Abrams, Ph.D., Director, Office of Behavioral and Social Sciences Research, NIH, encouraged participants to think about how to combine new technologies with the statistical methods presented at the symposium to meet the challenges they currently face. New technologies, such as Web-based data collection and the use of Palm Pilots to collect data in real time, will provide new opportunities for the collection of experiential data. He also advocated combining process and outcome measures in new and rigorous ways.

Lori Melichar, Ph.D., of the Robert Wood Johnson Foundation (RWJF), offered a potential solution: determine what level of evidence is really needed when funding a research project. A communications or research firm could query key stakeholders, asking, for example, "We have a project to teach nurses QI interventions. What will it take for you to adopt this program in your hospital? Would it be statistical significance? Do you need to see case studies? What kind of evidence do you need?" This would allow researchers to propose a project of appropriate scale and scope, using the methods best suited to achieving its intended impact.

A participant encouraged everyone to take the ideas presented so far to their breakout groups for discussion to develop recommendations about levels of evidence and other issues raised.


Current as of March 2009
Internet Citation: Summary of the Presentations (continued): Expanding Research and Evaluation Designs to Improve the Science Base for Health Care and Public Health Quality Improvement Symposium. March 2009. Agency for Healthcare Research and Quality, Rockville, MD.



