Archive: U.S. Department of Health and Human Services
Archive: Agency for Healthcare Research and Quality

This information is for reference purposes only. It was current when produced and may now be outdated. Archive material is no longer maintained, and some links may not work. Persons with disabilities having difficulty accessing this information should contact us; let us know the nature of the problem, the Web address of what you want, and your contact information.

Please go to for current information.

Translating Evidence into Practice 1998 (continued, 6)

Conference Summary


Session 8. Selling Evidence-based Medicine in the Medical Marketplace

Moderator: Kay Pearson, R.Ph., M.P.H., AHCPR

Selling Evidence-based Medicine in the Medical Marketplace—David C. Slawson, M.D., Associate Professor, Departments of Family Medicine and Health Evaluation Sciences, University of Virginia, and Allen Shaughnessy, Pharm.D., Director of Research, Harrisburg Family Practice Residency Program

Drs. Slawson and Shaughnessy said that to become information masters, clinicians need to learn how and when to read the medical literature, as well as what to read in it. Information mastery focuses on the usefulness equation and on relevance. The usefulness of any source of information is determined by three characteristics: its validity, its relevance, and how much work it takes to use it.
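The usefulness equation the speakers referred to is commonly stated as usefulness = (relevance × validity) / work. A minimal sketch of comparing information sources on that basis (the numeric scales and the function name are illustrative, not from the talk):

```python
def usefulness(relevance: float, validity: float, work: float) -> float:
    """Usefulness equation: rises with relevance and validity,
    falls with the work it takes to use the source.
    The 0-1 scales for relevance/validity are arbitrary."""
    if work <= 0:
        raise ValueError("work must be positive")
    return (relevance * validity) / work

# A highly relevant, valid, but labor-intensive source versus a
# moderately relevant source that is very easy to use.
print(round(usefulness(relevance=0.9, validity=0.9, work=3.0), 2))  # 0.27
print(round(usefulness(relevance=0.6, validity=0.8, work=1.0), 2))  # 0.48
```

The comparison illustrates the speakers' point: an easy-to-use source of merely good evidence can be more useful in practice than an excellent source that takes too much work to consult.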

Relevance has two components: the type of evidence and the frequency of the problem. The evidence can be either patient-oriented evidence (POE), which addresses final outcomes, or disease-oriented evidence (DOE), which addresses intermediate or surrogate outcomes. A POE that matters (a POEM) is one that requires a change in practice. Frequency of the problem reflects how common or rare the problem is in an individual practice.

Clinicians obtain information from three types of experts: content experts (e.g., colleagues), clinical scientists, and YODAs (Your Own Data Analyzers). A content expert is skilled at diagnosing and treating disease and at performing procedures. Clinical scientists (e.g., pharmacists, medical librarians, clinical epidemiologists, statisticians) are good at evaluating evidence but are not content experts. YODAs are both content experts and clinical scientists. They base their recommendations first on POEMs, even when these conflict with DOEs or clinical experience, and, when POEMs are not available, on the best DOEs, keeping an open mind. When new information appears, they can perform appropriate validity assessments.

The challenges in information mastery are to generate POEs in clinical practice (e.g., from charts), to conduct more POEM research and refinement, and to conduct reality checks (i.e., what clinicians can realistically be expected to do to identify information—POEM bulletin boards, the Cochrane Library, etc.). The big issues for clinicians include accepting responsibility for validity assessments; changing behavior and institutional culture; and controlling health care costs by reducing unnecessary or ineffective services.

Return to Contents

Session 9. Grading the Evidence

Moderator: Steven H. Woolf, M.D., M.P.H., Medical College of Virginia

Grading Articles and Grading Evidence: Issues for Evidence-based Practice Activities—Kathleen N. Lohr, Ph.D., Director, Health Services and Policy Research, Research Triangle Institute

AHCPR asked Dr. Lohr to develop a report on grading the quality of articles and the strength of evidence for the Evidence-based Practice Program. Quality is defined as evidence from studies designed and conducted to protect against systematic and nonsystematic bias and against errors of inference. Nonmethodologic quality is the extent to which a study has significant clinical or policy relevance, or both. Investigations are needed that address the relevance of the study design to the clinical questions being asked, as well as its relevance to broader health care and policy issues.

A quality evaluation is done by multiple reviewers from various disciplines, e.g., clinicians and methodologists. With multiple reviewers comes the problem of reconciling different grades, along with the question of whether to blind or mask reviewers to authors, journals, and, much more difficult, outcomes. Scales that focus on therapies have clear definitions of diagnostic and prognostic criteria, adequate descriptions of treatments, and cointerventions. Feasibility and costs are always important issues, but they are not always accounted for.

The most important variables in developing a quality rating scale are selection of patients, randomization or allocation of patients, blinding, size of the study and its power, therapeutic regimen, outcomes and endpoints, study administration and followup, attrition of any cause, confounders and bias, and statistical analyses. The advice given to AHCPR was first to set a study filter to match articles to the topic and study questions, then to separate randomized from nonrandomized trials and to use well-known scores, or develop and validate new scales, to rate the studies.
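A checklist-style rating scale of the kind described above can be sketched as a simple fraction of criteria met. The item wordings paraphrase the variables listed; the equal weighting and function name are our illustrative assumptions, not Dr. Lohr's recommendation:

```python
# Checklist items paraphrasing the quality variables listed above.
CRITERIA = [
    "patient selection described",
    "randomization/allocation adequate",
    "blinding adequate",
    "sample size and power justified",
    "therapeutic regimen described",
    "outcomes and endpoints defined",
    "followup and attrition reported",
    "confounders and bias addressed",
    "statistical analysis appropriate",
]

def rate_study(met: set[str]) -> float:
    """Return the fraction of checklist criteria a study satisfies
    (equal weights; real scales often weight items differently)."""
    unknown = met - set(CRITERIA)
    if unknown:
        raise ValueError(f"unknown criteria: {unknown}")
    return len(met) / len(CRITERIA)

score = rate_study({"randomization/allocation adequate",
                    "blinding adequate",
                    "outcomes and endpoints defined"})
print(round(score, 2))  # 0.33
```

Such scores make studies comparable within a review, which is why the advice above stresses validating any new scale before using it to rank evidence.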

Efforts to Grade Evidence Should Be Based on Evidence: Lessons from Research on Randomized Controlled Trials—Alejandro Jadad, M.D., D.Phil., McMaster University

Dr. Jadad pointed out that this year marks the 50th anniversary of the modern randomized controlled trial (RCT), with over 150,000 trials published. A survey found that over 90 percent of respondents (methodologists, editors, reviewers) considered the assessment of trial quality in systematic reviews important.

A lack of agreement exists on what to measure about quality, and the inclusion of components often depends on the observer. The likelihood of bias and internal validity tend to be more resistant to context and interpretation, so most quality measurements have focused on internal validity. The risk is forgetting other important elements of quality; most developments concern the measurement of bias in trials, which yields numbers, not opinions. Among the elements of validity, concealment of allocation is often not reported, which leaves most trials open to bias.

An RCT should be well funded by people who want to improve health care, with the needs of patients and clinicians central to the design. However, a gap often exists between methodological research and methodological practice: people who design randomized controlled trials are not paying attention to the empirical evidence on how to do them.

Grading the Evidence: The Consumer's Point of View—Paul G. Shekelle, M.D., Ph.D., Senior Natural Scientist, RAND

Dr. Shekelle said consumers want to know which studies are more likely to report true results. Therefore, what is needed is a classification system that is simple to use, reliable in the hands of many investigators, and ordinal, separating studies into two or more groups based on internal validity. That classification scheme needs to deal with within-study and across-study design issues, i.e., the well-done prospective cohort versus the poorly done RCT. For observational studies, a research program should have the following components: it should use criteria established by consensus; the criteria should be tested for reliability; the presence or absence of peer support for the criteria should be established; and the criteria should be revised accordingly.

Return to Contents

Session 10. Submitted Abstracts

Moderator: Allan Braslow, Ph.D., AHCPR

From Policy to Clinical Practice Guidelines: Developing Evidence-based Guidelines in the Absence of "Evidence." Practice Guidelines on Screening for Prostate Cancer in the Province of Quebec—Marie-Dominique Beaulieu, M.D., M.Sc., C.F.P.C., Professor, Centre de Recherche, Centre Hospitalier de l'Université de Montréal

The College of Physicians in Quebec produces clinical practice guidelines and is asked by the government to produce guidelines in reaction to policies. A very strong Continuing Medical Education network within the medical associations is also headquartered in Quebec. The associations also develop guidelines, and this has the potential for conflict. Dr. Beaulieu described the goal of this project as creating a new dynamic in Quebec by forming a joint committee on clinical practice guidelines. It was decided that the guideline projects should cover four topic areas: stable angina, prostate-specific antigen (PSA), radiology, and diagnostic arthroscopy.

To develop the PSA guidelines, a needs assessment was done with general practitioners, and a focus group was conducted to determine what was needed in the guideline. The next step was validation of the guideline by 30 physicians. This was followed by focus groups with consumers as part of the development of a patient guideline; it was learned that men want to know more about prostate cancer prevention. The guideline committee received feedback and developed the final guideline, which was published with good media coverage. The guidelines state that the College of Physicians and the urologists do not recommend screening, with a strong recommendation against PSA testing for elderly men and for men with limited life expectancy.

Putting Evidence to Use in Care of the Elderly—Marita G. Titler, Ph.D., R.N., F.A.A.N., Director of Research Development and Dissemination, Interventions Research Center, Department of Nursing, The University of Iowa Hospitals and Clinics

Since 1986, the University of Iowa Hospitals and Clinics have been engaged in a process called "research utilization," which is defined as using research findings as a basis for practice. Dr. Titler explained that research utilization encompasses dissemination of scientific knowledge, critique of studies, synthesis of findings, determining the applicability of findings, application/implementation of scientific findings in practice, and evaluation of the practice change. The Iowa "grass roots" model was published in 1994 in Nursing Research and has been revised recently. The team assembles the relevant research and literature, critiques and synthesizes the research, and determines whether there is a sufficient base for guideline practice. If so, the change in practice must be piloted by selecting the outcomes to be achieved, collecting baseline data, designing multidisciplinary practice guidelines, implementing changes on a pilot scale, evaluating the process and outcomes, and modifying the guideline. If there is not a sufficient base, then research must be conducted and other types of evidence, such as case reports, expert opinion, scientific principles, and theory, must be identified. If the change is appropriate, it is adopted into practice; process and outcome data are monitored, and the results are disseminated.

The vital sign protocol was given as an example: more intensive monitoring of vital signs, particularly for high-risk groups, identifies at-risk patients earlier. When patients are transferred to the medical ICU early, the mortality rate goes down significantly. Intensive training of staff in identifying symptoms of sepsis led to a decrease in arrests per patient-day in the unit. Predicted mortality was about 53 percent; with this protocol, the observed mortality rate was 0 percent.

Janet Mentes, Ph.D. (candidate), R.N., C.S., GNP, Project Director, RDDC, Department of Nursing, The University of Iowa Hospitals and Clinics

Ms. Mentes has been involved in developing research-based clinical protocols/guidelines, which are written by nurse experts and sent to content experts for review before being disseminated. The guidelines are structured with definitions, the purpose of the guideline, discussion of the individuals at risk, and the pertinent assessment criteria used, and they contain an extensive description of the intervention. In the case of pain management, the focus is on assessment strategies and on nonmedication as well as medication strategies for treating acute pain. The evaluation of the protocol looks at the process and at patient outcomes. Appendices include research-based assessment tools, intervention strategies, and analgesia algorithms.

Ms. Mentes also discussed the research utilization residency program for nurses, whose objective is to help residents develop a research utilization program at their home facility by learning the research utilization process and developing a research-based protocol. Other strategies used are state-of-the-science conferences, electronic modalities (e.g., the GERONURSE listserv, the GNIRC Web site, and hot links to other sites), and possibly onsite consultation, particularly in nursing homes.

Perioperative Antibiotics in Non-Emergent Bowel Surgery: A Quality Improvement Project—Carl G. Bynum, D.O., M.P.H., Missouri Patient Care Review Foundation

Dr. Bynum described a multisite quality project as a part of HCFA's Health Care Quality Improvement Program. Six urban acute care hospitals were selected, and baseline and postintervention data were abstracted for bowel-surgery patients to determine the effectiveness of prophylactic antibiotic administration on postoperative infection.

The project began with a literature search to gather information on the use of antibiotics in surgery and involved empaneling a multidisciplinary study group. The consensus document approved by the study group recommended that all antibiotics for non-emergent bowel surgery be administered within 1 hour of incision, as close to the incision time as possible, and that there was no need to continue antibiotics once the patient left the operating room.

The percentage of patients undergoing an elective bowel procedure who received a perioperative, parenteral antibiotic within 1 hour of incision improved from 42.7 percent at baseline to almost 52 percent postintervention. One hospital had a relative increase of 86 percent. The percentage of patients whose postoperative, parenteral prophylactic antibiotic was discontinued within 24 hours of wound closure improved from a baseline of 23 percent to almost 36 percent, which means 64 percent of patients were still receiving antibiotics beyond 24 hours. All of this occurred with no increase in postoperative infection.
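The distinction between absolute improvement (percentage points) and relative improvement (percent of baseline) in figures like these can be checked with simple arithmetic; the helper name is ours, and the numbers are the aggregate baseline and postintervention rates reported above:

```python
def changes(baseline: float, post: float) -> tuple[float, float]:
    """Return (absolute change in percentage points,
               relative change as a percent of baseline)."""
    absolute = post - baseline
    relative = 100.0 * absolute / baseline
    return absolute, relative

# Timely perioperative antibiotics: 42.7% at baseline, ~52% postintervention.
abs_pp, rel_pct = changes(42.7, 52.0)
print(f"{abs_pp:.1f} percentage points, {rel_pct:.1f}% relative increase")
```

The aggregate relative increase of roughly 22 percent puts the single hospital's 86 percent relative increase in context: that hospital improved far more than the group as a whole.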

Return to Contents
Proceed to Next Section


Current as of January 1997
Internet Citation: Translating Evidence into Practice 1998 (continued, 6): Conference Summary. January 1997. Agency for Healthcare Research and Quality, Rockville, MD.
