
Quality Improvement

Continuing medical education: a review

ACP J Club. 1993 Mar-Apr;118:57. doi:10.7326/ACPJC-1993-118-2-057


Source Citation

Davis DA, Thomson MA, Oxman AD, Haynes RB. Evidence for the effectiveness of CME. A review of randomized controlled trials. JAMA. 1992 Sep 2;268:1111-7.


Abstract

Objective

To review the effect of continuing medical education (CME) interventions on physician performance and health care outcomes.

Data sources

Studies published from 1977 through 1991 were retrieved from the MEDLINE, Social Science Index, National Technical Information Service, and Educational Research Information Clearinghouse databases using "continuing medical education" and other key phrases. Further studies were sought from the bibliographies and authors of relevant articles and from key informants for nonindexed articles.

Study selection

Inclusion criteria were randomized controlled trials that evaluated educational activities or programs intended to improve physician performance or patient outcomes, with ≥ 50% of participants being physicians, objective assessment of study outcomes, and follow-up assessment of ≥ 75% of study participants. 50 of 1445 CME articles met the inclusion criteria.

Data extraction

Data were extracted on the setting, the population of physicians studied, the clinical area of practice, the characteristics of the CME intervention, and the effects on physician performance and patient outcomes. CME interventions were classified as predisposing (communicating or disseminating information), enabling (facilitating the desired change in the practice site), and reinforcing (by reminders or feedback). Study results were categorized as positive (statistically significant difference observed), negative (no difference or statistically significant negative effect), or inconclusive (showed no difference but lacked sufficient power to detect a difference).

Main results

32 trials analyzed physician performance, 7 studies evaluated patient outcomes, and 11 examined both measures. The majority of the 43 studies evaluating physician performance showed a positive result for outcomes relating to use of laboratory and radiologic investigations, prescribing practices, patient counseling, and primary prevention activities. Of the 18 studies analyzing the effects of CME interventions on patient outcomes, 8 showed some positive changes in at least 1 major measure. Predisposing CME interventions produced mostly negative or inconclusive results. Studies that used enabling or reinforcing elements, or both, were more effective in changing outcomes.

Conclusions

Continuing medical education interventions using practice-enabling or reinforcing strategies, or both, consistently improved physician performance and, occasionally, patient or health care outcomes.

Sources of funding: Research and Development Resource Base in Continuing Medical Education; American Medical Association; Canadian Medical Association; Society of Medical College Directors of Continuing Medical Education; Royal College of Physicians and Surgeons of Canada; Alliance for Continuing Medical Education.

For article reprint: Dr. D.A. Davis, Continuing Health Sciences Education, Faculty of Health Sciences, 1200 Main Street West, Chedoke Campus, Building 74, Hamilton, Ontario L8N 3Z5. FAX 416-389-4224.


Commentary

The public eye has focused on the pharmaceutical industry's funding of CME. Unfortunately, this has directed attention away from the central question of whether CME is effective in improving physician performance and patient outcomes. As the review by Davis and colleagues shows, CME can be effective in certain situations.

Comparisons between successful and unsuccessful CME activities can help guide CME coordinators in planning programs for their sites. The review makes several important points. First, keep it simple: complex goals (such as "to improve care of diabetic patients") seem to be more difficult to achieve than simple goals (such as "to increase the screening of diabetic patients for retinopathy"). Second, providing information alone may not be effective; additional steps may be needed to help physicians change. Third, institute procedures that make it as easy as possible for the targeted physicians to change their behavior (e.g., set up a central telephone hotline to facilitate referral to ophthalmologists). Finally, close the quality improvement loop by providing feedback to the targeted physicians (e.g., computer-generated reports of each physician's monthly screening rate for retinopathy).

Individual physicians planning their own CME activities can also take home some advice from this review. They should continue to identify areas for their own improvement and obtain the necessary CME. Most formal learning activities (journal reading, hospital conferences, weekend symposia, videotapes) are likely to be primarily "predisposing" and may not be effective alone. Physicians are responsible for incorporating CME activities into their practice that are facilitative and reinforcing, thereby ensuring that their time and money are well spent.

Michael A. Frasca, MD
University of Illinois, Peoria, Illinois, USA

*See The Science and Practice of Continuing Medical Education: A Study in Dissonance