The science and practice of continuing medical education: a study in dissonance

ACP J Club. 1993 Mar-Apr;118:A18. doi:10.7326/ACPJC-1993-118-2-A18


The editorial in the January/February 1993 issue of ACP Journal Club dealt with the evolving role of clinical practice guidelines in narrowing the gap between research evidence and clinical practice (1). An article abstracted in this issue ([2]; see “Continuing Medical Education: A Review” [Abstract]) reviews randomized controlled trials of continuing education, a key link in the dissemination of clinical practice information from medical research. This review is as interesting for what has not been studied as for what has: Most of what physicians say they do to keep up to date has either not been tested or has been tested and found wanting.

Two common methods used to determine how practicing physicians continue to learn are mailed questionnaire surveys and structured interviews. In a recent example of the former, Mann and Chaytor (3) mailed a pretested questionnaire to all physicians in a geographically designated area. Respondents indicated the 3 resources they most often use to meet ongoing learning needs: journal subscriptions (61%), consultation with colleagues (56%), and journals received through association memberships (51%). Lectures were also rated as highly effective by 84% of primary care physicians and 78% of specialists. In contrast, only 42% of family physicians and 63% of specialists owned computers, and more than two thirds rated their own computer skills as low.

Using trained interviewers and a rigorously pretested interview format, Fox and colleagues (4) asked over 300 North American physicians what clinical changes they had made in their practices and what learning resources they used to make these changes. Physicians stated that they learned mostly from discussion with colleagues, reading, rounds, and conferences.

It is disturbing to compare physicians' self-reported educational preferences and activities with the evidence from randomized trials of continuing medical education (CME) modalities (2). No controlled studies were done of journal reading, but mailed materials, such as newsletters and CME correspondence courses, were found to be ineffective. Does this mean that physicians do not learn or change their performance as a result of their most popular CME activity, reading journals? This question has not been studied directly and quantitatively, but there is some indirect, qualitative evidence. In Fox and colleagues' study, physicians stated that they participated in formal and informal CME activities to validate their own practices against existing standards. Yet in one of the first randomized trials of CME (5), participants showed little change in performance after a CME program for topics they chose themselves. Participants, however, did improve their knowledge and performance for topics that were assigned to them from among topics they did not choose. Thus, individual clinicians may not be the best judges of what they need to learn, and CME activities that clinicians choose themselves—perhaps including selective reading of journal articles—may not have much effect if they address topics that the clinician already handles well.

Evidence for the effectiveness of consultation with peers or specialists, the second most frequently used CME resource, is also not directly addressed in the review (2). Closely allied to the process of consultation, however, is the role of the “educational influential” (4) or “opinion leader” (7). These individual practitioners are regarded by their colleagues as reliable sources of new information. Randomized trials show that engaging educational influentials in a tailored CME program leads to the message being spread to the influential's colleagues, with improvements in their performance (6) and in the health outcomes of their patients (7).

In contrast to the 2 most popular learning resources used by physicians, a wealth of evidence exists about formal CME programs such as clinical rounds, conferences, or workshops. It is difficult to show any effect on performance from didactic CME presentations. Rather, effective programs appear to possess one or more of the following elements: a precourse assessment of needs by auditing the participant's practice or testing his or her knowledge; provision of an opportunity for participants to practice or rehearse new techniques or skills; and provision of activities after the course that check and facilitate change in the clinician's office or practice setting.

Physicians' negative attitudes toward computers in the study of Mann and Chaytor (3) are at odds with some findings from the CME review. The computer was shown to be an effective CME tool, especially when used to generate reminders about preventive care or follow-up. The contrast between the effectiveness of some computer decision support systems and the discomfort that many clinicians feel with computers suggests that practicing physicians should seek opportunities to learn what computers can and cannot do to assist them in patient care. This learning, of course, should take place through what we know to be effective CME modalities, such as hands-on preceptorships.

The dissonance is great between the CME activities that physicians prefer for keeping up to date and those that have been tested and shown to work. The consequences are documented by the numerous quality-of-care studies showing that clinicians are slow to pick up new, validated practices (such as prophylaxis for deep venous thrombosis for high-risk patients [1]) and are slow to discard practices that have been shown to be ineffective (such as the current widespread prescribing of calcium antagonists after myocardial infarction despite evidence that they are harmful for some patients [8]). We need to understand the reasons for this important dissonance so that the considerable time clinicians devote to CME can be better spent. In the meantime, physicians who want to improve their performance through formal CME should select courses that begin with a needs assessment, provide performance rehearsal, and facilitate practice changes. They should also explore the use of office computers to provide reminders or feedback about needed preventive and follow-up care.

David A. Davis, MD


1. Hirsh J, Haynes RB. Transforming evidence into practice: evidence-based consensus. ACP J Club. 1993 Jan-Feb;118:A16-7.

2. Davis DA, Thomson MA, Oxman AD, Haynes RB. Evidence for the effectiveness of CME: a review of 50 randomized controlled trials. JAMA. 1992;268:1111-7.

3. Mann KV, Chaytor K. Help! Is anyone listening? An assessment of learning needs of practising physicians. Acad Med. 1992;67(Suppl):S4-6.

4. Fox RD, Mazmanian PE, Putnam RW. Changing and Learning in the Lives of Physicians. New York: Praeger Publishers; 1989.

5. Sibley JC, Sackett DL, Neufeld V, et al. A randomized controlled trial of continuing education. N Engl J Med. 1982;306:511-5.

6. Hiss RG, MacDonald R, Davis WK. Identification of physician educational influentials in small community hospitals. Proc Annu Conf Res Med Educ. 1978;17:283-8.

7. Lomas J, Enkin M, Anderson GM, et al. Opinion leaders vs. audit and feedback to implement practice guidelines. JAMA. 1991;265:2202-7.

8. Antman EM, Lau J, Kupelnick B, Mosteller F, Chalmers TC. A comparison of results of meta-analyses of randomized control trials and recommendations of experts. JAMA. 1992;268:240-8.