Current issues of ACP Journal Club are published in Annals of Internal Medicine


Editorial

Drug dependence in a journal club

ACP J Club. 1999 Nov-Dec;131:A13. doi:10.7326/ACPJC-1999-131-3-A13



Lately, I've started worrying that I am turning into a mouthpiece for the pharmaceutical industry. My slide into the advertising profession did not start through lavish dinners or large consulting fees, nor was it propelled by less obvious pressures from pens, paper, or other trinkets. Indeed, the nature of my failure doesn't seem to have been described in any of the standard warnings for physicians about interacting with industry (1-3). Yet, I fear that my actions may have created lasting problems in patient care, physician education, and the medical profession.

It all started innocently. I was a junior attending physician who initiated a weekly journal club for housestaff at my hospital. Each Friday at noon, a recent article was reviewed with attention to both validity and importance. Individual articles were selected by housestaff according to 3 criteria: the article had to be interesting to the presenter, relevant to the audience, and capable of conceivably changing practice. Most of the time I was accompanied by a staff physician with expertise in a specific domain. The sessions became popular and were often praised in reviews of my teaching hospital.

The benefits were impressive. This journal club cultivated an appreciation for advances in medical care and provided practical skills on how to keep up with the literature. The housestaff learned how to quickly discard unhelpful articles, thereby saving valuable time. They also increased their ability to remember past articles through learning a systematic approach that improved retention. The interactive dialogue with attending staff and other invited guests also made each session lively and showed that science literature could have a thrilling character.

Here's how things went wrong. Reliance on housestaff preferences resulted in disproportionate attention to therapeutic studies. Desire for rigor created an affinity for randomized controlled trials. The standard of excellence led to dominance by drug trials because their feasibility and funding allowed them the advantages of large sample size, long follow-up, standardized interventions, double blinding, and other desiderata (4). The net effect of these 3 factors was a curriculum that increasingly popularized studies funded by drug companies.

During the past 8 years, I have unwittingly given a large amount of promotion to the pharmaceutical industry. I have almost always increased awareness about a drug—and awareness is nine tenths of advertising (5). This advertising has been particularly insidious because the audience had little forewarning to raise defenses. Of course, my efforts have benefited all major firms and not just 1 company. Regardless, many times the final outcome has been my recommending a pharmaceutical product. For me, the line between advertising and education has disappeared.

Drug companies seem to have discovered that good science turns into good advertising. One large randomized trial, for example, may cause more changes in a physician's prescribing behavior than 100 mediocre reports. Companies, therefore, realize that a blockbuster trial can be a wise investment, and clinicians know that blockbuster results demand attention. The net effect is that my journal club seems to feed into a flow of advertising campaigns. (For example, in my first year, we reviewed 3 studies of cimetidine, all of which were important at the time but are now irrelevant.)

I never intended to be in advertising, despite recognizing the similarity between good teachers and good sales representatives. For me, advertising work is tied to a commercial enterprise whose profits rise as sales expand. In contrast, educational work is a task where earnings cannot endlessly increase even with major gains in teaching effectiveness. Conflicts of interest must arise even if I view myself as an advertiser who follows high moral standards. Looking back, I see that a logical method for selecting individual articles is no guarantee against bias.

The problem won't soon go away. Government agencies require that pharmaceuticals be tested rigorously before licensing. These standards are more stringent than those for diagnostic devices or other medical technologies. Manufacturers rise to the challenge because a successful new medication can bring huge profits in a short time. Financial gains from other technologies (e.g., echocardiograms) tend to flow in slowly from many clinicians. Hence, regulatory and commercial interests conspire to ensure a continued pummeling of physicians by powerful drug trials.

I've tried to protect my journal club. One strategy has been to embellish the curriculum with studies on psychosocial issues, but doing so risks the danger of giving insufficient attention to drug trials. That is, the knowledge gained from drug trials is extremely valuable to society and ought to be a frequent component of the journal club. Drug companies should not be blamed for doing important studies. The distortion occurs because each drug trial seems compelling at the time, articles on other subjects seem less urgent, and the entire curriculum can cover fewer than 50 articles each year.

The bias in my journal club is different from that in individual drug trials. Drug trials underplay toxicity because of recruiting (fragile patients are usually excluded) and reporting (the main outcome is benefit). Thus, the search for toxicity is less fervent than the search for efficacy. Drug trials usually report small differences rather than massive changes. Major breakthroughs don't need fancy evidence. Drug trials tend to offer a simple experimental logic that may cause some “dumbing down” of medical science. Sophisticated designs are rarely used. Further biases are listed elsewhere (4), but the studies I encounter contain nothing egregious.

My failure to distinguish advertising from education serves as a warning to others and provides an opportunity to consider protective steps. In particular, I urge clinician-teachers who run journal clubs to check their curricula for signs of an impending flood. I recommend that we guard against this trend and sometimes make less popular choices in article selection. Finally, I wonder whether a bit of affirmative action for studies other than drug trials might be worthwhile. Medical trainees will survive best in a changing world if they are armed for practice with a broad set of analytic skills.

My experience might also be informative for those outside of academic settings. The lesson is that the pursuit of only high-quality evidence can consume all available time but still not cover all necessary terrain. To seek excellence is to sacrifice diversity. Readers of such journals as ACP Journal Club, for example, are unlikely to advance their understanding of humanism, ethics, psychology, or other fields. Therefore, it may be permissible to sometimes skip this journal or similar sessions. When doing so, however, the obligation is to find another activity that is even more valuable.

A worthwhile activity that I have neglected relates to appreciating basic science. Likewise, ACP Journal Club has never abstracted a single article from Science, Nature, or the Proceedings of the National Academy of Sciences. I doubt that this reflects a total shortage of high-quality articles or a complete shortfall in medical relevance (6). Instead, I wonder if the oversight betrays a limited appetite that seeks clinical demonstrations and shuns fundamental insights. Regardless, this oversight contributes to further delays in the movement of ideas from bench to bedside.

Another way to solve this problem is for me to focus on my clinical practice and ignore my mail service. Harvesting current patient issues, not recent journal issues, should yield lots of non-pharmaceutical topics (7). Moreover, the spark of tangible human suffering should be sufficient ignition for me to reach these other articles despite their poor promotion. The net effect of this approach is that I may not read journals on arrival but just shelve them for future reference. Tuning in to patients may require tuning out the news (although the tasks are not usually mutually exclusive).

A final concern relates to the long-term effects of highly promoted drug trials. The appeal of rigorous evidence may tempt people to ask only questions that are easily answered. Experienced clinicians will soon get a knack for the type of topic that is likely to be addressed in available literature. These clinicians might then start restricting their curiosity to topics that are easy to search (8). At no point will the person recognize the failure of scholarship and the questions that are unanswered. In keeping up with the evidence, we can forget just how much remains unknown.

Donald A. Redelmeier, MD, MSHSR, De Sousa Chair in Research
University of Toronto
Toronto, Ontario, Canada


References

1. Physicians and the pharmaceutical industry (update 1994). Canadian Medical Association. CMAJ. 1994;150:256A-F.

2. Lexchin J. Interactions between physicians and the pharmaceutical industry: what does the literature say? CMAJ. 1993;149:1401-7.

3. McKinney WP, Schiedermayer DL, Lurie N, et al. Attitudes of internal medicine faculty and residents toward professional interaction with pharmaceutical sales representatives. JAMA. 1990;264:1693-7.

4. Sackett DL, Haynes RB, Guyatt GH, Tugwell P. Clinical Epidemiology: A Basic Science for Clinical Medicine. 2nd ed. Boston: Little, Brown; 1991.

5. Davis JJ. Advertising Research: Theory and Practice. New York: Prentice Hall; 1997.

6. Quinn GE, Shin CH, Maguire MG, Stone RA. Myopia and ambient lighting at night [Letter]. Nature. 1999;399:113-4.

7. Sackett DL, Richardson WS, Rosenberg W, Haynes RB. Evidence-Based Medicine: How to Practice and Teach EBM. New York: Churchill Livingstone; 1997.

8. Fitzgerald FT. Curiosity. Ann Intern Med. 1999;130:70-2.

Editor's response

Critics of evidence-based medicine (EBM) frequently begin by suggesting that EBM devotees pray only at the altar of randomized trials, that there is much less to randomized trials than there appears to be, and that there is much more to health care than randomized controlled trials (RCTs) (or any other kind of evidence from research) can address. For the most part, I have come to ignore these objections because I believe them to be unfair or misguided. The critics aren't actually arguing against using current best evidence in practice (and surely wouldn't refuse to be treated themselves for hypertension, diabetes mellitus, myocardial infarction, tuberculosis, or any other nontrivial disease with nontrivial treatments based on randomized trials). Rather, they seem to be mad at EBM advocates for being strident, narrow-minded, and staking claim to a territory that EBMers don't own. Blandishments don't seem to assuage the critics. For example, it has been pointed out repeatedly that EBM is about a lot more than RCTs of therapeutics—it includes studies of diagnosis, prognosis, clinical prediction, etiology, quality, and economics of care. I've even rationalized that EBM is like other good ideas: It is antigenic, and the critics merely represent some of the antibodies.

But when a stalwart advocate of EBM starts to complain, I get worried. Donald Redelmeier understands both medicine and research methods. He knows that EBM is about more than RCTs. And yet he feels overwhelmed by the way that RCTs—those sponsored by drug companies in particular—are taking over his teaching time. I take Dr. Redelmeier's comments very seriously. I invite the fans and critics of EBM and readers of ACP Journal Club to do so as well. I hope you will write to us—it would be good to thresh this out.

To start things off, here is my contrary view. What learners want is not necessarily what they need. An easy way to channel discussions of evidence to what learners need is to base the discussions on what patients need: the questions that arise in clinical practice. This also provides an opportunity to show search skills and to discuss topics that lead to no sound evidence. So, I encourage Dr. Redelmeier to follow the advice he has suggested as 1 of the solutions. Second, I don't see how drug companies can be blamed for publishing the results of large studies that rigorously document the benefits of a new product or novel use of a drug in a peer-reviewed journal. I wish that other manufacturers of health care interventions would do so as well. Third, I don't believe that discussing such studies with learners in a critical fashion constitutes advertising, even if the criticism concludes that the company trial “did it right.”

However, is there a more sinister basis for Dr. Redelmeier's provocative editorial? If, say, only a modest fraction of all health care practice has to do with therapeutics and only a fraction of the latter can be based on solid evidence, then shouldn't only a “fair share” of teaching time be devoted to drug studies? Perhaps that is what we should be debating: What is a fair allocation of time for therapeutics topics discussed in a journal club? Please send us your thoughts.

R. Brian Haynes, MD, PhD
McMaster University
Hamilton, Ontario, Canada