Current issues of ACP Journal Club are published in Annals of Internal Medicine

Quality Improvement

Review: Peer-comparison feedback has a modest effect on the use of clinical procedures

ACP J Club. 1997 May-Jun;126:81. doi:10.7326/ACPJC-1997-126-3-081

Source Citation

Balas EA, Boren SA, Brown GD, et al. Effect of physician profiling on utilization. Meta-analysis of randomized clinical trials. J Gen Intern Med. 1996 Oct;11:584-90.



Objective

To assess the effectiveness of a peer-comparison feedback intervention in changing clinical practice patterns.

Data sources

Searches were done in MEDLINE, Health Administration and Planning, CINAHL, and Science Citation Index databases using the textwords peer comparison and feedback and the Medical Subject Headings terms randomized controlled trials and clinical trials. A collection of information and utilization management trials at the University of Missouri School of Medicine was searched, and the references of all retrieved articles were scanned.

Study selection

Studies were selected if they used random assignment of the intervention, had peer-comparison feedback that was designed to change the average use of a targeted procedure in the study group but not in the control group, and measured the frequency with which a clinical activity or procedure was used.

Data extraction

2 research associates independently checked the eligibility of studies and noted the direction of effect for each trial, P values for comparisons of effect between intervention and control groups, and utilization data on the number of clinical actions (use of such health care resources as diagnostic tests, drugs, treatments, specialist referrals, and office visits). A quality score (1 to 100) based on site, sample, randomization, observation process, data quality, and statistical analysis was calculated for each study. A third research associate assessed the degree of agreement between the 2 abstractors. Letters were sent to primary investigators if the number of clinical actions in a study could not be determined.

Main results

12 studies that examined the effect of peer-comparison feedback on various clinical activities, such as test ordering and prescribing, met the eligibility criteria. Some studies combined other interventions with peer-comparison feedback. The mean quality score was 64 (range 56 to 77). A 3-level analysis was used to evaluate the studies: vote counting for the direction of effect of each study, a z-transformation method to synthesize the P values from individual trials, and an odds ratio (OR) test to compare utilization. Vote counting included all 12 studies; 10 favored an effect of feedback, and 2 did not. The z-transformation included 8 studies and yielded an overall z value of 1.98 (P < 0.05). The OR test included 5 studies. 2 of these had nonsignificant ORs; however, when the 5 studies were combined, the overall OR was significant (1.09, 95% CI 1.05 to 1.14). When studies that evaluated efforts to change drug prescribing (2 studies) and test ordering (4 studies) were combined separately, feedback showed no effect.
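The second analytic level described above, the z-transformation (Stouffer) method, pools one-sided P values by converting each to a standard normal z score, summing the scores, and dividing by the square root of the number of studies. A minimal sketch of that calculation follows; the P values used here are hypothetical illustrations, not the review's actual data.

```python
from math import sqrt
from statistics import NormalDist


def stouffer_combined_z(p_values):
    """Combine one-sided P values with the z-transformation (Stouffer) method.

    Each P value is converted to a z score via the inverse normal CDF;
    the scores are summed and divided by sqrt(k), where k is the number
    of studies. A combined z above 1.96 corresponds to P < 0.05 (one-sided
    convention as used here).
    """
    nd = NormalDist()
    z_scores = [nd.inv_cdf(1 - p) for p in p_values]  # one-sided P -> z
    return sum(z_scores) / sqrt(len(p_values))


# Hypothetical per-study P values for 8 studies (illustration only):
combined = stouffer_combined_z([0.04, 0.20, 0.15, 0.30, 0.10, 0.25, 0.08, 0.35])
print(f"combined z = {combined:.2f}")
```

Note that studies with individually nonsignificant P values can still yield a significant combined z, which is the point of pooling: the method accumulates weak but consistent evidence across trials.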


Conclusion

Peer-comparison feedback has a modest effect on various clinical procedures, including prescribing, ordering laboratory tests, and cancer screening.

Sources of funding: In part, Agency for Health Care Policy and Research and National Library of Medicine.

For article reprint: Dr. E.A. Balas, Program in Health Services Management, 324 Clark Hall, University of Missouri-Columbia, Columbia, MO 65211, USA. FAX 573-882-6158.


Commentary

Report cards are a fashionable way to attempt to influence physicians' practices by providing them with information about their practice patterns compared with their peers' average. The review by Balas and colleagues was a valiant attempt to assess the difficult literature about the effect of this feedback on actual practice. Studies of peer-comparison feedback are difficult to find because they are poorly described by standard search terms and appear in various publications. Thus, it is understandable that the authors' literature search omits some relevant studies (1). It includes studies of diverse practice patterns, such as prescribing costly drugs and evaluating patients with hypertension. Combining these results is like combining results of studies on the effects of β-blockers on angina, hypertension, and migraine. The included studies often combined feedback with other interventions, making it difficult to separate their effects. However, these small methodologic problems should not have prevented finding a large effect of feedback, if it existed. The results of the review, which show that feedback had minimal effects, challenge those who advocate the report-card approach as a panacea to decrease costs and improve quality.

Utilization report cards raise other questions. Is the "average" pattern of practice the best one? I know of no evidence to support this assumption, although physicians no doubt need to improve their utilization rates for certain practices (2). Should physicians emulate the average of a peer group? Controlling for patient case mix may change the apparent pattern of variation across practices (3). Yet, report cards infrequently control for patient characteristics or preferences relevant to physicians' utilization decisions.

It is likely that rational physicians have trouble interpreting peer-group utilization report cards and are unconvinced that they should change practice in response to them.


Roy M. Poses, MD
Memorial Hospital of Rhode Island
Pawtucket, Rhode Island, USA


References

1. Nattinger AB, Panzer RJ, Janus J. Arch Intern Med. 1989;149:2087-92.

2. Ellerbeck EF, Jencks SF, Radford MJ, et al. JAMA. 1995;273:1509-14.

3. Salem-Schatz S, Moore G, Rucker M, Pearson SD. JAMA. 1994;272:871-4.