Editorial

The evolving science of translating research evidence into clinical practice*

ACP J Club. 2007 May-Jun;146:A8. doi:10.7326/ACPJC-2007-146-3-A08

Practicing clinicians have to swim in an ocean of clinical research evidence that varies in rigor, consistency, and applicability to the care of individual patients. They are expected to stay up to date, be authoritative, and practice to a high standard. They work in an environment that obliges them to reconcile patient preferences and societal and professional expectations with the need for cost restraint and accountability for quality and safety of care.

The many reports of variations in practice patterns (1) and substandard care (2) have placed increased pressure on clinicians, health care institutions, and professional organizations to improve their ability to provide optimal care. Such improvement is essential to maintaining public trust and continued funding from public and private payers. Whereas standards of care may be debatable in the absence of definitive evidence, the fact that clinical practice in many instances seems to be at odds with even clear-cut research evidence has become an issue of increasing concern.

What have we learned and what lessons can we apply to minimize the pressure drops in the pipeline from the generation of research evidence to its consistent application in clinical decision making? This editorial looks at 6 models of evidence dissemination that have evolved over the past few decades.

1. Evidence speaking for itself

In the beginning, many researchers and clinicians believed implicitly in the “passive diffusion” model of putting evidence into practice (3). In this naive model, evidence published in journals and publicized at medical conferences would rain down and change clinical practice via an osmotic pressure gradient driven by strength of evidence and magnitude of treatment effect. Where the gradient was strongest, as in the case of large, definitive trials showing commonly used therapies to be, in fact, harmful, change was often dramatic. Examples of this phenomenon are the marked reduction in use of antiarrhythmic agents to prevent sudden cardiac death following myocardial infarction (4) and of hormone therapy to prevent cardiovascular events in postmenopausal women (5). However, systematic reviews of traditional forms of continuing medical education revealed that browsing journals, attending conferences, and listening to didactic lectures (even, or especially, if the scenery was nice) had little effect on changing practice (6).

Training clinicians to search out, appraise, and apply evidence from primary trials was proposed as the best way to increase the rate of diffusion (7). Empirical evidence suggested that this method bred better-performing clinicians (8). Unfortunately, it became clear that this approach was time consuming and difficult for many clinicians to master, and it seemed to worsen, not improve, information overload (9).

2. Evidence as prepackaged “ready-to-go” knowledge

The past decade has seen the rapid rise of an alternative approach: one of getting information methodologists and content experts to develop packages of high-quality evidence in summary form with clear and succinct bottom lines that would compel busy practitioners to action. Meta-analyses and systematic reviews (10), decision analyses (11), and practice guidelines (12) developed by authoritative groups would synthesize the best evidence from multiple studies and define clinicians' actions for particular sets of clinical circumstances. This rich knowledge harvest would be publicized and made easily accessible (or “pushed”) to clinicians wherever they practiced. Some evaluative studies suggested such tools promoted better care (12, 13). Learning how to access and verify the integrity of these secondary, preappraised sources of knowledge was now considered more important than skills in critically appraising individual trials (14).

But it still was not enough. Such integrative tools as guidelines and systematic reviews did not gain as much traction in guiding practice as initially hoped (15), either because clinicians were unaware of them or refused to adhere to them for reasons known only to themselves (16).

3. Evidence as an industrial commodity

The next stage was to become more aggressive in guideline implementation—to bring the horse to water and make it drink. Methods akin to Taylorist industrialism flourished in measuring and managing clinician behavior, spawning a whole new field of “implementation” (or evidence-to-practice) research. Educational outreach, case reviews by peers, audit and feedback, reminder systems, patient-mediated prompts, and clinician decision aids composed a constantly running sprinkler system for fertilizing optimal clinical practice from which there would be no escape (17). To these were added multiple tools of persuasion, such as total quality-management systems, administrative and regulatory mandates, financial incentives through case mix–based funding, maintenance of professional standards programs, and public lobbying (18).

But this industrial approach, although having notable effects in some areas, has also not lived up to its initial promise. Its tools exerted appreciable but still modest effects in optimizing care, shifting the absolute proportions of patients receiving optimal care upward by 6% to 13% (19). The downside was that these tools consumed considerable amounts of resources and antagonized many clinicians who, despite being invited to see themselves as leaders of practice reform (20), felt deprofessionalized by a perceived loss in decision-making autonomy. It also became clear that environmental factors inhibiting clinician access to knowledge or its application in everyday practice were more important than clinician ignorance or pigheadedness in retarding progress to best practice (21).

4. Evidence within a framework of systems engineering

So rather than coercing the horse to drink, the new course was to make the water more pleasurable to drink and to better understand what attracts the horse to drink. Electronic information systems that were readily accessible, easy to use, and allowed clinicians to answer their immediate questions in real time at the bedside or in the clinic heralded a new age of instant clinical informatics (22). Here the clinician was offered the means to find (or “pull”) answers to queries the moment they arose in practice. Humans and computers united in a perfect symbiosis that featured not just knowledge retrieval and automated alerts and prompts but also a semblance of artificial intelligence: systems that could anticipate, interpret, reason, and advise (22).

Once again some evidence supported the effectiveness of computerized decision support, more so if coupled with electronic medical records (23). But the leap from prototype to conventional use has proved elusive. The cognitive psychodynamics, technical reliability, and sociological repercussions of human-machine interfaces are more problematic than first thought (24, 25).

5. Evidence within a framework of social innovation

As ways and means for translating evidence into practice became more complex and sophisticated, so did our approach to understanding and evaluating what motivates and changes human behavior within social systems (26). A previously overlooked factor is the desire of clinicians to belong to, identify with, and achieve recognition within a social group of like-minded people that very much determines their thoughts and actions (27). The social environment of health professionals is governed by norms and customs that can be spoken and explicit, as well as unspoken and hidden, and that can engender both morally desirable and undesirable behaviors (28). Clinician behavior is predicated not on “rational” thought alone but also on the intersection of cognitive and behavioral aspects of the individual with external peer group, organizational, and sociopolitical determinants (27). Changing practice involves a social learning process wherein clinicians must synthesize new knowledge with existing knowledge, beliefs, and attitudes, and learn how to function as a community with new practices.

This complex dynamic is only now being illuminated using qualitative sciences hitherto not commonly employed within the biomedical scientific paradigm (29). Case studies on why guideline recommendations are not followed have revealed not only basic problems with the proposed changes to practice themselves (30) but also barriers of impaired knowledge, attitudes, and skills of clinicians (21) and the “invisible” influences of opinion leaders, group psychology, peer influence, social marketing, organizational characteristics, and economic factors (31-34). The “precede-proceed” model, for example, distinguishes between “predisposing factors” (e.g., knowledge and attitudes of the target clinician group), “enabling factors” (e.g., capacity, resources, and service availability), and “reinforcing factors” (e.g., feedback and opinions and behaviors of others) (35). Changing clinical practice on the basis of new evidence must, therefore, be seen and researched as a form of social innovation, which to be successful requires a better understanding of how such innovations are generated, diffused, and sustained (26).

An inclusive, unifying, conceptual model developed from a recent systematic review of relevant literature by Greenhalgh and colleagues (36) usefully expands on “stage-of-change” theories (37) and the original innovation–diffusion models of McKinlay (38) and Rogers (39). These ideas need to be blended with a better appreciation of 1) the complexity of health care systems and how they adapt to change, threats, and constraints in nonlinear ways (40); 2) the use by clinicians and clinical systems of heuristics (mental shortcuts or rules of thumb) (41), fuzzy logic (logic that does not mandate precisely defined variables or rules) (42), and “mindlines” (implicitly constructed guiding principles or lines of thinking) (43), as well as tacit knowledge; and 3) the nature of “storylines” or “meta-narratives” (44), which influence the direction in which research and practice paradigms evolve.

6. Evidence as common property in need of a common language

Another dimension in which better care could be promoted centers on empowering nonclinicians to become more aware of, and to advocate for, care that makes a real difference. Clinicians may need to relinquish their traditionally exclusive stewardship of implementing new medical discoveries and accept the role of patients and other stakeholders as evidence vectors (45). This change will not be easy, given the diverse interpretations that different players often hold of the “same” evidence (46). Patients' disinclination to follow clinician advice should not be viewed as irrational (or irascible) behavior but as a product of assessing the credibility and applicability of that advice against personal perceptions, values, and circumstances. A common language is needed for reconciling research evidence of benefit and harm with patients' needs and expectations (47). The same applies to health administrators and policy makers who decide critical issues around planning and resourcing health services but who often hold views about what constitutes useful “evidence” that are more pragmatic, politically driven, and “need-it-now” than those of researchers (48).

Once again, research evidence cannot be expected to act as an agent of change in a world of different (and often competing) interests if it is presented in only 1 idiom, that of clinical science. The Cochrane Collaboration is aware of this need and now presents a “plain-language” synopsis of findings for every systematic review it produces. Senior health managers are espousing a similar agenda for rendering health policy making more informed by research evidence (49). Adopting a more universal language of benefit and harm may foster more of a common ownership of evidence that will, it is hoped, lead to a greater shared advocacy for more of what really works in clinical practice and less of what does not.

Redefining the science of clinician behavior change

A better understanding of the processes of translating evidence into practice will require a rethinking of the scientific methods best suited for providing it (50). The traditional hierarchy of evidential rigor has at its pinnacle the ultimate vehicle of investigational reductionism, the randomized clinical trial. Although very successful in evaluating the effects of a single study factor separate from all other potential confounders, this method struggles with complex health care programs (51) and is probably even more inadequate in dealing with the murky, multifaceted terrain of clinician behavior change (52).

Factorial designs, matrix (principal component) analyses, and qualitative methods are needed to disentangle a very clustered and interdependent set of variables in determining what is effective in effecting change within clinical cultures. However, to date, this sort of science has not been a common feature of clinical research or even health services research. But where it has been used, it has yielded some fascinating insights into the determinants of clinical decision making (21, 24, 30-32, 35, 36, 41, 43, 46, 51). Researchers and those who might want to use their work have to learn to match the analytical approach (quantitative hypothesis testing or impressionistic investigation) with the nature and complexity of the issues under study.

What of the future?

So how might we proceed in improving evidence uptake into practice? Perhaps the first thing is for those of us working in evidence translation to become more acquainted with the insights into determinants of clinician behavior and a clinician's view of “compelling evidence” that derive from nonbiomedical theories and disciplines. Armed with better models of behavior change, we might then be more able to map out the barriers to and incentives for evidence uptake operating at the “micro” level of the individual clinician and patient, the “meso” (social and organizational) level of the group or institution, and the “macro” (economic and political) level of administrators and policy makers. At all levels, we need to tease out potential points of leverage identified within social learning theories (53) and other models of human behavior (36). And then, once we have a more complete description of the “physiology” (and “pathophysiology”) of clinical decision making, we can make more informative “diagnoses” of why suboptimal care exists and start devising new “treatments” for optimizing behavior, which could be tested in appropriately designed experiments. Perhaps if we present the problem of evidence-practice gaps in words that allow clinicians to see it as a “complex clinical problem” much like a clinically challenging case, we may ignite a real interest that is currently being repressed by use of obtuse language and repellent theorems (54).

Acknowledgments

The author thanks Paul Glasziou and Sharon Straus for helpful comments on earlier drafts.

*This editorial was previously published in Evid Based Med. 2007 Feb;12(1):4-7.

Ian A. Scott, MD, MHA, MEd
Princess Alexandra Hospital
Brisbane, Queensland, Australia


References

1. Di Salvo TT, Paul SD, Lloyd-Jones D, et al. Care of acute myocardial infarction by noninvasive and invasive cardiologists: procedure use, cost and outcome. J Am Coll Cardiol. 1996;27:262-9. [PubMed ID: 8557892]

2. McGlynn EA, Asch SM, Adams J, et al. The quality of health care delivered to adults in the United States. N Engl J Med. 2003;348:2635-45. [PubMed ID: 12826639]

3. Lomas J. Retailing research: increasing the role of evidence in clinical services for childbirth. Milbank Q. 1993;71:439-75. [PubMed ID: 8413070]

4. Avanzini F, Latini R, Maggioni A, et al. Antiarrhythmic drug prescription in patients after myocardial infarction in the last decade. Experience of the Gruppo Italiano per lo Studio della Sopravvivenza nell’Infarto miocardico (GISSI). Arch Intern Med. 1995;155:1041-5. [PubMed ID: 7748046]

5. Thunell L, Milsom I, Schmidt J, Mattsson LA. Scientific evidence changes prescribing practice–a comparison of the management of the climacteric and use of hormone replacement therapy among Swedish gynaecologists in 1996 and 2003. BJOG. 2006;113:15-20. [PubMed ID: 16398765]

6. Thomson O’Brien MA, Freemantle N, Oxman AD, et al. Continuing education meetings and workshops: effects on professional practice and health care outcomes. Cochrane Database Syst Rev. 2001;(2):CD003030. [PubMed ID: 11406063]

7. Evidence Based Medicine Working Group. Evidence-based medicine. A new approach to teaching the practice of medicine. JAMA. 1992;268:2420-5. [PubMed ID: 1404801]

8. Parkes J, Hyde C, Deeks J, Milne R. Teaching critical appraisal skills in health care settings. Cochrane Database Syst Rev. 2001;(3):CD001270. [PubMed ID: 11686986]

9. Godwin M, Seguin R. Critical appraisal skills of family physicians in Ontario, Canada. BMC Med Educ. 2003;3:10. [PubMed ID: 14651755]

10. Naylor CD. Better care and better outcomes: the continuing challenge. JAMA. 1998;279:1392-4. [PubMed ID: 9582048]

11. Grimshaw JM, Russell IT. Effect of clinical guidelines on medical practice: a systematic review of rigorous evaluations. Lancet. 1993;342:1317-22. [PubMed ID: 7901634]

12. Bero L, Rennie D. The Cochrane Collaboration. Preparing, maintaining, and disseminating systematic reviews of the effects of health care. JAMA. 1995;274:1935-8. [PubMed ID: 8568988]

13. Morabia A, Steinig-Stamm M, Unger PF, et al. Applicability of decision analysis to everyday clinical practice: a controlled feasibility trial. J Gen Intern Med. 1994;9:496-502. [PubMed ID: 7996292]

14. Guyatt GH, Meade MO, Jaeschke RZ, Cook DJ, Haynes RB. Practitioners of evidence based care. Not all clinicians need to appraise evidence from scratch but all need some skills [Editorial]. BMJ. 2000;320:954-5. [PubMed ID: 10753130]

15. Delamothe T. Wanted: guidelines that doctors will follow [Editorial]. BMJ. 1993;307:218. [PubMed ID: 8369678]

16. Hayward RS, Guyatt GH, Moore KA, McKibbon KA, Carter AO. Canadian physicians' attitudes about and preferences regarding clinical practice guidelines. CMAJ. 1997;156:1715-23. [PubMed ID: 9220923]

17. Grimshaw JM, Shirran L, Thomas RE, et al. Changing provider behaviour: an overview of systematic reviews of interventions. Med Care. 2001;39(Suppl 2):2-45. [PubMed ID: 11583120]

18. Grol R. Improving the quality of medical care: building bridges among professional pride, payer profit, and patient satisfaction. JAMA. 2001;286:2578-85. [PubMed ID: 11722272]

19. Grimshaw JM, Thomas RE, MacLennan G, et al. Effectiveness and efficiency of guideline dissemination and implementation strategies. Health Technol Assess. 2004;8(6):1-72. [PubMed ID: 14960256]

20. Berwick DM. A primer on leading the improvement of systems. BMJ. 1996;312:619-22. [PubMed ID: 8595340]

21. Cabana MD, Rand CS, Powe NR, et al. Why don't physicians follow clinical practice guidelines? A framework for improvement. JAMA. 1999;282:1458-65. [PubMed ID: 10535437]

22. Shortliffe EH. Medical informatics and clinical decision making: the science and the pragmatics. Med Decis Making. 1991;11(Suppl):S2-14. [PubMed ID: 1837576]

23. Hunt DL, Haynes RB, Hanna SE, Smith K. Effects of computer-based clinical decision support systems on physician performance and patient outcomes: a systematic review. JAMA. 1998;280:1339-46. [PubMed ID: 9794315]

24. Rousseau N, McColl E, Newton J, Grimshaw J, Eccles M. Practice based, longitudinal, qualitative interview study of computerised evidence based guidelines in primary care. BMJ. 2003;326:314. [PubMed ID: 12574046]

25. Coiera E. Four rules for the reinvention of health care. BMJ. 2004;328:1197-9. [PubMed ID: 15142933]

26. Campbell M, Fitzpatrick R, Haines A, et al. Framework for design and evaluation of complex interventions to improve health. BMJ. 2000;321:694-6. [PubMed ID: 10987780]

27. Macdonald KM. The Sociology of the Professions. London: Sage Publications; 1995.

28. Mizrahi T. Getting rid of patients: contradictions in the socialisation of internists to the doctor-patient relationship. Sociol Health Illn. 1985;7:214-35. [PubMed ID: 10272552]

29. Mays N, Pope C. Qualitative Research in Health Care. London: BMJ Publishing Group; 1996.

30. Thomson R, Lavender M, Madhok R. How to ensure that guidelines are effective. BMJ. 1995;311:237-42. [PubMed ID: 7627044]

31. Locock L, Dopson S, Chambers D, Gabbay J. Understanding the role of opinion leaders in improving clinical effectiveness. Soc Sci Med. 2001;53:745-57. [PubMed ID: 11511050]

32. Mittman BS, Tonesk X, Jacobson PD. Implementing clinical practice guidelines: social influence strategies and practitioner behaviour change. QRB Qual Rev Bull. 1992;18:413-22. [PubMed ID: 1287523]

33. Ferlie E, Fitzgerald L, Wood M. Getting evidence into clinical practice: an organisational behaviour perspective. J Health Serv Res Policy. 2000;5:96-102. [PubMed ID: 10947554]

34. Grol R, Grimshaw J. Evidence-based implementation of evidence-based medicine. Jt Comm J Qual Improv. 1999;25:503-13. [PubMed ID: 10522231]

35. Moulding NT, Silagy CA, Weller DP. A framework for effective management of change in clinical practice: dissemination and implementation of clinical practice guidelines. Qual Health Care. 1999;8:177-83. [PubMed ID: 10847875]

36. Greenhalgh T, Robert G, Macfarlane F, Bate P, Kyriakidou O. Diffusion of innovations in service organizations: systematic review and recommendations. Milbank Q. 2004;82:581-629. [PubMed ID: 15595944]

37. Riemsma RP, Pattenden J, Bridle C, et al. A systematic review of the effectiveness of interventions based on a stages-of-change approach to promote individual behaviour change. Health Technol Assess. 2002;6(24):1-231. [PubMed ID: 12433313]

38. McKinlay JB. From “promising report” to “standard procedure”: seven stages in the career of a medical innovation. Milbank Mem Fund Q Health Soc. 1981;59:374-411. [PubMed ID: 6912389]

39. Rogers EM. Diffusion of Innovations. New York: Free Press; 1995.

40. Fraser SW, Greenhalgh T. Coping with complexity: educating for capability. BMJ. 2001;323:799-803. [PubMed ID: 11588088]

41. McDonald CJ. Medical heuristics: the silent adjudicators of clinical practice. Ann Intern Med. 1996;124:56-62. [PubMed ID: 7503478]

42. Bates JH, Young MP. Applying fuzzy logic to medical decision making in the intensive care unit. Am J Respir Crit Care Med. 2003;167:948-52. [PubMed ID: 12663335]

43. Gabbay J, le May A. Evidence based guidelines or collectively constructed “mindlines”? Ethnographic study of knowledge management in primary care. BMJ. 2004;329:1013. [PubMed ID: 15514347]

44. Greenhalgh T, Robert G, Macfarlane F, et al. Storylines of research in diffusion of innovation: a meta-narrative approach to systematic review. Soc Sci Med. 2005;61:417-30. [PubMed ID: 15893056]

45. The SUPPORT Principal Investigators. A controlled trial to improve care for seriously ill hospitalized patients. The study to understand prognoses and preferences for outcomes and risks of treatments (SUPPORT). JAMA. 1995;274:1591-8. [PubMed ID: 7474243]

46. Devereaux PJ, Anderson DR, Gardner MJ, et al. Differences between perspectives of physicians and patients on anticoagulation in patients with atrial fibrillation: observational study. BMJ. 2001;323:1218-22. [PubMed ID: 11719412]

47. Chalmers I. What do I want from health research and researchers when I am a patient? BMJ. 1995;310:1315-8. [PubMed ID: 7773050]

48. Sheldon TA. Making evidence synthesis more useful for management and policy-making. J Health Serv Res Policy. 2005;10(Suppl 1):1-5. [PubMed ID: 16053579]

49. Muir Gray JA. Evidence based policymaking [Editorial]. BMJ. 2004;329:988-9. [PubMed ID: 15514318]

50. Norman GR. Examining the assumptions of evidence-based medicine. J Eval Clin Pract. 1999;5:139-47. [PubMed ID: 10471222]

51. Bradley F, Wiles R, Kinmonth AL, Mant D, Gantley M. Development and evaluation of complex interventions in health services research: case study of the Southampton Heart Integrated Care Project (SHIP). BMJ. 1999;318:711-5. [PubMed ID: 10074018]

52. Sackett DL, Wennberg JE. Choosing the best research design for each question. BMJ. 1997;315:1636. [PubMed ID: 9448521]

53. Grol R, Wensing M. What drives change? Barriers to and incentives for achieving evidence-based practice. Med J Aust. 2004;180(Suppl):S57-60. [PubMed ID: 15012583]

54. Berwick DM. The clinical process and the quality process. Qual Manag Health Care. 1992;1:1-8. [PubMed ID: 10131641]