The rocky road: qualitative research as evidence
ACP J Club. 2001 Jan-Feb;134:A11. doi:10.7326/ACPJC-2001-134-1-A11
Health research grows ever more holistic in its understanding of health and illness, more comprehensive in empirical questions, and more interdisciplinary in approaches. As we investigate social and personal aspects of health, we become drawn to social science knowledge in addition to biomedical and epidemiologic perspectives. With this multidisciplinary basis for clinical knowledge comes “qualitative” research, an empirical method seemingly at odds with traditional rules of evidence and with the hierarchy of research designs propounded by evidence-based medicine (1, 2). The philosophy of evidence-based medicine suggests that as ways of knowing, induction is inferior to deduction, subjective perceptions are inferior to objective quantification, and description is inferior to inferential testing. Qualitative tenets invert these imperatives: Investigators aim for inductive description using subjective interpretation.
New readers of qualitative reports thus confront 3 issues. First, does qualitative inquiry belong at the bottom of evidence-based medicine's traditional research design hierarchy? Second, if familiar rules of evidence do not apply, what features distinguish a noteworthy study? Third, what is the clinical usefulness of qualitative research information compared with that of quantitative information?
“Qualitative” health research is best characterized not by its qualitative data but by several assumptions about what social reality is like (ontology) and how we can best learn the truth about this reality (epistemology). These premises differ from those required to conduct, analyze, and believe in the results of quantitative research, such as a randomized controlled trial.
Quantitative clinical research typically addresses biomedical questions. It tests hypothesized causal relations between quantified variables. (These include, of course, statistically “qualitative” variables, which are those that can be categorized and counted.) Quantitative research questions require key ingredients. First, they require variables that describe natural phenomena coupled with a belief that these variables exist and can be measured objectively. Second, they require a belief that causal laws govern the behavior of the variables. Third, they need a testable (falsifiable) hypothesis about a statistical relation between the variables. The resulting research question asks whether one variable (e.g., an intervention) quantitatively affects another (e.g., health status) and demands a “yes” or “no” answer. (This glosses over the falsification imperative, or the idea that we can only get “no” versus “maybe” answers by applying deductive logic, as promoted so convincingly by Karl Popper. Converting “maybe” to “yes” relies on logically fallible induction. Despite evidence-based medicine's traditional distaste for induction, this leap is made of necessity by the users of hypothetico-deductive studies.) Critical appraisal addresses confidence in this answer, given researchers' adherence to such standards of rigor as controlling for influences beyond prespecified variables and preventing subjective expectations from distorting objective measurement or analysis (1, 2, 4).
Qualitative research explores and describes social phenomena about which little is presumed a priori. It interprets and describes these phenomena in terms of their meaning and helps us make sense of these meanings. Qualitative reports offer access and insight into particular social settings, activities, or experiences. In contrast to quantitative approaches, qualitative research neither presumes that predetermined variables or causal relationships exist nor tries to find them. Paradoxically, the very features that strengthen the truthfulness of a quantitative study weaken a qualitative one. Prespecifying variables prohibits exploring and discovering other factors that may be important. Presupposing variables and causal laws precludes other meaningful models of social phenomena. Tacit knowledge, which quantitative researchers eschew as a bias, serves as an interpretation tool, a source of data, and a topic of analysis for qualitative researchers. Methodologic rigor derives from the depth of researchers' engagement with the data, the credibility of their interpretations, and others' agreement with narrated findings (5, 6). No single correct way exists to formulate interpretive conclusions; many different but credible findings could emerge from a given study. However, countless non-credible ones could also emerge: Qualitative research produces appraisable findings, but assessment involves nuanced, topic-specific judgments. Although critical-appraisal guides for qualitative studies vary in style and emphasis, they address similar issues (7-13). Key appraisal considerations are summarized elsewhere for the clinical user (5, 6).
Comparing qualitative and quantitative approaches
To illustrate the distinctive approaches and knowledge contributions of the 2 methods, consider a research program aimed at understanding behavior at traffic lights (14). A quantitative researcher might hypothesize that red lights make cars stop, whereas green ones make them go. Researchers could randomly expose cars to red and green lights and record stopping and going responses. The study might disprove the null hypothesis that light color has no effect on stopping and going; it might estimate the likelihood of a car running a red light or sitting through a green one. Other quantitative researchers might be interested in drivers' rationales and whether they determine traffic behavior. They might administer a standardized questionnaire that would prespecify all plausible reasons for stopping and going, ask drivers to indicate which ones apply, and help quantify the association between certain rationales and driving behaviors.
Qualitative researchers would approach traffic behavior as a symbolically mediated social phenomenon. At the outset, the researchers would assume they know little about people's reasons for doing what they do or what their actions mean. Researchers would ask, “What do these lights mean to drivers, and why do they respond the way they do?” They might interview drivers, read traffic law, observe behavior at traffic lights, and try driving. On the basis of various information sources, they would develop a theory of driving behavior and report that green means “go” and red means “stop” (or to some, red means “go if you can get away with it”). The open-ended research question allows the researchers to discover the yellow light and its role.
The labels “qualitative” and “quantitative” are convenient, but they also oversimplify and sometimes mislead. One might just as well call the methods “chocolate” and “vanilla.” (Alternative labels for quantitative versus qualitative research include positivist versus interpretivist, deductive versus inductive, or experimental versus naturalistic; all of these labels tend to oversimplify, and they often lead to unnecessary misunderstandings and philosophical feuds.) As more qualitative research appears alongside quantitative research in the clinical literature, practitioners of evidence-based medicine may encounter 2 “rocky roads” toward understanding the unique contributions of each research method.
The first road follows an instinctive but philosophically futile desire to reconcile the 2 traditions' methodologic premises or standards of evidence. The best advice regarding this road is simply, “Don't go there” (15-17). Returning to the traffic example, which research provides a better representation of reality? The quantitative study's statistical findings are correct, and a qualitative approach could not generate these probabilities. But the former provides very limited information about what is “really” going on in this case. Although traffic light behavior seems law-like, it is governed not by natural laws but by social rules. We deal not only with brakes, accelerators, and light colors but also with differences of interpretation, lawfulness, and even social gestures. The qualitative study investigates this social meaning, which is at the heart of the action. Methodologic appropriateness depends foremost on the research question. Quantitative methods best answer questions about biomedical or natural causation. Qualitative methods best answer questions about social meanings. Applying either method to the other's domain of “reality” generates inadequate and potentially misleading evidence.
The second rocky road visits the appropriate places of the 2 traditions as contributions to knowledge and informed practice. This road is well worth traveling. Can we use qualitative evidence the same way that we use quantitative evidence? Can we combine quantitative and qualitative methods or use one to inform the other?
Which study yields more useful information depends on what we want to do. If we want to cross the street, the quantitative traffic study allows us to estimate the likelihood of getting run over, and on this basis, we can take an informed chance. However, the implied “law” that traffic light color makes cars go and stop would be useless if we want to reform drivers or simply understand them. The qualitative study gives more insight into why people do what they do. Intervening on the basis of evidence from this study (e.g., promoting the idea that yellow means “slow down”) will change the very patterns that the researchers so painstakingly explored and described. Future researchers (of either approach) will reach different conclusions. Further, evidence is never enough to guide clinical application, not only because values come into play (1) but also because clinical situations, clinicians, and patients always differ from research contexts and participants. Generalizing either type of research requires a rather “unscientific” inductive leap, invoking ideas beyond those provided by the research itself.
Some interdisciplinary health researchers suggest that qualitative and quantitative methods should alternate: The former generates hypotheses, the latter tests them, and so on (18-20). For certain research topics, both types of evidence can contribute to understanding. However, this reciprocal relationship is neither necessary nor always wise. Each study contributes to knowledge on its own (21). Leaping from one tradition to the other is ontologically and epistemologically hazardous. In particular, qualitative findings lose integrity when reduced and operationalized into quantitative variables (for example, such reduction has become routine practice in developing quality-of-life instruments). Practitioners of the 2 research approaches can learn from each other but only by creatively adapting each other's ideas rather than by following a systematic logic.
The 2 health-research traditions are distinctive in what they look at, how they see it, and what they can learn. Contrary to popular misunderstandings, both rely on systematic empirical observation, and both generate empirical evidence. Appealingly, they address essentially different questions about the world, so their findings tend to complement rather than compete as contributions to knowledge. By considering qualitative evidence, clinicians gain new and useful insights about social phenomena in health that are simply not available in any other flavor.
Mita K. Giacomini, PhD
Centre for Health Economics and Policy Analysis
McMaster University
Hamilton, Ontario, Canada
7. Altheide DL, Johnson JM. Criteria for assessing interpretive validity in qualitative research. In: Denzin N, Lincoln Y, eds. Handbook of Qualitative Research. London: Sage Publications; 1994:485-99.