A TYPOLOGY OF RESEARCH PURPOSES AND ITS RELATIONSHIP TO MIXED METHODS

Isadore Newman
Carolyn S. Ridenour
Carole Newman
George Mario Paul DeMarco, Jr.

Issues of validity of social science research have never been more central. The expanding array of methodologies that are accepted as pathways to new knowledge and understanding seems nearly limitless. To the questions "What counts as research?" and "What counts as 'data' or 'representation'?," there are increasingly diverse answers. Confirming the validity of one's research, however, is no less important. In fact, establishing validity is even more consequential as methodological choices expand. Researchers strengthen validity (e.g., legitimacy, trustworthiness, applicability) when they can show the consistency among the research purposes, the questions, and the methods they use. Strong consistency grounds the credibility of research findings and helps to ensure that audiences have confidence in the findings and implications of research studies. These audiences may range from practitioners, to policy makers, to the public.

In this chapter, we discuss the links among purpose, methods, and implications of study findings. We suggest a tool for thinking through the consistency of those connections. Much has been written about systematically approaching the "what" of social science research, that is, systematically looking at the questions we ask and the methods we use. Very little has been written about systematically approaching the "why" of social science research, that is, systematically considering the purposes or reasons for carrying out the studies we conduct. These considerations are necessary to truly understand the questions and the most appropriate method(s) for answering them.

A CIRCUITOUS ROUTE TO THE TYPOLOGY OF PURPOSE

Our original goal in writing this chapter was to present a typology of research questions. Pursuing that goal led us through several winding pathways to an unintended end result: not a typology of research questions but rather a typology of research purposes. We found that without a clear understanding of the purpose behind the questions, we were inhibited when identifying the most appropriate methods to investigate those questions. Even though we ended up at a quite different place from where we originally planned to go, we were convinced that understanding this typology of purposes is necessary for the researcher to be able to identify and collect relevant data. Because purposes are often complex, the research questions frequently require multiple methods that adequately reflect this complexity. We discovered that there is a logical link among what are often complex research purposes, the questions that are necessary to reflect those purposes, and the potential need for mixed methods.

At the outset, we anticipated that our contribution would be a model of types of questions with links to suggested methodologies. For example, one might pose the question "Is Teaching Strategy A (lecture and discussion) a better choice for the general math class in a middle school than Teaching Strategy B (computer-based problem solving)?" This appears at first glance to be a fairly simple question that immediately conjures up an image of experimental or quasi-experimental design. However, there are many purposes that might drive a question such as this. One underlying purpose might be to raise students' performance on standardized tests.
Another underlying purpose might be the need to meet various learning styles of students. Still another underlying intent might be to diversify the representation of ethnic cultures in classroom activities. We failed to draw a direct link to methods. As we struggled to create the typology of questions, each question led us to a dead end.

We took a detour, then, in our pursuit of a typology of research questions. Realizing that the question with which one begins potentially comes from one or more purposes, we abandoned our original direction. Without having one's purpose (or purposes) clarified, and without time to reflect on that purpose, one cannot have a question that will directly dictate the research methodology. The researcher must understand the purpose of his or her study in all its complexity so as to make appropriate methodological choices. The research question alone will not produce links to methods unless the question is thought through seriously, as well as iteratively, and becomes reflective of purpose. In other words, we concluded that the research question is necessary but not sufficient to determine methodology. By considering the question and purpose iteratively, one can eventually get to a design or set of designs that more clearly reflects the intent of the question.

OBJECTIVE OF THIS CHAPTER

The objective of this chapter is to demonstrate that there is a link between understanding the purpose of one's research and selecting the appropriate methods to investigate the questions that are derived from that purpose. We argue that there is an iterative process between considering the research purpose and the research question. Out of this iterative process, decisions about methods are made. We make the case that when the purpose is complex (as it often is), it is necessary to have multiple questions, and this frequently necessitates the use of mixed methods. We explain how the typology of purpose can help social scientists in forming research questions and in making logical decisions about the ways in which they plan and conduct their studies. We suggest that, logically, in addition to qualitative methods and quantitative methods, mixed methods are frequently aligned with purposes.

After providing a professional, academic, and historical context for this typology, we present the typology itself. The typology is roughly hewn, tentative, fluid, and flexible. There is a risk that this typology will be interpreted as a "model" or a rigid framework that boxes in and limits the researcher. We are adamantly opposed to that; this is a tool intended not to limit but rather to help researchers organize their thinking so that they can more effectively develop appropriate research designs that will achieve their intended purposes. This schema is meant as a tool for thinking through research problems; it is a tool that will free researchers from dichotomous qualitative/quantitative thinking, and it is a tool through which researchers can test assertions. In other words, it is clearly a "starting" place and not a "stopping" place for researchers' thinking.

We are familiar with research situations such as the following hypothetical example where insufficient attention was given to the "why." A study was designed to examine the impact of laser disc technology on science achievement among middle school students.
Bypassing any focus at all on why the study was being done (or the purpose for it), the researcher embarked on designing what would undoubtedly be seen as a scientifically strong and rigorous study. Samples were selected, valid and reliable instruments were created, and data were collected and analyzed—all accomplished by following rigorous protocols. Only when the results failed to serve the actual purpose of the study did the researcher pause to consider what the actual purpose was. The purpose was indeed to measure student learning gains, but the study was also conducted to solicit community support and to obtain comparative costs of the two curricula. Reporting findings in the form of test scores to the board of education failed to provide all the board needed to know about cost and parental support. Simply, but systematically, exploring the purpose (or purposes) of a research study is the intent of what we suggest in this chapter. If this researcher had systematically studied the purposes of the research, then the design, data collection, analysis, results, and implications would have fulfilled the purposes more effectively.

We attempt to show how the typology is an expansion of an earlier framework we developed, the qualitative-quantitative interactive continuum (Newman & Benz, 1998). Qualitative and quantitative research make up a false dichotomy, we contended in that book. Debating their comparative worth is pointless because multiple research perspectives enable social science researchers to approach questions of interest within a wide variety of ways of knowing. There are many right ways to approach research, not only one right way. One's purpose provides a way to determine the optimal path to studying the research question. Along the continuum are entry points through which a researcher can locate himself or herself and the study. An ethnographic interview and a holistic way of knowing will not empower the researcher interested in measuring heart rates, lung volume, and weight loss in a study of wellness education. Neither will a standardized paper-and-pencil test help a researcher to uncover what it means to a second-grader to learn math. Here, we suggest that the typology might lead both to a process of developing good research questions (purposefully grounded) and to making subsequent effective methods decisions.

♦ Background

Over the past 30 years, a debate has taken place between two groups of researchers in the social sciences: those who are trained to use quantitative research methods and advocate their use as most appropriate and those who are trained to use qualitative research methods and advocate their use as most appropriate. These two groups of researchers claim different views of reality. The term quantitative refers to a research paradigm designed to address questions that hypothesize relationships among variables that are measured frequently in numerical and objective ways. The term qualitative refers to a research paradigm designed to address questions of meaning, interpretation, and socially constructed realities. Furthermore, mixed methods refer to using perspectives of both at particular points in a research project (Tashakkori & Teddlie, 1998). More than 15 years ago, we (Benz & Newman, 1986) began examining our own work and the writings of other research methodologists in light of our discomfort with such a fragmented view of inquiry.
Searching for a different way of conceptualizing social science research, we believed strongly in the unity of science. The research question, we strongly believed, was the key; understanding the centrality of the question guided the researcher in all other decisions during a research project. Through feedback from our students, we constructed the "interactive continuum" (Newman & Benz, 1998), which is one way of presenting mixed methods. We found that such a model helped students to understand research questions and methods in a coherent way. They became comfortable in how to design a study, how to let the research question lead the design, and how to assess the quality of the studies they read in journals. In our model ("an interactive continuum," not a dichotomy between qualitative and quantitative [see Figure 6.1]), we emphasized four major principles:

1. The research question dictates the selection of research methods.
2. The assurance of "validity" of research—both measurement validity and design validity—is central to all studies.
3. The interactive continuum model is built around the place of "theory."
4. Consistency between question and design is the standard criterion for planning studies of high quality and scientific value. (Newman & Benz, 1998)

Figure 6.1. Qualitative-Quantitative Interactive Continuum of Research
SOURCE: Newman and Benz (1998).

With these four principles as a backdrop, in this chapter we set out to expand on that model. We begin by extending our discussion of the fourth principle with a focus on the researcher's purpose as even more fundamental than the researcher's question—our bottom line in previous work. Our circuitous journey from failed attempts to develop a typology of questions has led us to a typology of purposes. In this chapter, we argue that the quality and indeed the meaningfulness of research to the public are enhanced if purposefulness is clearly a part of the researcher's thinking. We argue that a way of systematically ordering one's thinking about purpose can be a valuable tool for researchers to accomplish the linkages among their research questions, why they intend to carry out their studies, their methods, and their interpretations of findings. Traditionally, researchers have focused on research design. Less focus has been placed on the reasons for conducting the study, the consequences of the findings, and the potential audiences for the study. We want to shed light on that dynamic—the dynamic of purpose—in this chapter.

Altheide and Johnson (1994) captured the spirit of why we think purpose has gained ground over the past several decades as central to the researcher's work. They claimed that prior to the current vast array of legitimized "ways of knowing," there was more unity regarding the fact that research was knowledge. That unity no longer exists. Altheide and Johnson stated,

What has changed is the purpose of research, and what those standards for assessing the purpose might be. Research is no longer coupled with knowledge, but has been given multiple choices (such as liberation, emancipation, programmatic politics, expressive "art"). Depending on one's choice, research is defined accordingly. (p. 487)

In other words, over the past few decades, the role that social science research plays has become broader and virtually unbounded.
During the 1960s, we might have been taught that the role of social science research was prediction and control. By the year 2000, the role of research was not so easily and simply categorized. When a common unified understanding of purpose becomes a thing of the past, a typology of contemporary purposes seems justified.

Not only have the purposes of research in general expanded over the past four decades, but also researchers increasingly are open to the fact that the purposes of a particular study may be multiple and may change as the study unfolds. During the era when positivism dominated, researchers resisted such flexibility and openness; the research hypothesis served not only to focus the study but also to build boundaries around it. The researcher typically followed the hypothesis with the data collection, analysis, and conclusions in a linear fashion, deliberately avoiding being "sidetracked" and ignoring distractions along the way. Contemporary researchers, on the other hand, tend to appreciate the fact that research projects are not linear but instead twist and turn and sometimes lead in unforeseen directions. Purposes drive the research question, but purposes can change over the course of the study. Purpose changes lead to question changes, which can lead to methods changes. Delgado-Gaitan (2000), in her study of family literacy in a Latino community, described how her purpose changed during the study itself: "At that point, I began to notice a shift in my research focus from concerns with literacy activities and processes in home and school to the process of empowerment" (p. 397). Such acquiescence to the natural unveiling of phenomena as they are being studied is not surprising; this is the essence of naturalistic inquiry (qualitative research) itself (Lincoln & Guba, 1985). Historically, the dominance of positivism was antithetical to naturalistic assumptions; it was built on assumptions of variable control. Moreover, researchers aspired to conditions of stronger control for better internal validity; the tighter the control, the less likely fluctuating purposes would be tolerated, let alone recognized.

♦ Research Purposes

The obvious purpose of research from any epistemological perspective is to answer questions. But stopping here, we realize now, avoids dealing with deeper and more complex intentions and purposes that go beyond mere "questions." The deeper purpose of a research study is the reason for doing it. The research question does not provide the reason for addressing it. The first benefit for the researcher who moves beyond considering only the research question is being able to decide whether the study is worth pursuing at all.

Haller and Kleine (2001) provided an example of purpose and the important role it plays in thinking through a research project. They referred to a study by Finn and Achilles (1990) in which the research question was "What is the effect of making a substantial reduction in class size on student achievement?" They characterized this as the research problem, but Haller and Kleine (2001) noted that the purpose was not actually about class size. They stated,

The problem of their study is not class size. The problem is that many primary children are not learning to read and do arithmetic as well as they should. Reducing class sizes in the primary grades is a possible solution to that problem of low achievement.

The fact that children were not achieving is why the study was needed.
Testing the relationship between classroom conditions (in this case, class size) and academic achievement was the research study. If the researchers found a relationship between reduced class sizes and increased test scores, then the need would be beginning to be filled. While one never can show a complete causal relationship, the researcher can support an impact (or fail to support it) in terms of accounted-for variance. If the relationship was not confirmed, then researchers could move to other potential ideas to fulfill the need to identify those dynamics that might affect student achievement. To accomplish this overall purpose very frequently requires mixed methods.

When a research study has a purpose, there is a reason for carrying it out. The purpose for a social science research study is rooted in the unique conceptualization in the researcher's thinking about the study. The purpose is not the question. Purpose is not design. Purpose is not methodology. Purpose is not data collection or analysis. Purpose is not categories of research questions (Janesick, 2000), nor is it categories of types of studies (e.g., ethnography, life history, case study). Purpose is a focus on the reasons why the researcher is undertaking the study. And purpose should not be kept disconnected from the research question and the methods, as it often is. Researchers should not be blind to the purpose (or purposes) of their work. Given these definitions, the "purpose" of the research study might equally be written as the "rationale" of the research study, the "aim" of the research study, or the "objective" of the research study. The word "intention" implies that the research study has intent—that there is a "reason" (or reasons) for it to be done. The researcher should make that intention visible. That reason (or reasons) should describe why the researcher is conducting the study. Within a written research proposal, the purpose is sometimes reflected in the section called the "justification for the study" or the "importance of the study."

Thoroughly considering the purpose of the study helps the researcher in other ways. For example, it links to the study's implications. That is, purposefulness is revealed later when the researcher discusses what the study results "mean." After the analysis and interpretation are completed, results are documented. Drawing implications from those results becomes the final step. Knowing the purpose returns to the researcher's aid. The implications of the findings flow from the original purpose—ideas that can be consistent with the original purpose, resistant to the original purpose, or even contrary to the original purpose. For example, in the Finn and Achilles (1990) study, the authors drew implications from their study and its findings because the study had a purpose (failure of students to learn reading and math) supporting it or a reason for conducting it. Their results can be crafted to meet the needs of those who are obligated to organize school environments that might best be related to student achievement.

Figure 6.2. Thinking Through the Research Process

♦ Thinking That Leads to the Research Question and Methods

Studying the notion of purpose, we began to believe that there could be a typology of purposes or an ordering of purposes for social science research.
A typology might be helpful for researchers as they are thinking through the research process, which might be portrayed as having at least six components as depicted, in a general way, in Figure 6.2. In Figure 6.2, we show that the purpose initiates the research study. There is intent, a reason, and a need. That purpose is identified through the lens of the researcher. How does the researcher experience the world vis-à-vis that need or that intent? Researchers' lenses are their autobiographies, who they are, their lives—all of the factors about them, including their values, beliefs, experiences, age, and gender as well as their social, psychological, and spiritual development. Their perspectives (or lenses) are what screen or filter, cloud, or magnify their views as they think through theoretical frameworks (if applicable) that relate to their purpose. The metaphor of "lens" could include the construct of "filter" as well. The lens provides the focus, and simultaneously the filter blocks out what detracts from that focus. The lens is coherent. Only one lens at a time works to the researcher's advantage. More than one simultaneous lens leads to fuzzy vision or potentially falling down the stairs of good research intent.

At the same time, the researcher's lens views this purpose through the work of others who might have studied the topic. This dialogic process assists the researcher in forming a research question that not only "gets answered" but does so with answers that fulfill the purpose that originated his or her thinking. Because the research purpose and the research question are considered iteratively, the arrows between the two go both ways. The researcher may consider a purpose and then a question. The question may generate another possible purpose. That purpose feeds back into a new question. From this iterative process, decisions are made about research methods that might be appropriate.

While Figure 6.2 is portrayed neatly and linearly, research is never linear. Faithfully reproducing on paper a messy and dynamic process is impossible. The sequence, as suggested, is intellectually logical, but the six events in the figure overlap and feed back on one another. The researcher could begin a study with a certain "justification" (or purpose) but conclude the study with newly found implications (or meanings). In other words, the purpose at the end of the study could very well be different from that at the beginning. Hunter and Brewer (Chapter 22, this volume) raise our awareness that the theoretical frame from which the researcher operates needs careful attention, particularly in mixed methods research. They argue that, within the same study, theory might be both constructed and tested. All of these concerns do not take away from the responsibility of the researcher to articulate up front what the purpose is. Implications of the findings flow from the detours and reconceptualizations through which the original purpose has traversed. That initial purpose does not disappear, but the fact that the results shift the purpose in itself might be the important finding or implication.

♦ Typology of Research Purpose

The nine categories listed in the next paragraph make up the typology of purpose, as we have tentatively structured it at this point in time. We recognize the limitations of this claim at the outset. The only intent here is to provide a framework through which researchers could move to clarify their thinking most effectively.
We do not claim that this typology is either exhaustive or of further value beyond its use as a thinking tool here. Like the interactive continuum of qualitative and quantitative research that we designed several years ago, this typology is intended as a conceptual tool. The full typology appears in Table 6.1. There is overlap in these specific categories. For example, "Inform constituencies" logically encompasses "Examine the past." The nine general purposes for social science research could be categorized as follows:

1. Predict.
2. Add to the knowledge base.
3. Have a personal, social, institutional, and/or organizational impact.
4. Measure change.
5. Understand complex phenomena.
6. Test new ideas.
7. Generate new ideas.
8. Inform constituencies.
9. Examine the past.

TABLE 6.1 A Typology of Research Purposes

1. PREDICT.
   a. Build general laws.

2. ADD TO THE KNOWLEDGE BASE.
   a. Confirm findings.
   b. Replicate others' work.
   c. Reinterpret previously collected data.
   d. Clarify structural and ideological connections between important social processes.
   e. Strengthen the knowledge base.

3. HAVE A PERSONAL, SOCIAL, INSTITUTIONAL, AND/OR ORGANIZATIONAL IMPACT.
   a. Deconstruct/reconstruct power structures.
   b. Reconcile discrepancies.
   c. Refute claims.
   d. Set priorities.
   e. Resist authority.
   f. Influence change.
   g. Promote change.
   h. Promote questioning.
   i. Improve practice.
   j. Change structures.
   k. Set policy.

4. MEASURE CHANGE.
   a. Measure consequences of practice.
   b. Test treatment effects.
   c. Measure outcomes.

5. UNDERSTAND COMPLEX PHENOMENA.
   a. Understand phenomena.
   b. Understand culture.
   c. Understand change.
   d. Understand people.

6. TEST NEW IDEAS.
   a. Test innovations.
   b. Test hypotheses.
   c. Test new ideas.
   d. Test new solutions.

7. GENERATE NEW IDEAS.
   a. Explore phenomena.
   b. Generate hypotheses.
   c. Generate theory.
   d. Uncover relationships.
   e. Uncover culture.
   f. Reveal culture.

8. INFORM CONSTITUENCIES.
   a. Inform the public.
   b. Heighten awareness.
   c. Public relations.
   d. Enlighten.
   e. Hear from those who are affected by treatment/program.
   f. Describe the present.
   g. Comply with authority.

9. EXAMINE THE PAST.
   a. Interpret/reinterpret the past.
   b. Acknowledge past misunderstandings.
   c. Reexamine tacit understandings.
   d. Examine social and historical origins of current social problems.

From these broad categories, we have delineated more specific purposes, as can be seen in Table 6.1. Each of the categories of purposes is briefly described in this section. (Figure 6.4 later demonstrates conceptually the iterative process between the purpose and the research question that helps the researcher to determine which methods to employ.) We offer these brief descriptions to show a general way of thinking about a variety of purposes, similar in conceptualization to what one of the most well-regarded research methodologists, Fred Kerlinger, constructed years ago as he described "science" in more than one way (Kerlinger, 1964). We are defining what Kerlinger might call science in nine different categories of purpose. (He loosely conceptualized four ways of thinking about scientists: the individual in the laboratory, the brilliant thinker "aloof from the world"
(Kerlinger, 1964, p. 8), the person working to improve humankind's lot with technological progress, and the person attempting to build theory to explain phenomena.)

Nearly 40 years ago, Kerlinger, a traditional quantitative methodologist, described the researcher's struggle with questions and purpose. As he wrote about social science research, he acknowledged science as process and as product. He was unwittingly supporting the holistic nature of science and, with us, showed that qualitative and quantitative research are not antithetical to one another. Every research study has elements of both qualitative and quantitative assumptions (Newman & Benz, 1998). Moreover, we believe that Kerlinger (1964) was describing this same phenomenon in the following description of the "scientist" as he also brought in ideas of Dewey:

The scientist will usually experience an obstacle to understanding, a vague unrest about observed and unobserved phenomena, and a curiosity as to why something is as it is. His first and most important step is to get the idea out in the open, to express the problem in some reasonably manageable form. Rarely or never will the problem spring full-blown at this stage. He must struggle with it, try it out, and live with it. Dewey says, "There is a troubled, perplexed, trying situation, where the difficulty is, as it were, spread throughout the entire situation, infecting it as a whole." Sooner or later, explicitly or implicitly, he states the problem, even if his expression of it is inchoate and tentative. Here he intellectualizes, as Dewey puts it, "what at first is merely an emotional quality of the whole situation." In some respects, this is the most difficult and most important part of the whole process. Without some sort of statement of the problem, the scientist can rarely go further and expect his work to be fruitful. (pp. 13-14)

We suggest that Kerlinger's "getting it out in the open" is part of the path to exploring the reasons for pursuing the study. Kerlinger's description here is one of the researcher deep in thought and study, not one of the researcher automatically or superficially approaching his or her research. His warning that the research problem does not "spring full-blown" is a warning about relying on routine, automatic, and superficial shortcuts. Research questions also do not spring full-blown but rather require reflective thinking. This process often leads to a mixed methods approach, as multiple purposes are frequently driving one's study. Once again, Kerlinger was known as a quantitative researcher, and his writing was in that genre. However, his sensitivities to science and social science research were more holistic than most might recognize. It is to increase the likelihood that the researcher's work will be "fruitful," in Kerlinger's word, that we suggest this typology.

The nine categories in the typology are not independent; they may be interdependent and overlapping. We present each one with an explanation.

1. Predict. Social science research can serve the needs of explaining social and behavioral phenomena by testing theory. Struggling with a lack of understanding about teaching and learning, for example, the researcher can empirically test tentative relationships that might explain their success or effectiveness. Fulfilling the purpose of testing relationships helps to build general laws of human interactions that allow us to predict what is yet to happen.
Kerlinger (1964), nearly 40 years ago, discussed this view of science as his preference for social science research in the field of education.

2. Add to the knowledge base. Social science researchers investigate phenomena to add to what is known—knowledge that has intrinsic value. Researchers conduct studies to strengthen the knowledge base. Clarifying what is known as well as correcting faulty knowledge drives some researchers' work. The knowledge base undergirds many decisions that determine public policy. The knowledge base about schooling, for example, becomes a reservoir accessible to many audiences.

3. Have a personal, social, institutional, and/or organizational impact. Breaking down policies and practices to reveal how they work drives research that has a purpose of subsequently rebuilding them to enhance their properties of equity and justice. In educational research, for example, schooling practices can produce discrepant outcomes in different constituencies; studies can reveal and lead to altering such differences. Strategic planning research assists organizational groups in structuring their work from high- to low-priority areas. Furthermore, those at the margins of schooling or their advocates examine their own experience as it is juxtaposed to the dominant discourse. Some critical researchers, for example, study with an intent to influence and change that which is being studied. Researchers pursue lines of inquiry that analyze both current status and future potential of organizations. Researchers can engage inductively in examining institutions so as to generate questions about them.

4. Measure change. Researchers design studies that aim to link treatments to their effects. We use the word measure here to mean "to quantify." Researchers construct instruments to measure the outcomes of behavioral innovations. For example, a researcher may construct a performance assessment tool for professionals in training and obtain validity and reliability estimates on it. Changes in policy, changes in professional practice, and changes in the demographics of constituencies that professionals in any field serve are often the purposeful targets of researchers.

5. Understand complex phenomena. Research intended to achieve understanding is made up of studies that delve below the surface of the phenomena, that is, investigations that have the goal of interpreting the meaning of phenomena. In other words, rather than measuring phenomena, the purpose of studies within this category is to understand the meaning of the phenomena. In addition, research that seeks to explicate the behaviors, rituals, language, symbols, values, and social structures of a particular group of people intends to understand the culture of that group of people.

6. Test new ideas. Researchers formulate statements of relationships among variables and then collect data on those variables to test the probability of the relationships occurring. Researchers aid innovators by designing studies to assist these entrepreneurs in assessing the extent to which their ideas might be supported. Researchers can design studies to examine whether or not constituents' needs are being met. Researchers can be commissioned as members of problem-solving teams.

7. Generate new ideas. Researchers, in addition to testing new ideas, can be part of a process of exploring ideas. For example, naturalistic researchers participate in the life of a social group, open to whatever might be revealed to them.
Without hypotheses, such research is done for the purpose of allowing new ideas to be generated. Without formal restrictions on one's lenses, the researcher maintains a welcoming openness to what processes unfold naturally. These emerging ideas, then, can be subsequently tested (Newman & Benz, 1998), but their emergence is the purpose of the studies included here.

8. Inform constituencies. Researchers carry out studies that serve accountability needs. Publicly accountable agencies are obligated to serve the needs of the public in a democracy. Employees in these agencies are public employees, working in organizations that are accountable to that public. Researchers can be involved in reporting results of studies of these public programs. Nonpublic agencies are accountable to other governance structures; research can serve accountability purposes here as well. Similarly, societal institutions can serve professional needs by being accredited by professional organizations. Researchers serve as investigators to provide the data needed.

9. Examine the past. Researchers can study the historical origins of current social dynamics, patterns, and problems. Examining what has occurred before is the purpose of many studies that aim to interpret or reinterpret the history of social life.

♦ Application of the Typology

We intend this typology as an aid to researchers to think through the "why" of their research as systematically and rigorously as they have traditionally thought through the "what." With experience, routine ways to think about one's work are developed; this is natural for any expert in his or her field. The researcher develops some automatic responses and shortcuts in his or her work as well. These ways of thinking lend efficiency to the researcher's work. To the extent that automatic thinking attenuates the thoughtfulness that is needed to consider the purpose for which one is doing a study, it can lead to misguided research. Simply put, we want to suggest that serious thinking is always needed to clarify the reasons for the study; this thinking is as important as the thinking that goes into the design of the study. In fact, it may be more important. There are inherent advantages to questioning one's purposes even after they are articulated. Thinking about research before conducting it is, we suggest, the bottom line.

We use the following example, "A Study of the Public's Knowledge of Science," to suggest how using the typology can help researchers to probe their own thinking. We show how the typology, as a "thinking tool," helps investigators to formulate research questions and make design decisions most effectively because they take "purpose" into account. In this situation, researchers are interested in pursuing the public's understanding of science.

A Study of the Public's Knowledge of Science

Scientists were interested in public understanding of scientific advancements. How does the public become knowledgeable about science? To explore this phenomenon, the researcher gathered evidence from the popular media for the 6-month period of time from January 1, 2000, to June 30, 2000: every issue of the New York Times and USA Today; transcripts of all ABC, NBC, and CBS evening news broadcasts; transcripts of every 1-hour CNN Headline News summary from 10 to 11 p.m.; and all issues of Time, Newsweek, and U.S. News and World Report.
In the first phase of coding, the researcher binary coded each individual story item according to the following:

   Scientific topic is the major theme: yes/no
   Scientific topic is the secondary theme: yes/no

In the second phase of coding, the researcher selected only those story items that were coded "yes." This data set was made up of all news stories in which science was either the primary or the secondary theme. For each of these data units, the researcher coded on the following dimensions:

   Science category: Physics, Chemistry, Medicine, Biology, Zoology, Astronomy, Other science areas
   Message: Report research findings, Announce a new study, Apply scientific findings, Refute earlier findings, Alert public to danger, Biography of scientist, Other message
   Author: Scientist, Journalist, Educator, Other

The researcher used descriptive statistics (frequency counts) to portray the proportion of scientific stories that are reported in the various media: newspapers, television, and magazines. Subcategories within these larger groups were displayed as well. Three research hypotheses were tested. First, the extent to which the sources of scientific information differed across the various media was tested. Second, the extent to which the public is exposed to various branches of science (e.g., physics, chemistry) was tested. Third, the content of the messages in the stories was tested. (A brief illustrative sketch of this two-phase coding appears after the typology walkthrough below.)

Having this brief overview, we might think through this study's potential purposes as though we were contemplating such an investigation. What was the purpose (or what were the purposes) for examining the question "How does the public become knowledgeable about science?" Before designing the study, we could use the typology as a checklist, checking our intent against potential purposes in these nine categories. Moving from the question to possible purposes—and back and forth again—would be an iterative process. Proceeding in this way is a good way to understand the typology but admittedly a very bad way to conceptualize authentic research logic due to a chronology problem that is hard to avoid. Applying the checklist a posteriori is inconsistent with the chain of reasoning depicted in Figure 6.2. Because we are applying the typology here for illustrative purposes, we can explain that disconnect and justify disrupting it. The logic shown in Figure 6.2 indicates that the purpose of a research study (the shaded box labeled "PURPOSE") has an impact on the researcher's perspective on the study (the box labeled "LENS") and affects the focus of the inquiry—the question itself (the shaded box labeled "RESEARCH QUESTION"). Because the process is iterative, the question can have an impact on the purpose as well (thus the reverse arrows between the research question and the purpose). Therefore, we are retrospectively considering what potential purposes for this study might have been. In actuality, the researcher would think through his or her intent first. Yet proceeding through the checklist in this way remains a potential way to grasp its value as a thinking tool when planning a study.

To return to this process, we ask what the purpose (or purposes) was (or were) for a study investigating the question "How does the public become knowledgeable about science?" We begin traversing through the typology, testing possible intentions and possible reasons why we might be contemplating this study.
In the in-text table that follows, each entry of the typology is paired with possible purposes we might test out in our thinking. The comments paired with each entry are merely example questions and comments that the researcher might have asked himself or herself; they are not exhaustive and are only presented as possibilities. While proceeding through this "thinking tool," the researcher can make notes and raise questions and think through the question "What is the one purpose (or what are the several purposes) that might be driving the study?"

Application of the Typology to One Example: Studying the Public's Knowledge of Science

1. Predict.
   a. Build general laws.
   Application: Do we want to be able to explain how the public uses science? Do we want to be able to predict from which sources they glean information? Why do we want to do this?

2. Add to the knowledge base.
   a. Confirm findings.
   b. Replicate others' work.
   c. Reinterpret previously collected data.
   d. Clarify structural and ideological connections between important social processes.
   e. Strengthen the knowledge base.
   Application: Have there been other studies we want to confirm or disconfirm? Are we invested in only added knowledge without specific practical application? If so, which knowledge base? What knowledge? The knowledge of the market? The knowledge of science? We may go back to No. 1 at this point, especially probing "why." This may tell us what literature to go to. Do we want to critique the literature for gaps? Do we want to review methods used in prior studies to determine where weaknesses are? Based on that type of review, we would design our study differently.

3. Have a personal, social, institutional, and/or organizational impact.
   a. Deconstruct/reconstruct power structures.
   b. Reconcile discrepancies.
   c. Refute claims.
   d. Set priorities.
   e. Resist authority.
   f. Influence change.
   g. Promote change.
   h. Promote questioning.
   i. Improve practice.
   j. Change structures.
   k. Set policy.
   Application: Are there social, institutional, or organizational dynamics we want to influence? Are there power structures in the media we want to challenge? Are television networks gaining too much authority at the expense of newspapers? Are we interested in influencing the regulation of the telecommunications industry? Are we curious about the racial and gender demographics of public policy decision makers? Do we intend to use our data to lobby for change? Are we disturbed that families at low income levels have less access to and knowledge of science that could improve their lives? Each of the preceding questions could generate a separate study. Each question signals a different set of stakeholders, audiences, methods (qualitative, quantitative, or mixed), and data sources. In the beginning, one simple study can, through the typology, illuminate a possible "thematic research agenda." Each segment of research (metaphorically like a periodic table of the elements) gets "filled in."

4. Measure change.
   a. Measure consequences of practice.
   b. Test treatment effects.
   c. Measure outcomes.
   Application: Are we curious about trends in the media's coverage of science? Do we have comparative data from an earlier era? Do we want to measure the comparative use of television over newspapers in a search for statistical differences? Why are we interested in this type of change? Why these trends? How good are our data likely to be? These probes help us to identify variables, measures, possible data sources, and limitations.

5. Understand complex phenomena.
   a. Understand phenomena.
   b. Understand culture.
   c. Understand change.
   d. Understand people.
   Application: Do we intend to be able to reveal the story of how individuals access science through the media? Are we interested in detailed descriptions of people's lives vis-à-vis the media and science? Because both science and the media are dominant in contemporary life, do we want to contextualize their role in the wider culture? Do we want to understand the "meaning" of science in everyday life and the "meaning" of media in providing a scientific understanding to specific people? Would illuminating a case of incurable disease or global warming tell the story best? What stakeholders are we concerned about?

6. Test new ideas.
   a. Test innovations.
   b. Test hypotheses.
   c. Test new ideas.
   d. Test new solutions.
   Application: Is the relationship of people's scientific knowledge to contemporary media a new idea that we want to test? Do we have some intent to ameliorate scientific misunderstanding by testing an innovative idea that popular media may play a role in debunking scientific myths? Do we want to test a new way to teach science to adults? How is our thinking different from No. 1? This purpose appears quantitative but has heuristic aspects that may be inductive.

7. Generate new ideas.
   a. Explore phenomena.
   b. Generate hypotheses.
   c. Generate theory.
   d. Uncover relationships.
   e. Uncover culture.
   f. Reveal culture.
   Application: Is our investigation an exploratory one? Do we intend to take No. 5 a step further? In addition to enhancing our understanding of science and media, can we go further and generate a theory and hypotheses about their relationship? Do we want to incorporate what we learned from No. 5 and obtain another perspective? Do we want to be completely open to these phenomena until their meaning is naturally revealed to us?

8. Inform constituencies.
   a. Inform the public.
   b. Heighten awareness.
   c. Use public relations.
   d. Enlighten.
   e. Hear from those who are affected by treatment/program.
   f. Describe the present.
   g. Comply with authority.
   Application: Is our intent to heighten the public's awareness about how scientific knowledge is dispersed? Do we want to hear from the members of the public about how science influences their lives to raise their awareness about what they know and do not know? Are we complying with some agency that accredits us, that is, an organization that regulates us and demands that we show that we serve the public appropriately? (See Note 1.)

9. Examine the past.
   a. Interpret/reinterpret the past.
   b. Acknowledge past misunderstandings.
   c. Reexamine tacit understandings.
   d. Examine social and historical origins of current social problems.
   Application: Are we interested in examining how science and the media have related in the past? Are we intending to show how the public gleaned scientific understanding during the history of the United States up to the present day? Are we interested in showing how the public became knowledgeable about medicine during the late 1800s, for example? Are we interested in how members of the public view the teaching of evolution and creationism in public schools, for example, so as to understand the changes over time in their scientific points of view?
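To make the two-phase coding in the example study concrete, the following is a minimal sketch of how it could be carried out, written in Python. The chapter specifies no software, so the tool choice, the record structure, and the sample story items below are our own illustrative assumptions, not part of the original study.

```python
from collections import Counter

# Hypothetical story records from the media sample (fields and values
# are illustrative assumptions, not the study's actual data).
stories = [
    {"outlet": "newspaper", "major_theme_science": True,
     "secondary_theme_science": False, "category": "Medicine",
     "message": "Report research findings", "author": "Journalist"},
    {"outlet": "television", "major_theme_science": False,
     "secondary_theme_science": True, "category": "Astronomy",
     "message": "Announce a new study", "author": "Scientist"},
    {"outlet": "magazine", "major_theme_science": False,
     "secondary_theme_science": False, "category": None,
     "message": None, "author": None},
]

# Phase 1: binary coding -- keep only items in which science is
# either the major or the secondary theme.
science_stories = [
    s for s in stories
    if s["major_theme_science"] or s["secondary_theme_science"]
]

# Phase 2: categorical coding, summarized with the frequency counts
# (descriptive statistics) the example study reports.
for dimension in ("outlet", "category", "message", "author"):
    counts = Counter(s[dimension] for s in science_stories)
    total = sum(counts.values())
    print(dimension)
    for value, n in counts.most_common():
        print(f"  {value}: {n} ({n / total:.0%})")
```

The same counts, broken down by outlet type (newspaper, television, or magazine), would yield the proportions the researcher reported and the comparisons behind the three hypotheses.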
Having proceeded through the typology, one can see that this simple research question—"How does the public become knowledgeable about science?"—can be embedded within a myriad of purposes. While some questions in the analysis that has just been presented might not be new, they are usually considered haphazardly, if at all. The advantage of the typology is that it is systematic. The research question alone is insufficient to substantially fuel the decisions a researcher faces in designing the study. The very nature of asking questions from various "purposeful perspectives" sensitizes the researcher to make good design decisions, including selecting the most relevant variables and knowingly facing limitations and underlying assumptions. If the researcher's purpose is to "measure change" in the ways the public obtains scientific information, then the methodology would be quite different from what it would be if the purpose is to gain deeper understanding about the public's view of science, that is, "understanding complex phenomena." As a heuristic, the typology is generative; the categories and questions here can elicit more categories and more questions, and frequently they will lead to mixed methods. As we have claimed repeatedly, the concept is a way of thinking, the typology is tentative, and the process is one that will strengthen the researcher's thinking (Newman & Ridenour, 2000).

The typology might help to clarify other concerns as well. Researchers can easily launch a study down one path toward data and answers that do not satisfy the real purposes they have failed to consider. The typology begins to force researchers to think in multiple dimensions; they have to think of possibilities or options. Methodologies then become substantive, safe, and trustworthy because the purposes they are to serve are better grounded. Debates over the past decade have raged around the question "Is it research?" as scholars consider what have emerged as nontraditional representations of research in the forms of novels, poems, or photographs. But such a question is the wrong question. The answer to "Is it research?" comes from asking whether or not it serves research purposes. If it does, then the answer is "yes." The dramatic reading may indeed "be research," and the double-spaced typed report filled with data and graphs might not "be research." It is impossible to tell when considering only the form of the representation. Labeling something "research" requires knowing what purposes it serves.

The typology has advantages for the research consumer. It may be a tool that sensitizes the critical reader, first, to identify more clearly what is research and, second, to understand why the researcher conducted the study. Readers may approach research reports in a deeper and more thoughtful way; they will be more enlightened about the truth value of researchers' work and whether there are strong links between the studies and their implications. The circle of a study is complete when purpose (the genesis) links with implications (the conclusions).

There are two other ways to conceptualize the typology of purpose. The first is through a loosely constructed iterative flow of ideas (Figure 6.3). In this figure, one purpose flows into, overlaps with, and generates other purposes. The iterative flow represents the thinking process.
On the one hand, we suggest the utility of the typology in linking ideas together in a coherent and holistic research pattern from "examining the past," "discovering" and "testing" and "understanding new knowledge," on through all nine categories to "prediction," as shown in Figure 6.3. This figure shows the typology from a distance, representing the "big" ideas at each level and how they might be connected. Researchers who create a thematic research agenda or a program of studies in an area frequently build a research base that might be represented in this way.

Figure 6.3. Conceptualizing the Typology as Categories That Flow and Connect

1. Predict: using all the things we know in this knowledge "base" to explain a field and what might yet unfold in the future (so that the historians can describe these things later; return to No. 1).
2. Add to the knowledge base: organizing all the things we know into a "base" of knowledge.
3. Have a personal, social, institutional, and/or organizational impact: struggling with the complex environments we experience, particularly when we know that some things we know and experience are not just, fair, and in keeping with our ethical or professional purpose.
4. Measure change: measuring what happens when we change things.
5. Understand complex phenomena: understanding what things we now experience and know.
6. Test new ideas: testing these new things.
7. Generate new ideas: discovering some new things.
8. Inform constituencies: telling what things we know to those who need to know them.
9. Examine the past: what things we already know from the past.

Questions vary in purpose across one's research career. One's area of expertise might be the American family, for example, building a base of studies on this topic—a base that began by inquiring into variables related to income levels. Then, perhaps questions guided by a purpose to test public policies related to welfare are investigated. And deeper and more varied purposes continue the agenda. Figure 6.3 attempts to depict this larger perspective.

In Figure 6.4, a second conceptualization of the typology is presented. Columns 1 and 2 are meant to show the iterative nature of considering the research question and the research purpose. This iterative process results in decisions about methods, depicted in columns 3 and 4. In column 3, we attempt to show that traditionally research intents have been aligned with research paradigms in a one-to-one fashion. Contemporary appreciation of the complex nature of social science research (multiple purposes and multiple stakeholders) continues to move away from this segmented framework and toward column 4, where a more holistic appreciation of the link between purpose and methods leads, in some cases, to methods beyond the traditional ones.

Figure 6.4. The Iterative Process Between Research Purpose and Research Question as a Prerequisite to Research Methods

Columns 1 and 2 depict the iterative process between purpose and questions; columns 3 and 4 depict decisions about methods.

Column 2 (Research Questions), a single note spanning all rows: Research questions are impossible to represent on this table. The iterative process between the purpose (column 1) and the research question (column 2) is the key to deciding what methods to use. The research purpose can be reflected in many questions, as demonstrated in the example of the public's knowledge of science. These research questions should not be interpreted independent of the purpose as the researcher decides which methods are to be used. Linking the purpose to the question is an iterative process. The goal is to acknowledge all the possible purposes and all possible questions, and to make decisions about methods contingent on this process. Traditionally, some purposes have been linked to either qualitative or quantitative research (column 3). Mixed methods opportunities exist as options for many purposes; these are shown in column 4.

Column 1 (Purpose of the Research), with column 3 (traditionally, these purposes have led to these approaches) and column 4 (opportunities for moving beyond the traditional approaches to a holistic, mixed methods approach):

1. Predict. Traditionally: quantitative research (traditional "scientific method," positivistic). Beyond: traditional.
2. Add to the knowledge base. Traditionally: quantitative research (generalizable). Beyond: traditional, plus qualitative (mixed) research can aid in developing theory to add to the tentative knowledge base of theories to be tested.
3. Have a personal, social, institutional, and/or organizational impact. Traditionally: qualitative research (context-bound; value-laden; politically contextualized). Beyond: traditional, plus quantitative (mixed) research can be used to test hypotheses related to values idiosyncratic to the context.
4. Measure change. Traditionally: quantitative research (determining treatment effects). Beyond: traditional.
5. Understand complex phenomena. Traditionally: qualitative research (holistic; inductive studies of settings, cultures, and people). Beyond: traditional, plus quantitative (mixed) research that uses multivariate techniques, for example, and takes into account multiple stakeholders.
6. Test new ideas. Traditionally: quantitative research (hypothesis testing). Beyond: traditional, plus qualitative (mixed) research such as focus groups that can "float" new ideas on a tentative basis but not test them for confirmation.
7. Generate new ideas. Traditionally: qualitative research (holistic; naturalistic; hypothesis generating). Beyond: traditional.
8. Inform constituencies. Traditionally: quantitative or qualitative descriptive research (mixed methods). Beyond: traditional; mixed methods.
9. Examine the past. Traditionally: qualitative research (historiography). Beyond: traditional.

NOTE: See the research example ("The Public's Knowledge of Science") in the narrative, which demonstrates that this is an idiosyncratic process for each research study.

These two conceptualizations (Figures 6.3 and 6.4) are heuristics; they merely demonstrate other frameworks that explain the typology and other steps that might help a researcher to think through the purposes or the reasons why he or she is conducting a study.

♦ Relationship Between the Typology of Purpose and Mixed Methods
Any one study may be conceived of from a variety of perspectives. By using the typology, a researcher can initiate sets of questions about his or her purposes in a systematic fashion, which will facilitate the analyses of the research question under investigation (i.e., studying the question by moving through the typology to see where it might fit and whether it might fit into more than one category). This systematic process helps to identify both the types of information needed for the study and the most appropriate strategies (research methods) to use in gathering that information. The process entails first studying the research question and then refining the question at a deeper and more substantive and purposeful level, with a greater awareness of potential multiple purposes. The more complex the purposes, the more likely that mixed methods will be necessary.
By making the researcher aware of these considerations, he or she may choose to design, carry out, and interpret the research study for one purpose from among the multiple purposes that exist. We are not assuming that there are right answers; rather, we are assuming that asking appropriate questions will improve the likelihood of doing research that has greater meaning and is more apt to lead to valuable implications. We can return now to our first example: "Is Teaching Strategy A (lecture and discussion) a better choice for the general math class in a middle school than Teaching Strategy B (computer-based problem solving)?" To determine what research methods are appropriate, one needs to know the purpose or to ask the following questions. A better choice based on what? Parent satisfaction? Teacher expertise? Student test scores? Cost of materials? Let us assume that, in moving through the typology with the stakeholders, we determine that two purposes drive this question: the need to raise students' performance on standardized tests (No. 4) and the need to diversify the curriculum away from solely Eurocentric materials (No. 3). To fulfill the first purpose, we might design an experimental study, collecting pretest and posttest quantitative data in the form of students' test scores before and after the implementation of these two instructional treatments. In addition, to fulfill our second need, we might plan to conduct a textual analysis of the materials used in both strategies for their ethnic representations and to interview parents from various ethnic backgrounds after they review the materials to determine their satisfaction with the ethnic content. This question, then, has led us to use mixed methods to conduct our investigation. Delving even deeper, our study might make use of the power of computer programs to incorporate qualitative "data" into our quantitative analysis. For example, according to Bazeley (Chapter 14, this volume), we might then cross-index the satisfaction levels of parents from different ethnic groups with student test scores.
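To make that cross-indexing concrete, the following is a minimal sketch of how coded interview data and test scores might be brought together in one analysis. The data, codes, and column names are invented for illustration; they are not taken from the study described above or from Bazeley's chapter.

```python
# Hypothetical illustration: cross-indexing coded qualitative data
# (parents' satisfaction with the ethnic content of the materials)
# against quantitative outcomes (students' test-score gains).
import pandas as pd

records = pd.DataFrame({
    "family":       ["A", "B", "C", "D", "E", "F"],
    "ethnic_group": ["Group 1", "Group 1", "Group 2",
                     "Group 2", "Group 3", "Group 3"],
    # "satisfaction" holds codes assigned while analyzing the parent
    # interviews; "score_gain" is posttest minus pretest.
    "satisfaction": ["high", "low", "high", "high", "low", "high"],
    "score_gain":   [12.0, 3.5, 9.0, 11.0, 2.0, 8.5],
})

# Mean test-score gain for each combination of ethnic group and
# coded satisfaction level.
print(records.pivot_table(index="ethnic_group",
                          columns="satisfaction",
                          values="score_gain",
                          aggfunc="mean"))
```

Even this small cross-tabulation makes the mixed character of the design visible: the rows and columns come from qualitative coding, while the cell values are quantitative.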
Discourse about purpose is challenging. Thinking through the reasons for carrying out a study helps to form the question itself. Novice researchers frequently ask how to write a research question and want to know ways of identifying "good" research questions. The best response to this need is to take these researchers through the process of thinking through issues of purpose such as the following. Why are you doing this? Who needs to know what happens from your investigation if you carry it out? What has been done by others who have explored the same terrain? Who cares what you might or might not find? Struggling with these quandaries helps these researchers, first, to structure the question and, second, to write a clear rationale or justification for their research. At the same time, clarifying the lens, or the perspective, through which the researcher will conduct research is important to his or her thinking logically, coherently, and scientifically. The researcher is a dynamic component of the research study. Perspectives and perceptions might shift as the study progresses. How vigilant is the researcher in attending to these shifts? Serendipity, unanticipated outcomes, and unplanned events all can affect the study process. The purpose can change, the question can change, and the methods can change. But the researcher must initiate each study with a singular lens and a clear purpose—an intent that is grounded and rooted in meaning. We end this chapter where we began. Within that rapidly changing and turbulent context, social science researchers need to reinforce purposefulness in their research so that the needs of all stakeholders—children, families, professionals, and the policy audiences—are best served. In so doing, the complex nature of research becomes apparent, and it becomes clear that no one methodological approach is sufficient. We need to train a new generation of researchers who are comfortable in looking beyond a single technique. We need researchers who are competent in applying mixed methodology when the purpose and questions reflect the need. We believe that the position presented in this chapter demonstrates that even questions that appear to be simple must be examined in terms of the typology of purpose, which may clarify the complexity of the questions and indicate the need for using mixed methods.

■ Note

1. This concept is not too dissimilar from the concept of "multiple stakeholders" (Weiss, 1984) in the program evaluation literature. In this body of literature, evaluations tend to be sensitive to the different needs (or purposes) of various stakeholders—groups that have a vested interest in the program being evaluated. Different types of information must be gathered to answer different questions of different stakeholders. Potentially, different methodologies are employed for meeting those different stakeholder needs.

■ References

Altheide, D. L., & Johnson, J. M. (1994). Criteria for assessing interpretive validity in qualitative research. In N. K. Denzin & Y. S. Lincoln (Eds.), Handbook of qualitative research (pp. 485-499). Thousand Oaks, CA: Sage.

Benz, C. R., & Newman, I. (1986). Qualitative-quantitative interactive continuum: A model and application to teacher education evaluation. Resources in Education. (ERIC Clearinghouse on Tests, Measurement, and Evaluation, ED 269 405)

Delgado-Gaitan, C. (2000). Researching change and changing the researcher. In B. M. Brizuela, J. P. Stewart, R. G. Carrillo, & J. G. Berger (Eds.), Acts of inquiry in qualitative research (pp. 389-410). Cambridge, MA: Harvard Educational Review.

Finn, J. D., & Achilles, C. M. (1990). Answers and questions about class size: A statewide experiment. American Educational Research Journal, 27, 557-577.

Haller, E. J., & Kleine, P. F. (2001). Using educational research: A school administrator's guide. New York: Longman.

Janesick, V. J. (2000). The choreography of qualitative research design: Minuets, improvisations, and crystallization. In N. K. Denzin & Y. S. Lincoln (Eds.), Handbook of qualitative research (2nd ed., pp. 379-400). Thousand Oaks, CA: Sage.

Kerlinger, F. N. (1964). Foundations of behavioral research: Educational and psychological inquiry. New York: Holt, Rinehart & Winston.

Lincoln, Y. S., & Guba, E. G. (1985). Naturalistic inquiry. Newbury Park, CA: Sage.

Newman, I., & Benz, C. R. (1998). Qualitative-quantitative research methodology: Exploring the interactive continuum. Carbondale: Southern Illinois University Press.

Newman, I., & Ridenour, C. S. (2000, November). Typology of purposes: A tool for thinking that leads to research questions, design, methods, and implications. Paper presented at the meeting of the Association for the Advancement of Educational Research, Ponte Vedra, FL.
Tashakkori, A., & Teddlie, C. (1998). Mixed methodology: Combining qualitative and quantitative approaches (Applied Social Research Methods, No. 46). Thousand Oaks, CA: Sage.

Weiss, C. H. (1984). Toward the future of stakeholder approaches in evaluation. In R. F. Connor, D. G. Altman, & C. Jackson (Eds.), Evaluation studies review annual (Vol. 9). Beverly Hills, CA: Sage.

PRINCIPLES OF MIXED METHODS AND MULTIMETHOD RESEARCH DESIGN

♦ Janice M. Morse

The goal of social science research is to understand the complexity of human behavior and experience. The researcher's task—to understand, describe, and explain the reality of this complexity—is limited by our research methods. But most of all, it is restricted by the methodological repertoire of each researcher and his or her knowledge and skill in using these research methods. While specific research methods enable us to describe, understand, and explain the complexity of living by providing us with various perspectives, different methods are best designed for, and used to answer, particular types of questions. They provide us with different perspectives that enable us to best answer individual questions. By combining and increasing the number of research strategies used within a particular project, we are able to broaden the dimensions and hence the scope of our project. By using more than one method within a research program, we are able to obtain a more complete picture of human behavior and experience and thus to hasten our understanding and achieve our research goals more quickly. Research is a process—a puzzle-solving process. We come to understanding piece by piece, one step at a time.

AUTHOR'S NOTE: This research was funded by an MRC Scientist and AHFMR Health Scholar Award.

BOX 7.1 Terminology

Core: This is the base project into which the other data, strategies, or projects fit.

Dominance: This is the method that leads or directs inquiry at any particular point. Thus, within a qualitatively-driven research program, a quantitative method may be dominant at some particular stage, or vice versa in a quantitatively-driven project.

Methodological integrity: This is the rigor of a project, maintained by adherence to the assumptions, strategies, data appropriateness, adequacy, and so forth that are consistent with each particular method.

Methodological triangulated design: This is a project that is composed of two or more subprojects, each of which exhibits methodological integrity. While complete in themselves, these projects fit together to complement or enable the attainment of the overall programmatic research goals.

Mixed method design: This is the incorporation of various qualitative or quantitative strategies within a single project that may have either a qualitative or a quantitative theoretical drive. The "imported" strategies are supplemental to the major or core method and serve to enlighten or provide clues that are followed up within the core method.

Multimethod design: This is the conduct of two or more research methods, each conducted rigorously and complete in itself, in one project. The results are then triangulated to form a comprehensive whole.

Sensitizing strategy: This is a single project in which multiple strategies are used. One or more strategies form the major mode of data collection. Sensitizing strategies are those strategies of data collection that supplement the major mode and may be either qualitative or quantitative strategies.
They are not used as a stand-alone project but rather are used to generate clues that are confirmed within the project using another strategy.

Sequential triangulation: These are projects conducted one after another to further inquiry, with the first project informing the nature of the second project. These may or may not use a method different from the first project.

Simultaneous triangulation: These are projects conducted at the same time, with the results compared or contrasted on completion.

Supplemental data: These are data that are collected to enrich or confirm the original data.

Theoretical drive: This is the overall direction of the project as determined from the original questions or purpose and is primarily inductive or deductive. While quantitative inquiry may be placed within a project with an inductive theoretical drive, the theoretical drive remains inductive. The converse is also true for a deductive theoretical drive.

Triangulation: This is the combination of the results of two or more rigorous studies conducted to provide a more comprehensive picture of the results than either study could do alone. It was originally applied to qualitative inquiry by Goffman in 1974 (see Goffman, 1989).

The researcher's comprehension of the phenomenon increases as data unfold, discrepancies are resolved, concepts are understood, and interconnections are made. In this way, the theory develops. Analysis, whether qualitative or quantitative, provides us with a progressive or an incremental understanding of reality. Knowledge is attained as pieces of information from various projects verify each other, or contradict earlier findings and demand further attention, thereby extending the developing model. These units of understanding may be part of a single project or part of several linked but self-contained projects that fit under the rubric of one general problem, topic, or research program. In this chapter, I discuss the process and procedures for combining research strategies both within a single project (with methods to answer a particular question) and among different research projects as a series of complementary projects or a research program aimed at addressing one overall topic. In this context, when strategies derived from qualitative and quantitative methods are used within a single project, it is referred to as a mixed methods design. Qualitative and quantitative projects that are relatively complete, but are used together to form essential components of one research program, are referred to as a multimethod design. We must, however, remain aware that the ad hoc mixing of strategies or methods (i.e., "muddling methods" [Stern, 1994]) may be a serious threat to validity as methodological assumptions are violated. Thus, the purpose of this chapter is to discuss the principles of, and the strategies for, conducting research with either the mixed methods or multimethod design. Major terms are defined in Box 7.1. Using examples, the process is explored in a step-by-step manner, and the strengths and weaknesses of each design are examined. Finally, these designs are explicated by dissecting published research to examine how the theoretical drive and qualitative or quantitative methods were used in comparison with the knowledge required and the pacing of the methods and how results were combined to answer the research question.
♦ Mixed Methods Design

We first discuss the process of incorporating into a single project strategies that do not normally form a part of a particular research method.1 It may be necessary to import these strategies, not normally described in basic texts as a component of a particular method, because of the nature of the phenomenon being studied, the context, or special circumstances for participants. When using mixed methods, it is important that methodological congruence be maintained, that is, that all of the assumptions of the major method be adhered to and that components of the method (such as the data collection and analytical strategies) be consistent. When speaking of mixed methods design, we are not talking about mix-and-match research (with strategies liberally selected and combined); rather, we are talking about using supplemental research strategies to collect data that would not otherwise be obtainable by using the main method and incorporating these data into the base method.

QUALITATIVE INQUIRY AND MIXED METHODS DESIGN

Mixed methods design is a standard part of the method in each of the major qualitative research designs. Ethnography, for example, consists of fieldwork (informal interviews and participant observation), formal interviews (unstructured, open-ended, or semistructured interviews, surveys, and techniques of componential analysis), and a diary (researcher's reflections/interpretations). It also includes "other" data, defined as any other sources that the ethnographer sees fit, such as documents, psychometric tests or scales, biological measurements, analysis of food, time-motion studies, and whatever will help the ethnographer to answer the research question. Although grounded theory is fast becoming a method based only on interview data, Benoliel (1996) recently made a plea for observational data to be reincorporated as a standard data collection strategy. Even more broadly, Glaser (1978) stated that "all is data," following Goffman's (1989 [published posthumously]) example to give concepts the broadest application (see Fine & Smith, 2000). In phenomenology, the primary data are derived from conversations or interviews, and these data are then reflected against the phenomenological literature and other experiential accounts, including fiction, poetry, film, and one's own experience (van Manen, 1990).

QUANTITATIVE INQUIRY AND MIXED METHODS DESIGN

Quantitative projects, on the other hand, appear to be better delineated and more focused than qualitative methods; they are more reliant on a single method and less likely to be used with additional data collection strategies. Occasionally, single methods will be bolstered with the simultaneous use of focus groups or an observational component or, sequentially, with an instrument developed, for instance, from interview data. These projects are described as having triangulated designs (Breitmayer, Ayres, & Knafl, 1993). However, because of the interdependency of these different data collection strategies, it is preferable to consider these studies as one method—albeit a mixed method. Because these "supplementary" data provide only a glimpse of another perspective, and are confirmed and verified in the base project and not independently from the main study, triangulation is an inappropriate term. Mixed methods design, therefore, is a term that is applied when research strategies are used that are not normally described as a part of that design.
For instance, in quantitative inquiry, it may be the incorporation of an observational component (a non-numerical fieldwork component) or supplementary open-ended questions at the end of a Likert scale; in qualitative inquiry (e.g., in grounded theory), it may involve the incorporation of strategies from ethnography to add a cultural dimension or the addition of quantitative measures. What is the role of these supplemental strategies in the project? In both quantitative and qualitative research, these strategies increase the scope and comprehensiveness of the study. In a quantitative study, these strategies aid in the interpretation of data in the core project, providing explanations for unexpected findings or supporting the results. In a qualitative study, the supplementary strategies serve one of three functions. First, they may be used to identify notions, ideas, or concepts that are then incorporated into the main study. Second, they may provide different information or insights as to what is happening in the data as well as different explanations or ideas about what is going on—ideas that are subsequently verified within the data or used to guide subsequent interviews or the collection of additional information to verify emerging theory. Third, they may be used to reexamine a category in the main study from a different perspective. It is important to remember that, in both qualitative and quantitative studies, the supplemental data sets are mutually interdependent. For instance, in qualitative research, if the main method used was grounded theory, then the unstructured or open-ended interviews may be supplemented by one or two focus groups. Data from these focus groups are not saturated and therefore cannot stand alone. These focus group data are intelligible and interpretable (and publishable) only as they are linked to the interview data from the main grounded theory project. In qualitative inquiry, supplemental data may be quantitative—the results of psychometric testing, for instance—and these results are then incorporated into the emerging model, providing a richer explanation. Similarly, to use a quantitative example, open-ended or unstructured interviews that accompany a quantitative survey are incomplete by qualitative standards and not publishable apart from the survey data. Often in quantitative inquiry, the supplemental observational or interview data may be transposed by coding from textual to numerical data so that they may be integrated more firmly into the analysis. A coding scheme may be developed to numerically code the participants' actions or the interview responses in the data.
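As a minimal sketch of what such a transposition can look like, the fragment below converts coded interview responses into numbers that a statistical analysis could then use. The coding scheme and responses are hypothetical, invented purely for illustration.

```python
# Hypothetical coding scheme: each substantive response category
# observed in the interviews is assigned a numerical code.
CODING_SCHEME = {
    "refused the treatment": 0,
    "accepted with reluctance": 1,
    "accepted willingly": 2,
}

responses = [
    "accepted willingly",
    "refused the treatment",
    "accepted with reluctance",
    "said nothing either way",   # uncodable under this scheme
]

# Transpose text to numbers; responses the scheme cannot classify are
# kept as None rather than forced into a category, since forcing them
# would undermine the assumptions behind the quantitative analysis.
coded = [CODING_SCHEME.get(r) for r in responses]
print(coded)  # [2, 0, 1, None]
```

Keeping the uncodable response visible anticipates the point made under Principle 3 below: textual data should enter a quantitative analysis only when the quantification is genuinely meaningful.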
PRINCIPLES OF MIXED METHODS DESIGN

When using mixed methods, the major design principles to be considered are to (1) recognize the theoretical drive of the project, (2) recognize the role of the imported component in the project (i.e., how it is to inform the base project), (3) adhere to the methodological assumptions of the base method, and (4) work with as few data sets as possible. The base project is the project that provides the overall theoretical scheme into which the findings of other projects fit or which they complement.

Principle 1: Recognize the theoretical drive of the project. When conducting a single project, awareness of the theoretical drive is important. If the purpose of the research is to describe or discover, to find meaning, or to explore, then the theoretical drive will be inductive. The method most commonly used will be qualitative, and the outcome will be thick description as in phenomenological or narrative inquiry or some level of theory as obtained from ethnography or grounded theory. Quantitative methods may also be used for exploratory purposes with an inductive theoretical drive (sometimes referred to as "fishing trips"), such as exploratory factor analysis or a survey. The direction of the researcher's thinking when conducting a single study might not be continuously inductive—abductive thinking may be used to verify hunches or conjectures—but overall the major theoretical drive will be inductive. If the purpose of the research is to confirm (i.e., to test a hypothesis or a theory), or to determine the distribution of a phenomenon, then the theoretical drive is deductive and the method used is usually quantitative. Again, the direction of inquiry might not always remain deductive; induction may be used at times, but overall the theoretical drive will remain deductive. Recognizing the direction of the theoretical drive has important ramifications for some crucial design issues such as sampling. If the researcher is working inductively with a qualitative sample, then the sample is small and purposely selected and therefore does not meet the requirements of adequacy and appropriateness necessary for quantitative strategies or measures. If quantitative measures are used within a qualitative study, where does the researcher obtain the quantitative sample necessary to make sense of the data? On the other hand, quantitative samples are too large and usually have been randomly selected. If the researcher decides to use a qualitative strategy, then how is a purposeful qualitative sample selected from the larger group? Recall the edict that the researcher must retain the assumptions of each paradigm. Therefore, if the main study is qualitative, and a quantitative component is being sought, then a separate randomized sample must be added. Or, if the instruments are being administered to the qualitative sample, then external normative values must be available for the interpretation of the data. If the main project is quantitative and a qualitative component is added, then the sample must be purposefully selected from the main study. These sampling strategies are discussed later.

Principle 2: Recognize the role of the imported component in the project. In a single project, the main project forms the theoretical foundation, and information obtained from other strategies will be used to supplement or inform the main project. A researcher may, for instance, notice indications that important information is being missed if he or she adheres solely to the current data collection strategy. For example, when interviewing teachers about children's styles of learning, one teacher may describe a unique but important style. Because this phenomenon appears so rarely in the data—perhaps because the other teachers are unaware of the phenomenon—it may be necessary to introduce an observational component into the study to actually observe what the more experienced teacher was describing. Thus, the information obtained may then be verified outside the current data set using observations, or it may be verified within the core project during subsequent and more direct interviewing. Either way, the investigator must be aware of the interaction of the two components, and rigor must be maintained so that the project will not be jeopardized.
Principle 3: Adhere to the methodological assumptions of the base method. It is important to be constantly aware so as not to violate the methodological assumptions of the core method but, at the same time, to respect the assumptions that underlie the supplemental strategy being used. For example, when using qualitative data, researchers are often tempted to count—to know exactly how much or how many—which gives the appearance of rigor. But this is actually a perilous activity if assumptions are not adhered to. Ask "Were all of the participants asked the same question?" If not, then such data cannot be quantified in a meaningful way. What is the significance of such quantification? For instance, does knowing word length, sentence length, and the number of times a word was used add to our understanding of the research question? Is it even a sensible analytical strategy to use? Conversely, quantitative researchers sometimes find themselves with unsolicited comments unexpectedly written in the margins of questionnaires or surveys. While conducting a content analysis of these responses is tempting, these comments are not good data; rather, they are a serendipitous indicator that something is wrong with the questionnaire. Evidently, the questionnaire was invalid and did not capture the experience or ask the right questions, so that the respondents felt compelled to use the margins to give the researcher the information they wished to convey. These comments indicate a serious problem with validity. Rather than analyzing these comments as a qualitative component, a qualitative study should be conducted to find out more accurately and comprehensively what is "going on."

Principle 4: Work with as few data sets as possible. If possible, incorporate data obtained from the supplemental strategy into the core project. If working quantitatively, this may mean transposing qualitative textual data into numerical data and incorporating them into the statistical analysis of the core project wherever appropriate. If a quantitative project is being supplemented with qualitative data, then these data are often in the form of case studies used to inform certain aspects of the quantitative analysis at particular points or to illustrate the quantitative findings. They illuminate the quantitative research, often providing important context.

Summary. When using mixed methods within a single project, remember that the main analysis takes place primarily within the core strategy. The supplemental data—or rather the ideas generated from the supplemental data—inform the analysis that is taking place within the main strategy and are verified within the main focus of the project.

CHARACTERISTICS OF MIXED METHODS DESIGN

Recall that methodological strategies are tools for inquiry and that methods are cohesive collections of strategies that fit a particular perspective. To incorporate a different strategy into a study is risky and should be done with care, lest the core assumptions of the project be violated. Maintaining the balance between respecting these assumptions and respecting the assumptions underlying your supplemental strategies is delicate, for they may often clash; consider, for instance, the previously mentioned differences in sampling. Consultation regarding this problem may be necessary.2

STRENGTHS AND WEAKNESSES OF MIXED DESIGNS

The major strength of mixed methods designs is that they allow for research to develop as comprehensively and completely as possible.
When compared with a single method, the domain of inquiry is less likely to be constrained by the method itself. Because the supplementary data are often not completely saturated or as in-depth as they would be if they were a study in their own right, certainty is attained by verifying the supplemental data with data strategies used within the core study. On the other hand, the strengths of comprehensiveness from using mixed methods may also be perceived as weaknesses. Your research may be challenged on the grounds of being less rigorous than if a multimethod design were used. For instance, the supplemental data may be considered thin and therefore suspect. The researcher is advised to take care in describing both the methods and the way in which the less saturated data sets and the complementary relationships between data sets were verified. To summarize, the major difference between a single study using multiple strategies (mixed methods design) and a research program using multiple methods is that in the single study the less dominant strategies do not have to be a complete study in themselves. That is, the strategy may be used to develop indicators or to "test the waters" to follow a lead or hunch. If something of interest or importance is found, then this new finding may be used to complement or confirm something new or something that is already known or suspected. Within the research design, the new finding is treated as an indicator. As such, the new finding does not have to be completely verified itself; it does not have to be saturated or confirmed. Rather, the finding may be verified or confirmed elsewhere in another data set.

♦ Multimethod Design

Multiple methods are used in a research program when a series of projects are interrelated within a broad topic and designed to solve an overall research problem. Often—and this is more common in quantitative inquiry, where more is known about the topic and the expected findings—these projects are planned and submitted to a funding agency for program funding. Because of the role of discovery and the inability of the researcher to predict findings when working inductively, obtaining funding for a number of years and several projects is less common in qualitative inquiry.

PRINCIPLE 1: IDENTIFY THE THEORETICAL DRIVE OF THE RESEARCH PROJECT

All research projects, and particularly research programs or clusters of research projects on the same topic, have as an ultimate goal either discovery or testing. The primary way in which the researcher is thinking overall about a research topic is the theoretical drive (Morse, 1991), or the overall thrust of the entire research program. The theoretical drive may be inductive (for discovery) or deductive (for testing). The inductive theoretical drive applies when the researcher is working in the discovery mode, trying to find answers to problems such as the following: What is going on? What is happening? What are the characteristics of ____? What is the meaning of ____? The overall inductive drive does not change even if minor parts of the project are confirmatory or deductive; the researcher is interested only in the major direction of thinking used in the project as a whole. When the theoretical drive in a research program is inductive, the most important projects within the research program will probably be qualitative. As discussed later in the chapter, these studies will probably form the theoretical foundation of the research program.
This does not mean that at particular times the researcher will not be testing ideas, hypotheses, or components of the emerging theory deductively; it only means that in the greater scheme of things, the agenda is one of discovery. If the major thrust of the program is to test a theory or hypothesis, to answer questions of how much or how many, to determine relationships, and so forth, then the theoretical thrust will be deductive. The researcher will probably be using quantitative methods. While the research program may have components of induction or may incorporate qualitative inductive/discovery projects, the overall agenda is one of testing and the theoretical drive is deductive. Because projects that have an inductive theoretical drive may embed minor deductive projects (and conversely, those with a deductive theoretical drive may include minor inductive projects), I prefer to use the term drive to refer to the direction or thrust of the overall design rather than the term dominance (as used by Tashakkori & Teddlie, 1998) or priority decision (Morgan, 1998). Because the minor components (i.e., inductive projects within the deductive program or vice versa) may at any time be to the fore, the term dominance may lead to confusion. It is imperative that the researcher at all times be aware of the mode of inquiry currently being used as well as how the current project fits into the overall agenda. The researcher must have a research question, and furthermore, inquiry is active; one cannot, and should not, have a blank mind when doing research. All projects have either an inductive or a deductive theoretical drive; they can neither be neutral nor be informed equally by inductive and deductive studies.3

PRINCIPLE 2: DEVELOP OVERT AWARENESS OF THE DOMINANCE OF EACH PROJECT

As well as being consciously aware of the thrust of the project, the researcher must also be aware of whether he or she is working inductively or deductively at any given time. This is crucial for successfully combining strategies within a single project or for conducting a research program containing two or more studies. While awareness of the thrust is essential for determining the fit of the results as core or supplemental (i.e., which project forms the core or base into which the results of the other projects fit as supplements), awareness of working inductively or deductively at any given time will ensure that the assumptions of each method are not violated. Awareness of the theoretical drive is best achieved by using uppercase/lowercase notations indicating the major methods (a plus [+] sign indicating that the methods are used simultaneously or an arrow [→] indicating direction), with uppercase representing dominance and lowercase representing the supplemental projects (see Box 7.2).

TYPES OF MULTIMETHOD DESIGNS

We have four possible combinations with an inductive drive and four with a deductive drive. For an inductive theoretical drive, the possibilities are as follows:

1. QUAL + qual for two qualitative methods used simultaneously, one of which is dominant or forms the base of the project as a whole
2. QUAL → qual for two qualitative methods used sequentially, one of which is dominant
3. QUAL + quan for a qualitative and a quantitative method used simultaneously with an inductive theoretical thrust
4. QUAL → quan for a qualitative and a quantitative method used sequentially with an inductive theoretical thrust

For a deductive theoretical drive, the possibilities are as follows:
5. QUAN + quan for two quantitative methods used simultaneously, one of which is dominant
6. QUAN → quan for two quantitative methods used sequentially, one of which is dominant
7. QUAN + qual for a quantitative and a qualitative method used simultaneously with a deductive theoretical drive
8. QUAN → qual for a quantitative and a qualitative method used sequentially with a deductive theoretical drive

Of course, within a research program, one need not be restricted to only two projects; the program itself may be any number of combinations of these projects. However, the theoretical drive, determined by the overall question and design, remains constant within each project. The pacing of projects within the research program is crucial. Research is an evolving puzzle, and all of the pieces (projects) necessary to solve the puzzle might not be seen from the beginning. This is particularly evident when using sequential designs with an inductive drive; additional studies—even ones that are crucial to the overall validity of the project—may emerge as the analysis evolves. Therefore, as projects are added, the investigator must return for additional ethical review.

BOX 7.2 Notations

The plus sign (+) indicates that projects are conducted simultaneously, with the uppercase indicating the dominant project. The arrow (→) indicates that projects are conducted sequentially, again with the uppercase indicating dominance. QUAL indicates a qualitatively-driven project. QUAN indicates a quantitatively-driven project. Therefore, we have eight combinations of triangulated designs:

Simultaneous designs:
QUAL + qual indicates a qualitatively-driven, qualitative simultaneous design.
QUAN + quan indicates a quantitatively-driven, quantitative simultaneous design.
QUAL + quan indicates a qualitatively-driven, qualitative and quantitative simultaneous design.
QUAN + qual indicates a quantitatively-driven, quantitative and qualitative simultaneous design.

Sequential designs:
QUAL → qual indicates a qualitatively-driven project followed by a second qualitative project.
QUAN → quan indicates a quantitatively-driven project followed by a second quantitative project.
QUAL → quan indicates a qualitatively-driven project followed by a quantitative project.
QUAN → qual indicates a quantitatively-driven project followed by a qualitative project.

Projects may have complex designs containing combinations of the above, depending on the scope and complexity of the research program.
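Because the notation is compact and regular, it can be rendered mechanically. The sketch below, which is an illustration and not part of Morse's chapter, treats each design as a record of core method, supplemental method, and pacing, and prints the corresponding Box 7.2 notation; the class and field names are invented.

```python
# Illustrative model of the Box 7.2 notation: uppercase marks the core
# (theoretically driving) method, lowercase the supplement, and the
# pacing symbol distinguishes simultaneous (+) from sequential (->).
from dataclasses import dataclass

@dataclass(frozen=True)
class Design:
    core: str           # "QUAL" or "QUAN"
    supplement: str     # "qual" or "quan"
    simultaneous: bool  # True for "+", False for the sequential arrow

    def notation(self) -> str:
        pacing = "+" if self.simultaneous else "->"
        return f"{self.core} {pacing} {self.supplement}"

# Enumerate the eight triangulated designs listed in Box 7.2.
for core, supplement in [("QUAL", "qual"), ("QUAN", "quan"),
                         ("QUAL", "quan"), ("QUAN", "qual")]:
    for simultaneous in (True, False):
        print(Design(core, supplement, simultaneous).notation())
```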
CHARACTERISTICS OF MULTIMETHOD DESIGNS

The major difference between multimethod and mixed methods designs is that in a multimethod design all projects are complete in themselves. The major research question or problem drives the research program, but the program consists of two or more interrelated studies. Overall, the project retains either an inductive or a deductive theoretical drive, but projects conducted simultaneously or sequentially within the umbrella of the main project may have an inductive or a deductive drive depending on whether, at a particular point, the researcher needs to discover or confirm. It is the results of each method that inform the emerging conceptual scheme as the investigator addresses the overall research question. When using a multimethod design, data are not usually combined within projects, as may occur in a mixed methods design when, for instance, textual data are transformed to numerical data and used in the analysis of a quantitative study. Rather, in a multimethod design, each study is planned and conducted to answer a particular subquestion. In qualitatively-driven multimethod designs, these questions usually arise from the previous project and are therefore conducted sequentially; if more than one question arises, then the two projects may be conducted simultaneously. For quantitatively-driven multimethod designs, several projects designed to address one topic may be planned in advance at the proposal stage, and frequently major funded grants include several projects designed to address one topic. In this case, the results of one project are not usually dependent on the findings of earlier projects, and results are anticipated as hypotheses or as pieces in the theoretical framework. If unanticipated findings are obtained, then the whole project has to be reconsidered as a new project, perhaps even with qualitative projects added to the research program. Thus, the results from the supplemental projects are fitted into the base project.

Simultaneous Designs. When used concurrently, one method usually drives the project theoretically. That is, one method forms the basis of the emerging theoretical scheme. This base project has more comprehensive relevance to the topic and is usually conceived at the design phase. The "supplemental" project(s) may be planned to elicit information that the base method cannot achieve or so that the results inform, in greater detail, one part of the dominant project.

Sequential Designs. When used sequentially, the method that theoretically drives the project is usually conducted first, with the second method designed to resolve problems/issues uncovered by the first study or to provide a logical extension from the findings of the first study.

PRINCIPLE 3: RESPECT METHODOLOGICAL INTEGRITY

When using a multimethod design, keep each method intact. It is important not to violate the assumptions, sampling (appropriateness and adequacy of data), and so forth of each method. Keep in mind that it is the results of each project that are triangulated to inform the research problem.

Specific Multimethod Designs

Designs With an Inductive Drive. The first four designs discussed in what follows are those with an inductive theoretical drive. That is, they are primarily used for developing description and for deriving meaning and interpretation of the phenomenon, thus forming the foundation of the program.

1. QUAL + qual

This indicates that two qualitative methods are used simultaneously, one of which is dominant or forms the base of the project as a whole.

Types of Research Problems. This design is used when it is necessary to obtain more than one perspective on a research topic. One qualitative method will be dominant, with the second method used to provide additional insights.

Example. Morse was interested in the provision of nurse comforting to patients and proposed three simultaneous projects. The first project was a grounded theory project to identify the process of providing comfort, and this study formed the base of the research program. Supplementary projects included an ethnographic study to explore the context of comfort and a phenomenological study to elicit the meaning of comfort (see the proposal in Morse & Field, 1995, pp. 197-235).
Design Issues. When conducting several qualitative projects that are interrelated but separate, one project remains dominant or forms the base of the project as a whole, while the findings of the supplementary project inform or add to the results of the dominant one. For instance, a grounded theory project may form the base of a project, and a phenomenological study may inform the grounded theory, providing additional insight.

Sampling Strategies. Can the same person participate in more than one study? Perhaps—if it is feasible and appropriate. In the preceding example, different participants were used; the grounded theory study used participants who had been through the experience and had been discharged from the rehabilitation hospital, whereas for the ethnographic study the participants were inpatients. Can the same data be used for more than one study? If data are pertinent and in the right form, then they may be used. However, if data are old or perhaps do not directly address the central research topic, then it is prudent to collect new data. Or, perhaps it may be feasible to use the first data set and collect supplemental confirmatory data (Thorne, 1994). In the comfort studies mentioned previously, the data for the phenomenological study fit into the latter category. Morse and Field (1995) used some data already collected and also conducted new interviews.

Methodological Congruence. Good research is more than just using sets of data collection strategies; it is also a way of linking the philosophical foundations of the project with a particular question that will be best answered using a particular sampling strategy linked within the methodological framework. In other words, each method has a distinct way of thinking about and approaching a research problem. Phenomenological research must be congruent with the assumptions and strategies of phenomenology, grounded theory research must be congruent with the assumptions and strategies of grounded theory, and so forth.

Triangulation of Results. As stated previously, it is the results of each separate study that inform the researcher about the topic. The base project is usually the study that is most comprehensive. In the Morse comfort study discussed previously (Morse & Field, 1995), the grounded theory study formed the base project. The grounded theory study provided information on the process; the stages of comforting; why and how comforting interactions were initiated; and how certain comforting actions were given, under what conditions, and why. The phenomenological study informed us about what it meant to experience discomfort and about different aspects of bodily comfort. The ethnographic study informed us about the nurse-patient interactions, how other patients competed for the nurses' time, how nurses decided when to give comfort, how they read patient cues, and so forth. By placing these pieces together, we could then build a midrange theory of comfort—one that was more comprehensive than the grounded theory findings alone and that provided us with information that we would not have obtained if we had used only a single method. One warning, however, is that the results of the studies triangulated might not be on the same level of abstraction. Some studies may be more abstract, whereas others may be less so and more microanalytical. Some studies may inform only one part of the base project, whereas others may inform all aspects.
For instance, the findings of the phenomenological study had broad application for many aspects of the emerging theory, whereas the ethnographic study was primarily useful for providing information during the acute care phase.

2. QUAL → qual

Two qualitative methods may be used sequentially, and one—usually the first one conducted—is dominant. These studies may use different qualitative methods, for instance, a grounded theory study followed by a phenomenological study supplementing findings in the first stage (see Wilson & Hutchinson, 1991). These studies may also use the same method but be at different levels of analysis.

Example. Wilson used grounded theory to explore the experience of caregivers of persons with Alzheimer's disease (Wilson, 1989b). She then conducted a second study, also using grounded theory, to explicate the process within one of the phases of the first study (Wilson, 1989a).

Design Issues. Both studies are independent. However, data obtained from the first study may be used in the second study if appropriate (i.e., relevant and in the correct form).

Sampling Strategies. If appropriate (i.e., if participants have had the necessary experiences), the same participants may participate in both studies. Alternatively, new participants may be sought using the principles of sampling for qualitative research.

Methodological Congruence. All procedures used in each method must adhere to, and be consistent with, the method selected.

Triangulation of Results. Clearly, the strength of sequential projects is that they can be viewed as a set. Yet despite the logical progression of these projects, because each study is self-contained, researchers frequently publish these studies separately, so that the interaction between the two studies is often difficult to appreciate. Occasionally, however, a researcher does prepare a review article or a monograph fully integrating the entire research program.

3. QUAL + quan

A qualitative method used simultaneously with a quantitative method with an inductive theoretical thrust is used when some portion of the phenomenon may be measured, and this measurement enhances the qualitative description or interpretation.

Example. An ethnographic study exploring responses to parturition pain in Fijian and Fiji Indian women revealed that the response to pain varied between the two cultural groups. Interviews with traditional birth attendants provided the cultural context for the interpretation of the behaviors. A paired comparisons test, comparing common painful events such as childbirth, enabled measurement of pain attribution in each culture. Thus, the study extended Zborowski's (1969) finding that pain behavior is culturally transmitted and found that the amount of pain associated with various conditions (and pain expectation) also differs between cultures (Morse, 1989).

Design Issues. Each project must be methodologically exquisite, independent, and adherent to its own methodological assumptions.

Sampling Strategies. The qualitative study necessarily uses a small purposeful sample, and the quantitative study uses a larger, randomly selected sample. Therefore, different sampling strategies must be used for each study. This begs the question: Can the same participants be used for both studies? Yes; if participants from the qualitative study are selected in the quantitative study's randomization process, then they may participate in the quantitative study.
Methodological Congruence. Again, each project is complete in itself, and it is the results of the quantitative project that inform the qualitative project.

Triangulation of Results. Once the projects have been completed, the results of the quantitative project are used to provide details for the qualitative project.

4. QUAL → quan

This design is used when a qualitative and a quantitative method are used sequentially with an inductive theoretical thrust.

Types of Research Problems. This design is most often used to develop a model or theory and then to test the theory. Note that while the testing is the second, quantitative component (and forms a deductive phase), the overall theoretical thrust is inductive.

Example. A research program investigating adolescents' responses to menarche consisted of five projects. First, a qualitative project used semistructured questions to determine the experiences of seventh- and eighth-grade girls with menarche and to establish the dimensions of the experience (Morse & Doan, 1987). Second, using the qualitative analysis, a Likert scale was developed (Morse, Kieren, & Bottorff, 1993) using categories such as the dimensions, the textual data, and the adolescents' verbal expressions to form the scale items. This instrument was tested with 860 premenarcheal girls and 1,013 postmenarcheal girls from 49 randomly selected schools. The authors then revised the scale and obtained reliability and validity statistics and normative data (Morse & Kieren, 1993). Quantitative studies were then conducted to determine adolescents' preparation for menstruation (Kieren & Morse, 1992) and the influence of developmental factors on attitudes toward menstruation (Kieren & Morse, 1995). Regardless of the fact that most of these projects were quantitative, all of the projects rested on the first qualitative project (which is considered the core project), and the theoretical drive of the project remained inductive.

Design Issues. As with the previous categories, each project must be methodologically independent, exquisite, and adherent to its own methodological assumptions.

Sampling Strategies. The samples are distinct, with the qualitative study using a small purposeful sample and the quantitative study using a larger, randomly selected sample. Because of the time lapse between the two studies, it is unlikely that they will have participants in common.

Methodological Congruence. Each study is distinct, and each is congruent with its own assumptions.

Triangulation of Results. The quantitative study moves the research program along by confirming the earlier qualitative findings. What happens if the qualitative findings are not confirmed? Depending on the discrepancy, the researcher must regroup. If it is clear that the model or theory is incorrect, then the researcher must consider why. Perhaps another qualitative study using a different design, or another quantitative study, will have to be conducted. However, it is difficult to find examples of this problem given that a researcher's failures are rarely published; more likely, the qualitative study will result in minor modifications of the theory.
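One step in programs like this, obtaining reliability statistics for a scale whose items were built from qualitative categories, can be illustrated concretely. The sketch below computes Cronbach's alpha for a small invented set of Likert ratings; it illustrates the general technique only and is not the Morse-Kieren instrument or its data.

```python
# Cronbach's alpha: a standard internal-consistency statistic of the
# kind reported when a scale derived from qualitative analysis is
# tested quantitatively. The ratings below are invented.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: a respondents-by-items matrix of Likert ratings."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)      # per-item variance
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of scale totals
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

ratings = np.array([[4, 5, 4, 3],
                    [2, 2, 3, 2],
                    [5, 4, 5, 4],
                    [3, 3, 2, 3],
                    [4, 4, 4, 5]])
print(round(cronbach_alpha(ratings), 3))  # about 0.892 for these data
```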
Designs With a Deductive Theoretical Thrust

The following designs are used primarily for hypothesis or theory testing.

5. QUAN + quan

This is a research program consisting of two quantitative methods used simultaneously, one of which is dominant.

Types of Research Problems. This is the most common type of triangulation, in which a research question demands the administration of several instruments, all of which are related to, and measure different dimensions of, the same overall question. One instrument is usually more pertinent to the research question than the other(s) because it measures the concept most directly. The other instruments may be administered as a validity check, to measure aspects of the concept that the first one might not include, or to measure associated or allied concepts.

Example. A study of coping may also include measures of stress and measures of social support, and the participants may be given the test battery in one sitting.

Design Issues. Because all of the tests are administered to the same participants, test burden may be a problem. The time for testing may be lengthy, and participants may become tired.

Sampling Strategies. The battery of tests is administered to the same sample, preferably randomly selected from a defined population.

Methodological Congruence. Test and subscale scores are correlated with the most direct measure.

Triangulation of Results. Results are triangulated by determining statistical correlations between the measures.
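As a minimal illustration of that last step, the sketch below correlates each supplemental measure in a hypothetical coping-stress-support battery with the most direct measure of the concept. All scores and scale names are invented.

```python
# Hypothetical QUAN + quan battery: correlate each supplemental
# measure with the most direct measure of the core concept (coping).
import pandas as pd

battery = pd.DataFrame({
    "coping_direct":  [22, 31, 18, 27, 35, 24],  # most direct measure
    "stress":         [40, 28, 45, 33, 25, 38],  # allied concept
    "social_support": [15, 24, 12, 20, 27, 18],  # allied concept
})

# Pearson correlations of every measure with the direct coping score.
print(battery.corr()["coping_direct"])
```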
6. QUAN → quan

This design is used when two quantitative methods are used sequentially. The first study is usually dominant, with the second study conducted to elicit further information about particular dimensions of the first study.

Example. A study was conducted to identify factors that contributed to patient falls. Data on a large number of variables thought to contribute to the risk of falling were collected from 100 patients who fell and 100 controls. Using computer modeling and discriminant analysis techniques, a scale to identify those at risk of falling was developed. This scale was subsequently tested on six patient care units (Morse, 1997).

Types of Research Problems. Quantitative research programs are usually used when considerable work has already been conducted in the area. Researchers have enough information to know the relevant variables, to be able to construct a theoretical framework, and to make hypotheses about the expected results.

Design Issues. Because the studies are conducted at different times, if the second study is delayed and the time lapse between the two studies is prolonged, the setting or the study populations may have undergone change. This would reduce the comparability of the results.

Sampling Strategies. Samples are large, predetermined by power calculations according to expected group differences and the amount of error tolerable, and usually randomly selected.

Methodological Congruence. Quantitative methods are usually well explicated, and the assumptions are usually well described.

Triangulation of Results. Again, it is usually the results of the studies that inform the researcher about the emerging model.

7. QUAN + qual

This design is used when a quantitative and a qualitative method are used simultaneously with a deductive theoretical drive.

Types of Research Problems. A theoretical model is created from the literature and previous research and is tested quantitatively. Because some of the components might not be quantifiable, or might require explanation or illustration, a qualitative study is conducted concurrently.

Example. A study of infant feeding in Fiji was conducted to determine the influence of breast- and bottle-feeding on infant health. Regression analysis was conducted on data obtained from infants. Ethnographic interviews conducted with Fijian and Fiji Indian women provided contextual data that enabled further interpretation of the quantitative data (Morse, 1984).

Design Issues. Due to the quantitative core of the project, this design has less flexibility than its qualitative equivalent.

Sampling Strategies. The main study uses a quantitative sample (large and preferably randomized). If the qualitative sample is drawn from the quantitative sample, then principles of qualitative sampling must be respected, including the qualities of a "good informant" (Spradley, 1979). How are these participants located? The sample may be selected from the participants of the quantitative study. Seek assistance from interviewers, and ask them to assist with the purposeful selection by making recommendations.

Methodological Congruence. When using this design, it is tempting not to saturate the qualitative data. Recall that both studies must be complete in themselves.

Triangulation of Results. The description is primarily from the quantitative data, with qualitative description enhancing particular aspects of the study.

8. QUAN → qual

This design consists of a quantitative study followed by a qualitative study. The studies are conducted sequentially using a deductive theoretical drive, although induction is used in the second project.

Types of Research Problems. This design is most frequently used when the quantitative study produces unexpected or unanticipated findings, and a qualitative study is then conducted to ascertain the reasons for the results or to find out what is going on.

Example. A survey of a small town produced some unexpected results, requiring the investigators to step back and reexamine some assumptions about certain parts of the community.

Design Issues. The two projects are independent and may even be conducted by different research teams.

Sampling Strategies. The first study uses a quantitative sample (large and randomized). For the second study, a separate purposeful qualitative sample is selected.

Methodological Congruence. The quantitative study is usually completed prior to the initiation of the qualitative study.

Triangulation of Results. The qualitative study provides explanations for particular parts of the quantitative study.

STRENGTHS AND WEAKNESSES OF MULTIMETHOD DESIGNS

The obvious strength of using a multimethod design is that it provides one with a different perspective on the phenomenon. While some authors have described this view or perspective as "having a different lens" or a different side (as provided by a crystal) (Sandelowski, 1995), the real strength in using multiple methods is to obtain a different level of data. For instance, one may conduct observational research and obtain information on group behavior and then conduct a microanalysis study of touching behavior. These two studies are interdependent and together provide a more comprehensive picture than either would alone. The credence and weight that one places on the findings are important; again, these judgments are made on the study findings once the studies have been completed.

♦ Discussion

The pacing of the projects is important and is dictated by the theoretical drive. If the results of the first project are needed to plan the next study, then it is clear that the two projects should be conducted sequentially. If, on the other hand, the first project is lacking and incomplete without the second project, then the two projects should be conducted simultaneously.
I have written about maintaining the independence of these projects until they are completed. I add a warning here that these projects should not contaminate each other—in particular, if the two projects are qualitative. Ideally, the staff for each project should be separate to prevent cross-fertilization of ideas and data.

Will the triangulation of multiple studies always work? Will it always enhance the results? An interesting question that has been raised is whether a researcher should expect convergence when exploring a phenomenon from two different perspectives using different methods. Should the phenomenon even be recognizable? Chesla (1992) noted that the difference in the methods themselves may account for the differences in the findings. She used an example of the measurement of coping with a card sort procedure (in which coping is scored according to whether it is seen as pertaining to the group, whether the work is considered ordered and masterful, and whether a problem is approached as novel or as a repetition of past dilemmas [Chesla, 1992; Reiss, 1981]) versus qualitative findings that classified coping in couples as having identical, congruent, or conflicting coping patterns (Chesla, 1989). Chesla (1992) noted that, in such cases, the preponderance of evidence lies with the qualitative narrative data (although they were collected from a relatively small number of participants), as opposed to the quantitative data, and recognized correctly that differences in the context of data collection (e.g., public vs. private, family vs. investigator-controlled) and the degree of structure in the collection process may circumscribe findings. Rather than considering the theoretical drive, as recommended in this chapter, Chesla advised including some form of "synthesis" for weighing the evidence and resolving this dilemma, although that was not possible with her data. Would an "armchair walkthrough" (Morse, 1999) have prevented this dilemma? Perhaps. Researchers should always be cognizant of the nature of their research findings, and what may or may not be "done" with them, even before their studies commence so that they will not be blindsided.

Some authors have considered triangulation as a form of convergent validity, somewhat resembling a test for construct validity (Zeller & Carmines, 1980). This is a very poor reason for triangulation; two strong studies do not necessarily give a project more credence than does one study, and they require twice as much work. Nevertheless, demonstrating validity is a possible rationale for triangulated studies, but it should not be the main one. With the exception of divergent results such as those discussed in the preceding example, researchers should expect some overlap in findings, especially when using two qualitative or two quantitative methods. That overlap might not be helpful, however, as in the case of the frustrated parent who obtained divergent accounts from two children about "how the window was broken." If researchers trust their own abilities as researchers, and their own methods as valid and rigorous, then two methods should not have an advantage over one method; in fact, one could argue that using multiple methods for verification may be a waste of time and energy.

Weinholtz, Kacer, and Rocklin (1995) made a case for "salvaging" a quantitative study with qualitative case studies.
They argued that quantitative studies may often be "ambiguous and misleading" if not supplemented with qualitative data. This may be true, but the poor quality of the quantitative study must then be addressed; while qualitative work may often enhance quantitative studies, QUAN + qual triangulation is not a remedy for a poor quantitative study.

Researchers must always be aware of the goal of inquiry, whether using qualitative or quantitative inquiry or some form of mixed methods or multimethod design. Again, research strategies and methods are only tools—tools that are only as good as the researcher's knowledge and skill. Inquiry is not a passive process but rather an active one for which the researcher—not the method, not the participants, and not the setting—is responsible for the outcome. Building one's toolbox, both qualitatively and quantitatively, aids the quality of one's research, as does thoughtful, deliberate action coupled with foresight and skill.

■ Notes

1. While Tashakkori and Teddlie (1998) used mixed methods design to designate the combining of qualitative and quantitative strategies, in this chapter I also include the incorporation into a qualitative project of qualitative strategies that are not normally used with that particular method and, conversely, the incorporation into a quantitative project of quantitative strategies that are not normally a part of that particular quantitative method. Hence, the label mixed methods design is still applicable.

2. Because the strategies that are incorporated are often not anticipated earlier in the project and not described in the original proposal, it is important to obtain ethical review board clearance for additional data collection strategies. Clearance is usually obtained by filing a minor change report to the university committee and the ethical review board committee that is responsible for the agency in which the research is being conducted. It may also be prudent, if you are a graduate student, to obtain the blessing for the revised research design from your supervisory committee.

3. In fact, using the preceding description of theoretical drive, the "equal status mixed methods design" presented by Tashakkori and Teddlie (1998, p. 45) is actually a project that has an inductive thrust.

■ References

Benoliel, J. Q. (1996). Grounded theory and nursing knowledge. Qualitative Health Research, 6, 406-428.

Breitmayer, B. J., Ayres, L., & Knafl, K. (1993). Triangulation in qualitative research: Evaluation of completeness and confirmation purposes. Image: Journal of Nursing Scholarship, 25, 237-243.

Chesla, C. A. (1989). Parents' illness models of schizophrenia. Archives of Psychiatric Nursing, 3, 218-225.

Chesla, C. A. (1992). When qualitative and quantitative findings do not converge. Western Journal of Nursing Research, 14, 681-685.

Fine, G., & Smith, G. (2000). Erving Goffman (Vol. 1). Thousand Oaks, CA: Sage.

Glaser, B. G. (1978). Theoretical sensitivity. Mill Valley, CA: Sociology Press.

Goffman, E. (1989). On field work. Journal of Contemporary Ethnography, 18(2), 123-132.

Kieren, D., & Morse, J. M. (1992). Preparation factors and menstrual attitudes of pre- and postmenarcheal girls. Journal of Sex Education and Therapy, 18, 155-174.

Kieren, D. K., & Morse, J. M. (1995). Developmental factors and pre- and postmenarcheal menstrual attitudes. Canadian Home Economics Journal, 45(2), 61-67.

Morgan, D. L. (1998). Practical strategies for combining qualitative and quantitative methods: Applications to health research. Qualitative Health Research, 8, 362-367.
Morse, J. M. (1984). Breast- and bottle-feeding: The effect on infant weight gain in the Fiji-Indian neonate. Ecology of Food and Nutrition, 15, 109-114.

Morse, J. M. (1989). Cultural responses to parturition: Childbirth in Fiji. Medical Anthropology, 22(1), 35-44.

Morse, J. M. (1991). Approaches to qualitative-quantitative methodological triangulation. Nursing Research, 40(2), 120-123.

Morse, J. M. (1997). Preventing patient falls. Thousand Oaks, CA: Sage.

Morse, J. M. (1999). The armchair walkthrough [editorial]. Qualitative Health Research, 9, 435-436.

Morse, J. M., & Doan, H. (1987). Growing up at school: Adolescents' response to menarche. Journal of School Health, 57, 385-389.

Morse, J. M., & Field, P. A. (1995). Qualitative approaches to nursing research (2nd ed.). London: Chapman & Hall.

Morse, J. M., & Kieren, D. (1993). The Adolescent Menstrual Attitude Questionnaire, Part II: Normative scores. Health Care for Women International, 14, 63-76.

Morse, J. M., Kieren, D., & Bottorff, J. L. (1993). The Adolescent Menstrual Attitude Questionnaire, Part I: Scale construction. Health Care for Women International, 14, 39-62.

Reiss, D. (1981). The family's construction of reality. Cambridge, MA: Harvard University Press.

Sandelowski, M. (1995). Triangles and crystals: On the geometry of qualitative research. Research in Nursing and Health, 18, 569-574.

Spradley, J. (1979). The ethnographic interview. New York: Holt, Rinehart & Winston.

Stern, P. N. (1994). Eroding grounded theory. In J. Morse (Ed.), Critical issues in qualitative research methods (pp. 214-215). Thousand Oaks, CA: Sage.

Tashakkori, A., & Teddlie, C. (1998). Mixed methodology: Combining qualitative and quantitative approaches (Applied Social Research Methods, No. 46). Thousand Oaks, CA: Sage.

Thorne, S. (1994). Secondary analysis in qualitative research. In J. M. Morse (Ed.), Critical issues in qualitative research methods (pp. 263-279). Thousand Oaks, CA: Sage.

van Manen, M. (1990). Researching lived experience. London, Ontario: Althouse Press.

Weinholtz, D., Kacer, B., & Rocklin, T. (1995). Salvaging quantitative research with qualitative data. Qualitative Health Research, 5, 388-397.

Wilson, H. S. (1989a). Family caregiving for a relative with Alzheimer's dementia: Coping with negative choices. Nursing Research, 38(2), 94-98.

Wilson, H. S. (1989b). Family caregivers: The experience of Alzheimer's disease. Applied Nursing Research, 2(1), 40-45.

Wilson, H. S., & Hutchinson, S. A. (1991). Triangulation of qualitative methods: Heideggerian hermeneutics and grounded theory. Qualitative Health Research, 1, 263-276.

Zborowski, M. (1969). People in pain. San Francisco: Jossey-Bass.

Zeller, R. A., & Carmines, E. G. (1980). Measurement in the social sciences. Cambridge, UK: Cambridge University Press.

ADVANCED MIXED METHODS RESEARCH DESIGNS

John W. Creswell
Vicki L. Plano Clark
Michelle L. Gutmann
William E. Hanson

One approach to learning about mixed methods research designs is to begin with a mixed methods study and explore the features that characterize it as mixed methods research. Although many such studies are available in the literature, we begin here with a study in education exploring the factors associated with parental savings for postsecondary education, a topic to which many people can relate.
Hossler and Vesper (1993) conducted a study examining the factors associated with parental savings for children attending higher education campuses. Using longitudinal data collected from students and parents over a 3-year period, the authors examined the factors most strongly associated with parental savings for postsecondary education. Their results indicated that parental support, educational expectations, and knowledge of college costs were important factors. Most important for our purposes, the authors collected information from parents and students on 182 surveys and from 56 interviews. To examine this study from a mixed methods perspective, we would like to draw attention to the following:

♦ The authors collected "mixed" forms of data, including quantitative survey data and qualitative open-ended interview data.

♦ The authors titled the study "An Exploratory Study of the Factors Associated With Parental Savings for Postsecondary Education," containing words suggestive of both quantitative and qualitative approaches. The word exploratory is often associated with qualitative research, while the word factors implies the use of variables in quantitative research.

♦ The authors advanced a purpose statement that included a rationale for mixing methods: "The interviews permitted us to look for emerging themes from both the survey and from previous interview data, which could then be explored in more depth in subsequent interviews" (p. 146).

♦ The authors reported two separate data analyses: first the quantitative results of the survey, followed by the findings from the qualitative interviews. An examination of these two sections shows that the quantitative analysis is discussed more extensively than the qualitative analysis.

♦ The authors ended the article with a discussion that compared the quantitative statistical results with the qualitative thematic findings.

Based on these features, we see the authors mixing quantitative and qualitative research in this study—mixed methods research. More specifically, with information from recent literature on mixed methods research designs, the "type" of mixed methods design used by Hossler and Vesper (1993) might be called a "concurrent triangulation design," indicating a triangulation of data collection, separate data analysis, and the integration of databases at the interpretation or discussion stage of the report. Furthermore, their design gave priority to quantitative research.

Giving their study a mixed methods name and identifying the characteristics of the design may not have affected whether it was accepted for publication or whether it was given enhanced status in the social science community. However, being able to identify the characteristics that make the study mixed methods, and giving the design a specific name, conveys to readers the rigor of the study. It also provides guidance to others who merge quantitative and qualitative data into a single study. When a study is presented to journal editors, faculty committees, or funding agencies, labeling the design and identifying its characteristics help reviewers to decide on the criteria and the personnel most qualified to review it. Finally, if Hossler and Vesper (1993) had created a visual representation or figure of their procedures, it would have enhanced the study's readability for audiences not used to seeing complex and interrelated data collection and analysis procedures.
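Integration at the interpretation stage, as in the Hossler and Vesper (1993) report, can be pictured as two self-contained analyses whose outputs are juxtaposed only at the end. The sketch below illustrates that pattern in miniature; every variable name and value is hypothetical and is not drawn from their study.

from collections import Counter
import statistics

# Quantitative strand: hypothetical survey scores on a savings-behavior scale,
# grouped by parents' educational expectations.
savings_scores = {
    "high expectations": [4.1, 3.8, 4.5, 4.2],
    "low expectations": [2.9, 3.1, 2.7, 3.3],
}
quan_results = {group: round(statistics.mean(s), 2) for group, s in savings_scores.items()}

# Qualitative strand: hypothetical theme codes tallied from open-ended interviews.
interview_codes = [
    "college costs", "parental support", "college costs",
    "expectations", "parental support", "college costs",
]
qual_results = Counter(interview_codes)

# Interpretation stage: only here are the two analyses brought together.
print("Quantitative (mean savings score by group):", quan_results)
print("Qualitative (theme frequencies):", qual_results.most_common())

Keeping the two analyses separate, with only the final statements juxtaposing them, mirrors integration at the interpretation or discussion stage of a report.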
Like many other studies of its kind, the Hossler and Vesper (1993) study falls into a category of research called mixed methods designs. Although these studies are frequently reported in the literature, they are seldom discussed as a separate research design. However, with an increasing number of authors writing about mixed methods research as a separate design, it is now time to seriously consider it as a distinct design in the social sciences. To do this calls for a review of disparate literature about mixed methods research designs found in journals across the social sciences as well as in chapters, books, and conference papers.

This chapter presents a synthesis of recent literature about mixed methods research as a separate design. It offers an analysis of the discussion today and of its historical roots over the past 20 years. It then reviews four criteria that have emerged during the past few years that provide guidance for a researcher trying to identify the type of mixed methods design to use in a particular study. From these criteria emerge six core designs under which many of the types of design currently being discussed can be subsumed. We then review three issues in implementing the designs: the use of paradigm perspectives, the data analysis procedures used with each design, and the use of expanded visualizations and procedures. We end by returning to the Hossler and Vesper (1993) study to review how it might be presented and understood as a mixed methods design.

♦ Mixed Methods Research as a Separate Design

There are a number of arguments for why mixed methods research might be considered a separate research design in the social sciences. By design, we mean a procedure for collecting, analyzing, and reporting research such as that found in the time-honored designs of quantitative experiments and surveys and in the qualitative approaches of ethnographies, grounded theory studies, and case studies. The arguments take several forms. Authors have increasingly recognized the advantages of mixing both quantitative and qualitative data collection in a single study. Numerous mixed methods studies have been reported in the scholarly journals for social scientists to see and use as models for their own studies. In addition, authors have delineated more carefully a definition for mixed methods research, although consensus has been slow to develop for a single definition recognized by all inquirers. Finally, authors who write about methods and methodology for mixed methods research have identified procedures that point toward critical design elements such as a visual model of procedures, a notation system, the explication of types of designs, and specific criteria useful in deciding what type of design to employ in a given study.

A RECOGNITION OF ADVANTAGES

The collection and combination of both quantitative and qualitative data in research has been influenced by several factors. Unquestionably, both quantitative and qualitative data are increasingly available for use in studying social science research problems. Also, because all methods of data collection have limitations, the use of multiple methods can neutralize or cancel out some of the disadvantages of certain methods (e.g., the detail of qualitative data can provide insights not available through general quantitative surveys) (Jick, 1979). Thus, there is wide consensus that mixing different types of methods can strengthen a study (Greene & Caracelli, 1997).
Qualitative research has become an accepted and legitimate form of inquiry in the social sciences, and researchers of all methodological persuasions recognize its value in obtaining detailed contextualized information. Also, because social phenomena are so complex, different kinds of methods are needed to best understand these complexities (Greene & Caracelli, 1997).

PUBLISHED MIXED METHODS STUDIES

Given these advantages, authors writing about mixed methods research have frequently analyzed published mixed methods studies in terms of their procedures. For example, Greene, Caracelli, and Graham (1989) reviewed 51 evaluation studies so as to develop a classification scheme of types of designs based on purpose and design characteristics. Creswell, Goodchild, and Turner (1996) discussed 19 mixed methods studies about postsecondary education and illustrated steps in the studies. The "box feature" was used extensively in Tashakkori and Teddlie's (1998) book to illustrate examples of mixed methods research projects. In fact, a review of the many procedural discussions about mixed methods research [see Datta's (1994) review of 18 methodological discussions about mixed methods research from 1959 to 1992] shows references to published studies across the social science disciplines.

THE ISSUE OF DEFINITION

Finding these published studies, however, requires some creative searching of the literature. The actual terms used to denote a mixed methods study vary considerably in the procedural discussions of this design. Writers have referred to it as multitrait-multimethod research (Campbell & Fiske, 1959), integrating qualitative and quantitative approaches (Glik, Parker, Muligande, & Hategikamana, 1986-1987; Steckler, McLeroy, Goodman, Bird, & McCormick, 1992), interrelating qualitative and quantitative data (Fielding & Fielding, 1986), methodological triangulation (Morse, 1991), multimethodological research (Hugentobler, Israel, & Schurman, 1992), multimethod designs and linking qualitative and quantitative data (Miles & Huberman, 1994), combining qualitative and quantitative research (Bryman, 1988; Creswell, 1994; Swanson-Kauffman, 1986), mixed model studies (Datta, 1994), and mixed methods research (Caracelli & Greene, 1993; Greene et al., 1989; Rossman & Wilson, 1991). Central to all of these terms is the idea of combining or integrating different methods. The term mixed methods is perhaps most appropriate, although one of the authors of this chapter has used others (Creswell, 1994; Creswell et al., 1996; Creswell & Miller, 1997). Mixing provides an umbrella term to cover the multifaceted procedures of combining, integrating, linking, and employing multiple methods.

To argue for mixed methods research as a specific research design requires not only an accepted term but also a common definition. Building on earlier definitions of mixed methods research (Fielding & Fielding, 1986; Greene et al., 1989), a mixed methods research design at its simplest level involves mixing both qualitative and quantitative methods of data collection and analysis in a single study (Creswell, 1999). A more elaborate definition would specify the nature of data collection (e.g., whether data are gathered concurrently or sequentially), the priority each form of data receives in the research report (e.g., equal or unequal), and the place in the research process at which "mixing" of the data occurs, such as in the data collection, analysis, or interpretation phase of inquiry.
Combining all of these features suggests the following definition: A mixed methods study involves the collection or analysis of both quantitative and qualitative data in a single study in which the data are collected concurrently or sequentially, are given a priority, and are integrated at one or more stages in the process of research.

This definition, although a reasonable beginning point for considering mixed methods research designs, masks several additional questions that are developed further in this chapter. For example, this definition does not account for multiple studies within a sustained program of inquiry in which researchers may mix methods at different phases of the research. It also creates an artificial distinction between quantitative and qualitative methods of data collection that may not be as firmly in place as people think (see Johnson and Turner's detailed discussion about types of data in Chapter 11 of this volume). Furthermore, it does not account for a theoretical framework that may drive the research and create a larger vision in which the study may be posed.

THE TREND TOWARD PROCEDURAL GUIDELINES

The history of mixed methods research has been adequately traced elsewhere (see Creswell, 2002; Datta, 1994; Tashakkori & Teddlie, 1998). Central to this discussion is the development of procedural guidelines that argue for viewing mixed methods research as a separate design. The evolution of procedural guidelines for mixed methods studies is seen in the creation of visual models, a notation system, and the specification of types of designs.

Visual Models. Procedures for conducting a mixed methods study first emerged from discussions in which authors described the flow of activities typically used by researchers when they conducted this type of study. For example, Sieber (1973) suggested the combination of in-depth case studies with surveys, creating a "new style of research" and the "integration" of research techniques within a single study (p. 1337). Patton (1990) identified several "mixed forms" of research, such as experimental designs with qualitative data and content analysis, or experimental designs with qualitative data and statistical data. Soon, writers began to draw procedures graphically and create figures that displayed the overall flow of research activities. A good example of these visuals is found in health education research. As shown in Figure 8.1, Steckler et al. (1992) provided four alternative procedures for collecting both quantitative and qualitative research and gave a brief rationale for combining methods. Their models show both quantitative and qualitative methods (actually data collection) and use arrows to indicate the sequence of activities in the mixed methods study. Models 2 and 3 are similar except that the procedures begin with qualitative data in Model 2 and with quantitative data in Model 3.

Notation System. Models such as these provide a useful way for readers to understand the basic procedures used in mixed methods studies. Implied in these models is also the idea that a notation system exists to explain the procedures. In 1991, Morse, a nursing researcher, developed a notation system that has become widely used by researchers designing mixed methods studies (see also Morse's discussion of her notation system and types of designs in Chapter 7 of this volume).
As shown in Figure 8.2, Morse discussed several types of mixed methods studies and illustrated them with a plus (+) sign to denote the simultaneous collection of quantitative and qualitative data, an arrow (->) to designate that one form of data collection followed another, uppercase letters to suggest major emphasis (e.g., QUAN, QUAL) on the form of data collection, and lowercase letters to imply less emphasis (e.g., quan, qual). It is also noteworthy that the terms quantitative and qualitative were now shortened to quan and qual, respectively, implying that both approaches to research are legitimate and of equal stature.

[Figure 8.1. Example of Visual Presentation of Procedures. Model 1: Qualitative methods are used to help develop quantitative measures and instruments. Model 2: Quantitative methods are used to embellish a primarily qualitative study. Model 3: Qualitative methods are used to help explain quantitative findings. Model 4: Qualitative and quantitative methods are used equally and in parallel. SOURCE: Steckler, McLeroy, Goodman, Bird, and McCormick (1992).]

[Figure 8.2. Examples of Types of Designs Using Morse's (1991) Notation System. QUAL + quan (simultaneous); QUAL -> quan (sequential); QUAN + qual (simultaneous); QUAN -> qual (sequential).]

Types of Designs. As is apparent in Morse's (1991) notation system, she provided names for her approaches such as simultaneous and sequential. Terms such as these, and a few more, have now become types or variants of mixed methods designs. As shown in Table 8.1, authors from diverse disciplines and fields, such as evaluation, nursing, public health, and education, have identified the types of designs that they believe capture the array of possibilities. A brief review of the eight classifications shown in the table indicates that Morse's simultaneous and sequential labels continue to be used routinely. However, new terms have also emerged, such as mixed methods studies based on initiation or development (Greene et al., 1989), on complementary designs (Morgan, 1998), or on mixed model designs (Tashakkori & Teddlie, 1998). Unquestionably, authors have yet to reach consensus on the types of designs that exist, the names for them, or how they might be represented visually.

♦ Criteria Implicit in the Designs

Although the variants of designs may be baffling, distinguishing among them is useful in choosing one for a study. To accomplish this requires examining each design's fundamental assumptions, a line of thinking already used by Morgan (1998). If one could understand the assumptions implicit within the designs, then a researcher could configure a procedure that best meets the needs of the problem and that includes the collection of both quantitative and qualitative data. Morgan identified two core assumptions: that the designs vary in terms of the sequence of collecting quantitative and qualitative data and that they vary in terms of the priority or weight given to each form of data. Other assumptions can be added as well. Tashakkori and Teddlie (1998) suggested that a design also specifies where integration of the data occurs, such as in the statement of the research questions, the data collection, the data analysis, or the interpretation of the results.
Finally, in the recent writings of Greene and Caracelli (1997), we find that some mixed methods writers include a transformational, value- or action-oriented dimension in their studies. Thus, we have another assumption that needs to be included in the matrix for typing and identifying forms of mixed methods designs.

Four factors, as illustrated in Figure 8.3, help researchers to determine the type of mixed methods design for their study: the implementation of data collection, the priority given to quantitative or qualitative research, the stage in the research process at which integration of quantitative and qualitative research occurs, and the potential use of a transformational, value- or action-oriented perspective in the study.

IMPLEMENTATION OF DATA COLLECTION

Implementation refers to the sequence the researcher uses to collect both quantitative and qualitative data. Several authors have discussed this procedure in mixed methods research (Greene et al., 1989; Morgan, 1998; Morse, 1991). The options for implementation of the data collection consist of gathering the information at the same time (i.e., concurrently) or introducing the information in phases over a period of time (i.e., sequentially). When the data are introduced in phases, either the qualitative or the quantitative approach may be gathered first, but the sequence relates to the objectives being sought by the researcher in the mixed methods study.

TABLE 8.1 Classifications of Mixed Methods Designs

Greene, Caracelli, & Graham (1989), evaluation: initiation; expansion; development; complementary; triangulation.

Patton (1990), evaluation: experimental design, qualitative data, and content analysis; experimental design, qualitative data, and statistical analysis; naturalistic inquiry, qualitative data, and statistical analysis; naturalistic inquiry, quantitative data, and statistical analysis.

Morse (1991), nursing: simultaneous triangulation (QUAL + quan; QUAN + qual); sequential triangulation (QUAL -> quan; QUAN -> qual).

Steckler, McLeroy, Goodman, Bird, & McCormick (1992), public health education: Model 1, qualitative methods to develop quantitative measures; Model 2, quantitative methods to embellish a primarily qualitative study; Model 3, qualitative methods to explain quantitative findings; Model 4, qualitative and quantitative methods used equally and in parallel.

Greene & Caracelli (1997), evaluation: component designs (triangulation; complementary; expansion); integrated designs (iterative; embedded or nested; holistic; transformative).

Morgan (1998), health research: complementary designs (qualitative preliminary; quantitative preliminary; qualitative follow-up; quantitative follow-up).

Tashakkori & Teddlie (1998), educational research: mixed method designs (equivalent status, sequential or parallel; dominant-less dominant, sequential or parallel; multilevel use); mixed model designs (I: confirmatory/qualitative data/statistical analysis and inference; II: confirmatory/qualitative data/qualitative inference; III: exploratory/quantitative data/statistical analysis and inference; IV: exploratory/qualitative data/statistical analysis and inference; V: confirmatory/quantitative data/qualitative inference; VI: exploratory/quantitative data/qualitative inference; VII: parallel mixed model; VIII: sequential mixed model).

Creswell (1999), educational policy: convergence model; sequential model; instrument-building model.
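Because Morse's notation recurs throughout Table 8.1 and the discussion that follows, it can help to see how mechanical the notation is. The sketch below encodes it as a small data structure; the class and field names are our own illustrative choices, not an established convention.

from dataclasses import dataclass

@dataclass
class MixedMethodsDesign:
    dominant: str     # the strand given priority, "quan" or "qual"
    secondary: str    # the less-emphasized strand
    sequential: bool  # True for sequential designs, False for simultaneous

    def notation(self) -> str:
        # Uppercase marks the dominant strand; "+" marks simultaneous
        # collection, and "->" marks one strand following the other.
        separator = " -> " if self.sequential else " + "
        return self.dominant.upper() + separator + self.secondary.lower()

print(MixedMethodsDesign("quan", "qual", sequential=False).notation())  # QUAN + qual
print(MixedMethodsDesign("qual", "quan", sequential=True).notation())   # QUAL -> quan

Rendering the labels this way makes plain that the notation captures exactly two decisions, sequence and priority; the remaining criteria discussed below (stage of integration and theoretical perspective) are not expressed by the notation itself.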
When qualitative data collection precedes quantitative data collection, the intent is to first explore the problem under study and then follow up on this exploration with quantitative data that are amenable to studying a large sample so that results might be inferred to a population. Alternatively, when quantitative data precede qualitative data, the intent is to test variables with a large sample first and then to explore in more depth with a few cases during the qualitative phase. In gathering both forms of data concurrently, the researcher seeks to compare the two forms of data in a search for congruent findings (e.g., how the themes identified in the qualitative data collection compare with the statistical results in the quantitative analysis).

[Figure 8.3. Decision Matrix for Determining a Mixed Methods Design. Implementation: no sequence (concurrent); sequential, qualitative first; sequential, quantitative first. Priority: equal; qualitative; quantitative. Integration: at data collection; at data analysis; at data interpretation; with some combination. Theoretical perspective: explicit; implicit.]

The choice of implementation strategy has several consequences for the form of the final written report. When two phases of data collection exist, the researcher typically reports the data collection process in two phases. The report may also include an analysis of each phase of data separately and the integration of information in the discussion or conclusion section of a study. The implementation approach also raises an issue about iterative phases of a design, where a researcher may cycle back and forth between quantitative and qualitative data collection. For instance, the research may begin with a qualitative phase of interviewing, followed by a quantitative phase of survey instrument design and testing with a sample, and continue with a third qualitative phase exploring outlier cases that emerge from the quantitative survey. The implementation decision also calls for clearly identifying the core reasons for collecting both forms of data in the first place and understanding the important interrelationship between the quantitative and qualitative phases of data collection. These reasons need to be clearly articulated in any mixed methods written report.

PRIORITY

A less obvious issue, and one more difficult to decide, is the priority given to quantitative and qualitative research in the mixed methods study (Morgan, 1998). Unlike the implementation decision, which concerns the sequence of data collection, here the focus is on the priority given to quantitative or qualitative research as it occurs throughout the data collection process. This process might be described as including how the study is introduced, the use of literature, the statement of the purpose of the study and the research questions, the data collection, the data analysis, and the interpretation of the findings or results (Creswell, 2002). The mixed methods researcher can give equal priority to both quantitative and qualitative research, emphasize qualitative more, or emphasize quantitative more. This emphasis may result from practical constraints of data collection, the need to understand one form of data before proceeding to the next, or the audience's preference for either quantitative or qualitative research.
In most cases, the decision probably rests on the comfort level of the researcher with one approach as opposed to the other. Operationalizing the decision to give equal or unequal emphasis to quantitative or qualitative research is problematic. For instance, a study may begin with an essentially quantitative orientation, with a focus on variables, specific research questions or hypotheses, and an extensive discussion of the literature that informs the questions. Another study might convey a different priority through the length of discussions, such as the inclusion of extensive discussion of the qualitative data collection with minimal information about the quantitative instruments used in the study. A project might be seen by readers as providing more depth for one method than for the other, as assessed, for example, by the number of pages given to quantitative research (e.g., as in the Hossler & Vesper [1993] article). A graduate student may of necessity delimit the study by including a substantive quantitative analysis and a limited qualitative data collection, a model referred to as the dominant-less dominant model (Creswell, 1994). A final example is a published article that provides equal emphasis on both quantitative and qualitative research as judged by separate sections of approximately equal length and treatment. Unquestionably, in each of these examples, researchers and readers make an interpretation of what constitutes priority, a judgment that may differ from one inquirer to another. On a practical level, however, we can see these different priorities in published mixed methods studies, and researchers need to make informed decisions about the weight or attention given to quantitative and qualitative research during all phases of their research.

STAGE OF INTEGRATION

Of the mixed methods design writers, it has been Tashakkori and Teddlie (1998) and Greene et al. (1989) who have emphasized the importance of considering the stage of the research process at which integration of quantitative and qualitative data collection takes place. Integration can be defined as the combination of quantitative and qualitative research within a given stage of inquiry. For example, integration might occur within the research questions (e.g., both quantitative and qualitative questions are presented), within data collection (e.g., open-ended questions on a structured instrument), within data analysis (e.g., transforming qualitative themes into quantitative items or scales), or in interpretation (e.g., examining the quantitative and qualitative results for convergence of findings). The decision that needs to be made relates to a clear understanding of the sequential model of the research process and the approaches typically taken by quantitative and qualitative researchers at each stage. (As a contrast, see the interactive model advanced by Maxwell and Loomis in Chapter 9 of this volume.)

Examine Table 8.2, which presents four stages in the process of research and the approaches researchers take in both the quantitative and qualitative areas. In quantitative research, investigators ask questions that try to confirm hypotheses or research questions, with a focus on assessing the relationship or association among variables or testing a treatment variable. These questions or hypotheses are assessed using instruments, observations, or documents that yield numerical data.
These data are, in turn, analyzed descriptively or inferentially so as to generate interpretations that are generalizable to a population. Alternatively, in qualitative research, the inquiry is more exploratory, with a strong emphasis on description and with a thematic focus on understanding a central phenomenon. Open-ended data collection helps to address questions of this kind through procedures such as interviews, observations, documents, and audiovisual materials. Researchers analyze these databases for a rich description of the phenomenon as well as for themes, developing a detailed rendering of the complexity of the phenomenon that leads to new questions and to personal interpretations made by the inquirers. Although both the quantitative and qualitative processes described here are oversimplifications of the actual steps taken by researchers, they serve as a baseline of information for discussing where integration might take place in a mixed methods study.

TABLE 8.2 Stages of Integration and Quantitative and Qualitative Approaches

Quantitative. Research problems/questions: confirmatory; outcome based; closed-ended process; predetermined hypotheses. Data collection/method: instruments; observations; documents; score oriented. Data analysis/procedure: descriptive statistics; inferential statistics. Data interpretation: generalization; prediction based; interpretation of theory.

Qualitative. Research problems/questions: exploratory; process based; descriptive; phenomenon of interest; asking questions. Data collection/method: interviews; documents; observations; audiovisual materials; participant-determined process; open-ended process; text/image oriented. Data analysis/procedure: description; identify themes/categories; look for interconnectedness among categories/themes (vertically and horizontally). Data interpretation: particularization (contextualizing); larger sense-making; personal interpretation.

During the phases of problem/question specification, data collection, data analysis, and interpretation, it is possible for the mixed methods researcher to integrate components of both quantitative and qualitative research. Unquestionably, the most typical case is the integration of the two forms of research at the data analysis and interpretation stages, after quantitative data (e.g., scores on instruments) and qualitative data (e.g., participant observations of a setting) have been collected. For example, after collecting both forms of data, the analysis process might begin by transforming the qualitative data into numerical scores (e.g., themes or codes are counted for frequencies) so that they can be compared with quantitative scores. In another study, the analysis might proceed separately for both quantitative and qualitative data, and then the information might be compared in the interpretation (or discussion) stage of the research (see, e.g., Hossler & Vesper, 1993). Less frequently found in mixed methods studies is integration at data collection. A good example of integration at this stage is the use of a few open-ended questions on a quantitative survey instrument. In this approach, both quantitative and qualitative data are collected and integrated in a single instrument of data collection. It is also possible for integration to occur earlier in the process of research, such as in the problem/question stage. In some studies, the researcher might set forth both quantitative and qualitative questions in which the intent is both to test some relationships among variables and to explore some general questions.
This approach is seen in studies where a concurrent form of data collection exists and the researcher is interested in triangulating (Mathison, 1988) data from different sources as a major intent of the research. Finally, it should be noted that integration can occur at multiple stages. Data from a survey that contains both quantitative and qualitative items might be integrated in the analysis stage by transforming the qualitative data into scores so that the information can be easily compared with the quantitative scores.

Deciding on the stage or stages at which to integrate depends on the purpose of the research, the ease with which the integration can occur (e.g., data collection integration is easier and cleaner than data analysis integration), the researcher's understanding of the stages of research, and the intent or purpose of the particular study. What clouds this decision is the permeability of the categories displayed in Table 8.2. Data collection is a good case in point. What constitutes quantitative or qualitative data collection is open to debate; indeed, LeCompte and Schensul (1999), and many ethnographers, consider both quantitative and qualitative data collection as options for field data. A similar concern might be raised about the fine distinctions being made between quantitative and qualitative research problems and questions. Many inquirers actually go back and forth between confirming and exploring in any given study, although qualitative inquirers refrain from specifying variables in their questions and attempt to keep the study as open as possible to best learn from participants. Despite these potential issues, the mixed methods researcher needs to design a study with a clear understanding of the stage or stages at which the data will be integrated and the form this integration will take.

THEORETICAL PERSPECTIVES

One issue raised by qualitative researchers in the social sciences, especially during the 1990s (Creswell, 2002), is that all inquiry is theoretically driven by assumptions that researchers bring to their studies. At an informal level, the theoretical perspective reflects researchers' personal stances toward the topics they are studying, a stance based on personal history, experience, culture, gender, and class perspectives. At a more formal level, social science researchers bring to their inquiries a formal lens by which they view their topics, including gendered perspectives (e.g., feminist theory), cultural perspectives (e.g., racial/ethnic theory), lifestyle orientation (e.g., queer theory), critical theory perspectives, and class and social status views. Only recently have these theoretical perspectives been discussed in the mixed methods research design literature. As recently as 1997, Greene and Caracelli discussed the use of a theoretical lens in mixed methods research. They called such a lens the use of transformative designs, which "give primacy to the value-based and action-oriented dimensions of different inquiry traditions" (p. 24). Greene and Caracelli (1997) further explicated the nature of transformative designs when they wrote:

Designs are transformative in that they offer opportunities for reconfiguring the dialog across ideological differences and, thus, have the potential to restructure the evaluation context. . . .
Diverse methods most importantly serve to include a broader set of interests in the resulting knowledge claims and to strengthen the likely effectiveness of action solutions. (p. 24)

The commonality across transformative studies is ideological: no matter what the domain of inquiry, the ultimate goal of the study is to advocate for change. The transformative element of the research can either be experienced by the participants as they participate in the research or follow the study's completion when the research spawns changes in action, policy, or ideology. Transformative designs are found in evaluative research as well as in health care. Issues as diverse as class, race, gender, feminist scholarship, and postmodernist thinking often inform transformative designs.

To illustrate how this design might work, a researcher might examine the inequity that exists in an organization's salary structure and that marginalizes women in the organization. The issue of inequity frames the study, and the inquirer proceeds to first gather survey data measuring equity issues in the organization. This initial quantitative phase is then followed by a qualitative phase in which several in-depth case studies are developed to explore the quantitative results in more detail. These case studies might examine the issue of inequity from the standpoint of managers, middle managers, and workers on an assembly line. In the end, the researcher is interested in bringing about change in the salary structure, in using the research as evidence of needed change, and in advocating for change. Also, through the research, the dialogue among organizational members is "transformed" to focus on issues of inequity.

The use of a theoretical lens may be explicit or implicit within a mixed methods study. Those espousing the transformative model encourage researchers to make the lens explicit in the study, although Greene and Caracelli (1997) were not specific about how this might be done. However, examining the use of a theoretical or ideological lens within other studies, we can see that it often informs the purpose and the questions being asked. These purposes may be to promote equity and justice in policies and practices so as to create a personal, social, institutional, and/or organizational impact (as addressed by Newman, Ridenour, Newman, & DeMarco in Chapter 6 of this volume) or to address specific questions related to oppression, domination, alienation, and inequality. A transformative model would also indicate the participants who will be studied (e.g., women, the marginalized, certain groups that are culturally and ethnically diverse), how the data collection will proceed (e.g., typically collaboratively so as not to marginalize the study participants further), and the conclusion of the study as a call for advocacy and change to improve society or the lives of the individuals being studied. In summary, the nature of transformative mixed methods research is such that, in both perspective and outcomes, it is dedicated to promoting change at levels ranging from the personal to the political. Furthermore, it is possible to conduct any quantitative, qualitative, or mixed methods study with a transformative or advocacy purpose.

♦ Six Major Designs

The four criteria—implementation, priority, integration, and theoretical perspective—can be useful in specifying six different types of major designs that a researcher might employ.
This short list of designs might not be as inclusive of types as those identified by other writers (see the types introduced in Table 8.1), but arguably, all variants of designs might be subsumed within these six types. Moreover, by identifying a small number of generic types, it can be suggested that the mixed methods researcher has the flexibility to choose and innovate within the types to fit a particular research situation. These six types build on the four decision criteria and integrate them into specific designs with labels that we believe capture the variants of the design. An overview of the types of designs by the four criteria is seen in Table 8.3. For each design, we identify its major characteristics, examples of variants on the design, and strengths and weaknesses in implementing it. In addition, a visual presentation is made for each design type and annotated with specific steps to be undertaken in the process of research. The visuals are shown in Figures 8.4 and 8.5.

[Table 8.3. Types of Mixed Methods Designs: the six major designs compared by the four criteria of implementation, priority, integration, and theoretical perspective.]

[Figure 8.4. Sequential Designs. (a) Sequential explanatory design: QUAN data collection -> QUAN data analysis -> qual data collection -> qual data analysis -> interpretation of entire analysis. (b) Sequential exploratory design: QUAL data collection -> QUAL data analysis -> quan data collection -> quan data analysis -> interpretation of entire analysis. (c) Sequential transformative design: e.g., QUAN -> qual, conducted within a vision, advocacy, ideology, or framework.]

[Figure 8.5. Concurrent Designs. (a) Concurrent triangulation design: QUAN and QUAL data collection and analysis conducted concurrently, with the data results compared. (b) Concurrent nested design: qual embedded within QUAN, or quan embedded within QUAL, with analysis of findings. (c) Concurrent transformative design: QUAN + QUAL, or quan + QUAL, conducted within a vision, advocacy, ideology, or framework.]

SEQUENTIAL EXPLANATORY DESIGN

The sequential explanatory design is the most straightforward of the six major mixed methods designs. It is characterized by the collection and analysis of quantitative data followed by the collection and analysis of qualitative data. Priority is typically given to the quantitative data, and the two methods are integrated during the interpretation phase of the study. The steps of this design are pictured in Figure 8.4a. The implementation of this design may or may not be guided by a specific theoretical perspective.
The purpose of the sequential explanatory design is typically to use qualitative results to assist in explaining and interpreting the findings of a primarily quantitative study. It can be especially useful when unexpected results arise from a quantitative study (Morse, 1991). In this case, the qualitative data collection that follows can be used to examine these surprising results in more detail. In an important variation of this design, the qualitative data collection and analysis is given the priority. In this case, the initial quantitative phase of the study may be used to characterize individuals along certain traits of interest related to the research question. These quantitative results can then be used to guide the purposeful sampling of participants for a primarily qualitative study.

The straightforward nature of this design is one of its main strengths. It is easy to implement because the steps fall into clear, separate stages. In addition, this design feature makes it easy to describe and report. In fact, this design can be reported in two distinct phases with a final discussion that brings the results together. The sequential explanatory design is also useful when a quantitative researcher wants to further explore quantitative findings. Furthermore, the implementation of qualitative data collection and analysis within this design framework can be comfortable for quantitative researchers, and therefore it can provide an effective introduction to qualitative methods for researchers unfamiliar with the techniques. The main weakness of this design is the length of time involved in data collection to complete the two separate phases. This is especially a drawback if the two phases are given equal priority. Therefore, a sequential explanatory design giving equal priority to both qualitative and quantitative methods may be a more applicable approach for a research program than for a single study.

SEQUENTIAL EXPLORATORY DESIGN

The sequential exploratory design has many features similar to the sequential explanatory design. It is conducted in two phases, with the priority generally given to the first phase, and it may or may not be implemented within a prescribed theoretical perspective (see Figure 8.4b). In contrast to the sequential explanatory design, this design is characterized by an initial phase of qualitative data collection and analysis followed by a phase of quantitative data collection and analysis. Therefore, the priority is given to the qualitative aspect of the study. The findings of these two phases are then integrated during the interpretation phase.
At the most basic level, the purpose of this design is to use quantitative data and results to assist in the interpretation of qualitative findings. Unlike the sequential explanatory design, which is better suited to explaining and interpreting relationships, the primary focus of this design is to explore a phenomenon. Morgan (1998) suggested that this design is appropriate when testing elements of an emergent theory resulting from the qualitative phase and that it can also be used to generalize qualitative findings to different samples. Similarly, Morse (1991) indicated that one purpose for selecting this design would be to determine the distribution of a phenomenon within a chosen population. Finally, the sequential exploratory design is often discussed as the design used when a researcher develops and tests an instrument (see, e.g., Creswell, 1999).

One possible variation on this design is to give the priority to the second, quantitative phase. Such a design might be undertaken when a researcher intends to conduct a primarily quantitative study but needs to begin with initial qualitative data collection so as to identify or narrow the focus of the possible variables. In addition, it is possible to give equal weight to the quantitative and qualitative phases, but such an approach may be too demanding for a single study due to time constraints, resource limitations, and the limitations of a researcher's experience.

The sequential exploratory design has many of the same advantages as the sequential explanatory design. Its two-phase approach makes it easy to implement and straightforward to describe and report. It is useful to a researcher who wants to explore a phenomenon but also wants to expand on the qualitative findings. This design is especially advantageous when a researcher is building a new instrument. In addition, this design could make a largely qualitative study more palatable to a quantitatively oriented adviser, committee, or research community that may be unfamiliar with the naturalistic tradition. As with the sequential explanatory design, the sequential exploratory design also requires a substantial length of time to complete both data collection phases, which can be a drawback for some research situations. In addition, the researcher may find it difficult to build from the qualitative analysis to the subsequent quantitative data collection.

SEQUENTIAL TRANSFORMATIVE DESIGN

As with the previously described sequential designs, the sequential transformative design has two distinct data collection phases, one following the other (see Figure 8.4c). However, in this design, either method may be used first, and the priority may be given to either the quantitative or the qualitative phase (or even to both if sufficient resources are available). In addition, the results of the two phases are integrated during the interpretation phase. Unlike the sequential exploratory and explanatory designs, the sequential transformative design definitely has a theoretical perspective present to guide the study. The aim of this theoretical perspective, whether it be a conceptual framework, a specific ideology, or advocacy, is more important in guiding the study than is the use of methods alone. The purpose of a sequential transformative design is to employ the methods that will best serve the theoretical perspective of the researcher.
By using two phases, a sequential transformative researcher may be able to give voice to diverse perspectives, to better advocate for participants, or to better understand a phenomenon or process that is changing as a result of being studied. The variations of this design are best described by the diverse range of possible theoretical perspectives rather than by the range of possible methodological choices.

The sequential transformative design shares the methodological strengths and weaknesses of the other two sequential mixed methods designs. Its use of distinct phases facilitates its implementation, description, and the sharing of results, although it also requires the time to complete two data collection phases. More important, this design places mixed methods research within a transformative framework. Therefore, it may be more appealing and acceptable to researchers already using a transformative framework within one distinct methodology, such as qualitative research. It also includes the strengths typically found when a theoretical perspective is used in other research traditions. Because little has been written to date on this design, however, one weakness is that there is little guidance on how to use the transformative vision to guide the methods. Likewise, it may be unclear how to move from the analysis of the first phase to the data collection of the second phase.

CONCURRENT TRIANGULATION DESIGN

The concurrent triangulation design is probably the most familiar of the six major mixed methods designs (see Figure 8.5a). It is selected when a researcher uses two different methods in an attempt to confirm, cross-validate, or corroborate findings within a single study (Greene et al., 1989; Morgan, 1998; Steckler et al., 1992). This design generally uses separate quantitative and qualitative methods as a means to offset the weaknesses inherent in one method with the strengths of the other. In this case, the quantitative and qualitative data collection are concurrent, happening during one phase of the research study. Ideally, the priority would be equal between the two methods, but in practical application, priority may be given to either the quantitative or the qualitative approach. This design usually integrates the results of the two methods during the interpretation phase. This interpretation may either note the convergence of the findings as a way to strengthen the knowledge claims of the study or explain any lack of convergence that results.

This traditional mixed methods design is advantageous because it is familiar to most researchers and can result in well-validated and substantiated findings. In addition, the concurrent data collection results in a shorter data collection period than that of the sequential designs. This design also has a number of limitations. It requires great effort and expertise to adequately study a phenomenon with two separate methods. It can also be difficult to compare the results of two analyses using data of different forms. In addition, it may be unclear to a researcher how to resolve discrepancies that arise in the results.

Other variations of this design also exist. For example, it would be possible for a researcher to integrate the two methods earlier in the research process, such as during the analysis phase.
This would require the transformation of the data from a quantitative to a qualitative form or from a qualitative to a quantitative form. While such transformations have been discussed in the literature (see, e.g., Caracelli & Greene, 1993; Tashakkori & Teddlie, 1998), there is still limited guidance on how to conduct and analyze such transformations in practice.

CONCURRENT NESTED DESIGN

Like the concurrent triangulation design, the concurrent nested design can be identified by its use of one data collection phase during which quantitative and qualitative data are both collected simultaneously (see Figure 8.5b). Unlike the traditional triangulation design, a nested design has a predominant method that guides the project. Given less priority, a method (quantitative or qualitative) is embedded, or nested, within the predominant method (qualitative or quantitative). This nesting may mean that the embedded method addresses a question different from that addressed by the dominant method or that the embedded method seeks information from different levels (the analogy to hierarchical analysis in quantitative research is helpful in conceptualizing these levels; see Tashakkori & Teddlie, 1998). The data collected from the two methods are mixed during the analysis phase of the project. This design may or may not have a guiding theoretical perspective.

The concurrent nested design may serve a variety of purposes. Often, it is used so that a researcher may gain broader perspectives than would result from using the predominant method alone. For example, Morse (1991) noted that a primarily qualitative design could embed some quantitative data to enrich the description of the sample participants. Likewise, she described how qualitative data could be used to describe an aspect of a quantitative study that cannot be quantified. In addition, a concurrent nested design may be employed when a researcher chooses to use different methods to study different groups or levels. For example, if an organization is being studied, then employees could be surveyed quantitatively, managers could be interviewed qualitatively, entire divisions could be analyzed with quantitative data, and so forth. Tashakkori and Teddlie (1998) described this approach as a multilevel design. Finally, one method could be used within a framework of the other method, such as when a researcher designs and conducts an experiment but uses case study methodology to examine each of the treatment conditions.

This mixed methods design has many strengths. A researcher is able to collect the two types of data simultaneously, during a single data collection phase. The study gains the advantages of both quantitative and qualitative data. In addition, by using the two different methods in this fashion, a researcher can gain perspectives from the different types of data or from different levels within the study.

There are also limitations to consider when choosing this design. The data need to be transformed in some way so that they can be integrated within the analysis phase of the research, and little has been written to date to guide a researcher through this process. In addition, there is little advice to be found on how a researcher should resolve discrepancies that occur between the two types of data.
Because the two methods are unequal in their priority, this design also results in unequal evidence within a study, which may be a disadvantage when interpreting the final results.

CONCURRENT TRANSFORMATIVE DESIGN

As with the sequential transformative design, the concurrent transformative design is guided by the researcher's use of a specific theoretical perspective (see Figure 8.5c). This perspective can be based on an ideology such as critical theory, advocacy, or participatory research, or on a conceptual or theoretical framework. The perspective is reflected in the purpose or research questions of the study (see Newman et al., Chapter 6, this volume). It is the driving force behind all methodological choices, such as defining the problem; identifying the design and data sources; and analyzing, interpreting, and reporting results throughout the research process (see Mertens, Chapter 5, this volume). The choice of a concurrent design (whether triangulation or nested) is made to facilitate this perspective. For example, the design may be nested so that diverse participants are given a voice in the change process of an organization that is studied primarily quantitatively. It may involve a triangulation of quantitative and qualitative data to best converge information so as to provide evidence for an inequality of policies in an organization.

Thus, the concurrent transformative design may take on the design features of either a triangulation or a nested design: the two types of data are collected at the same time during one data collection phase and may have equal or unequal priority. The integration of these different data would most often occur during the analysis phase, although integration during the interpretation phase would be a possible variation. Because the concurrent transformative design shares common features with the triangulation and nested designs, it also shares their specific strengths and weaknesses. However, it has the added advantage of positioning mixed methods research within a transformative framework, which may make it especially appealing to those qualitative or quantitative researchers already using such a framework to guide their inquiry.

♦ Issues in Implementing Designs

Although there are several discussions currently under way among those writing about mixed design applications, issues related to implementation fall into three categories: whether the design needs to be lodged within a paradigm perspective; how data analysis varies by design, and the use of computer programs that handle both quantitative and qualitative data; and the placement of design procedures within a study, especially the elaboration of visual presentations of the procedures.

PARADIGMS AND DESIGNS

Substantial discussion has taken place in the mixed methods literature about the "compatibility" of quantitative and qualitative research and whether paradigms of research and methods can be mixed. For example, can a qualitative philosophical perspective, such as the existence of multiple realities, be combined with a quantitative study that uses a closed-ended survey to gather data and restrict the perspectives of the participants? The linking of paradigms and methods has been referred to as the "paradigm debate" (Cook & Reichardt, 1979; Reichardt & Rallis, 1994).
Although this debate has largely subsided due to the use of multiple methods regardless of paradigm perspective, the discussion helped to raise the issue of whether philosophical perspectives should be explicitly stated and acknowledged in mixed methods studies. More specific to the point of this chapter is this question: Should a philosophical position be embraced by the author of a mixed methods study, and will this position vary by type of design?

Several authors (e.g., Patton, 1990; Rossman & Wilson, 1985; Tashakkori & Teddlie, 1998) have suggested that pragmatism is the foundation for these designs. This philosophy, drawn from Deweyan ideas and most recently articulated by Cherryholmes (1992), maintains that researchers should be concerned with applications, with what works, and with solutions to problems. In light of this, these authors have called for the use of both quantitative and qualitative methods to best understand research problems. However, as applied to the six designs advanced in this chapter, a single philosophical framework does not work with all designs. If one takes the perspective that the mixed methods researcher should be explicit about the paradigm or philosophy behind his or her design, then a number of philosophical perspectives can enter into the study. Today, multiple paradigms exist for our inquiries, such as positivism, postpositivism, interpretivism, and participatory/advocacy perspectives (Denzin & Lincoln, 2000). In a sequential explanatory design, strongly based on quantitative research, the paradigm stated may be postpositivist, while in a sequential exploratory design, with the lead taken by qualitative research, the paradigm may be more interpretive or participatory/advocacy oriented. A triangulation design may use several paradigms as a framework for the study. A transformative design may employ qualitative, quantitative, or mixed methods so long as the ideological lens of advocacy or participation is a central element in shaping the purpose, the questions, the collaborative nature of data collection and analysis, and the interpretation and reporting of results (see Mertens's chapter in this volume [Chapter 5]). While Greene and Caracelli (1997) recommended that researchers employing mixed methods research be explicit about their paradigms, we can now extend this suggestion to a consideration of which paradigm is best given the choice of a design for the mixed methods study.

DATA ANALYSIS AND DESIGNS

Approaches to data analysis also need to be sensitive to the design being implemented in a mixed methods study. Different approaches have been suggested for integrating quantitative and qualitative data, exploring how the information might be transformed or how outlier cases might be analyzed (Caracelli & Greene, 1993). Further approaches to analyzing data are found in Tashakkori and Teddlie (1998), Creswell (2002), and Onwuegbuzie and Teddlie's chapter in this volume (Chapter 13). When the six types of designs are considered, we see in the sequential designs that the data analysis typically proceeds independently for the quantitative and qualitative phases. The researcher relies on standard data analysis approaches (e.g., descriptive and inferential analysis of quantitative data, coding and thematic analysis of qualitative data).
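In a sequential design, the analytic bridge between the independently analyzed phases is typically a selection step: first-phase results determine which cases the second phase pursues (as in the "following up on outliers" procedure listed in Table 8.4 below). The following is a minimal sketch of that selection step only; the data, variable names, and cutoff are invented for illustration, not a prescribed implementation:

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical phase-one survey results; all names and values are invented.
survey = pd.DataFrame({
    "participant_id": [1, 2, 3, 4, 5, 6],
    "pretest": [52, 61, 47, 70, 58, 66],
    "posttest": [55, 90, 50, 72, 30, 69],
})

# Standard quantitative analysis for phase one: a simple regression.
model = sm.OLS(survey["posttest"], sm.add_constant(survey["pretest"])).fit()

# Residuals flag the cases the quantitative model explains poorly; the most
# extreme ones become the purposeful sample for phase-two interviews.
survey["residual"] = model.resid
follow_up = survey.loc[survey["residual"].abs().nlargest(2).index]
print(follow_up[["participant_id", "residual"]])
```

Only the hand-off between phases is automated here; each phase otherwise proceeds with its own standard quantitative or qualitative analysis, as described above.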
Alternatively, in the concurrent designs, the analysis requires some data transformation so as to integrate and compare dissimilar databases (e.g., quantitative scales are compared with qualitative themes, or qualitative themes are converted into scores). Other options exist as well, as seen in Table 8.4, which describes a range of analytic procedures and relates them to the six designs.

A related issue is whether a computer program should be used in mixed methods research and which programs are amenable to the analysis of both quantitative and qualitative data (see Bazeley's discussion of computer data analysis in Chapter 14 of this volume). Several qualitative data analysis programs allow for the import and export of quantitative data in table formats (Creswell & Maietta, 2002). Programs such as ETHNOGRAPH 5, HyperRESEARCH 2.5, Classic NUD.IST Versions 4 and 5, NVIVO, ATLAS.ti, and WinMAX allow the user to move to and from quantitative and spreadsheet packages with direct links into document identification numbers. For example, it is now possible to create a numerical SPSS file at the same time that a text file is being developed and to merge the data using qualitative software packages.
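To make the first procedure in Table 8.4, quantifying qualitative data, concrete: the sketch below tallies qualitative codes per participant and merges the counts with a numeric survey file, much as the packages just described export tables to statistical software. This is only an illustrative sketch; the codes, scores, and column names are all invented:

```python
import pandas as pd

# Hypothetical coded interview data: one row per coded passage.
codes = pd.DataFrame({
    "participant_id": [1, 1, 2, 2, 2, 3],
    "code": ["isolation", "support", "support", "support", "isolation", "support"],
})

# Quantify the qualitative data: count how often each code appears per participant.
code_counts = pd.crosstab(codes["participant_id"], codes["code"])

# Hypothetical quantitative file (e.g., survey scale scores exported from SPSS).
scores = pd.DataFrame({"participant_id": [1, 2, 3],
                       "satisfaction": [3.2, 4.5, 2.8]})

# Merge the two data sets so statistical trends and code frequencies can be
# compared directly in one analysis.
merged = scores.merge(code_counts, left_on="participant_id", right_index=True)
print(merged)
```

The reverse transformation, qualifying quantitative data, runs the other way: factors derived from the numeric data are treated as themes and compared with the themes from the qualitative analysis.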
TABLE 8.4 Design and Data Analysis/Interpretation

Type of Mixed Methods Design: Concurrent (triangulation, nested, transformative)
Examples of Analytic Procedures:
• Quantifying qualitative data: Code qualitative data, assign numbers to codes, and record the number of times codes appear as numeric data. Descriptively analyze quantitative data for frequency of occurrence. Compare the two data sets.
• Qualifying quantitative data: Factor-analyze the quantitative data from questionnaires. These factors then become themes. Compare these themes to themes analyzed from qualitative data.
• Comparing results: Directly compare the results from qualitative data collection to the results from quantitative data collection. Support statistical trends by qualitative themes or vice versa.
• Consolidating data: Combine qualitative and quantitative data to form new variables. Compare original quantitative variables to qualitative themes to form new quantitative variables. (Caracelli & Greene, 1993)
• Examining multilevel data: Conduct a survey at the student level. Gather qualitative data through interviews at the class level. Survey the entire school at the school level. Collect qualitative data at the district level. Information from each level builds to the next level. (Tashakkori & Teddlie, 1998)

Type of Mixed Methods Design: Sequential (explanatory, exploratory, transformative)
Examples of Analytic Procedures:
• Following up on outliers or extreme cases: Gather quantitative data and identify outlier or residual cases. Collect qualitative data to explore the characteristics of these cases. (Caracelli & Greene, 1993)
• Explaining results: Conduct a quantitative survey to identify how two or more groups compare on a variable. Follow up with qualitative interviews to explore the reasons why these differences were found.
• Using a typology: Conduct a quantitative survey, and develop factors through a factor analysis. Use these factors as a typology to identify themes in qualitative data such as observations and interviews. (Caracelli & Greene, 1993)
• Locating an instrument: Collect qualitative data and identify themes. Use these themes as a basis for locating instruments that use parallel concepts to the qualitative themes.
• Developing an instrument: Obtain themes and specific statements from individuals that support the themes. During the next phase, use these themes and statements to create scales and items in a questionnaire. Alternatively, look for existing instruments that can be modified to fit the themes and statements found in the qualitative exploratory phase of the study. After developing the instrument, test it out with a sample of a population.
• Forming categorical data: Site-level characteristics (e.g., different ethnic groups) gathered in an ethnography during the first phase of a study become a categorical variable during a second-phase correlational or regression study. (Caracelli & Greene, 1993)
• Using extreme qualitative cases: Qualitative data cases that are extreme in a comparative analysis are followed by quantitative surveys during a second phase. (Caracelli & Greene, 1993)

SOURCE: Adapted from Creswell (2002).

PROCEDURES AND DESIGNS

With the discussion of mixed methods research designs have emerged additional questions about how researchers should conceptualize and present their designs and how they can articulate them so that proposal reviewers, editorial board reviewers, and conference attendees can easily understand the procedures involved. With the complex features often found in these designs, it is not surprising that writers have presented figures in their studies that portray the general flow of procedures, such as those advanced by Steckler et al. (1992) and shown in Figure 8.1. But such visualizations do not go far enough. Added to these visual models can be the procedures employed by the researcher, so that readers see the visual picture and learn about the accompanying procedures involved in each step. Thus, the discussion in the mixed methods literature about visual models (see Steckler et al., 1992) and the steps in the research process (as discussed by Creswell, 1999) can be combined. Such a combination of ideas in a single figure is illustrated in Figure 8.6.

[Figure 8.6. Elaborated Visualization for Mixed Methods Procedures. Phase I, Qualitative Research (Year 1): unstructured interviews with 50 participants, 8 observations at the site, and 16 documents; text analysis using NUD.IST 6.0 (N6); development of codes and themes for each site. Phase II, Quantitative Research (Year 2): create an instrument with approximately 80 items plus demographics; administer the survey to 500 individuals; determine the factor structure of the items and conduct reliability analysis for the scales; determine how groups differ using an ANOVA test.]

In this figure, we see a two-phase mixed methods study. Three levels are introduced in the visualization of procedures. First, readers find the phases organized into qualitative research followed by quantitative research for each year of the project. Then, the more general procedures of data collection and analysis are presented in the circles and boxes on the left, and, finally, the more specific procedures are identified on the right. Arrows help readers to see how the two phases are integrated into a sequential process of research.
Although Figure 8.6 is only for the sequential exploratory model in our designs, one can extrapolate the basic design features to the other design possibilities and emerge with visualizations of designs that are both useful and clear to readers and reviewers of mixed methods studies.

[Figure 8.7. Proposed Visualization of the Concurrent Triangulation Design Used in Hossler and Vesper (1993): simultaneous QUAN and qual data collection and analysis, with the QUAN and qual results brought together in a combined data interpretation.]

♦ Returning to the Hossler and Vesper Mixed Methods Study

The Hossler and Vesper (1993) study that began our discussion can now be advanced in a visual diagram and assessed in terms of the four criteria and the six types of designs. As mentioned earlier, we can now see the Hossler and Vesper study as a concurrent triangulation design with priority given to quantitative research. The study began with quantitative questions (i.e., "To what extent are parents saving for postsecondary education? What factors are associated with parental savings? Do certain kinds of information appear to influence parental savings?" [p. 141]), but the data were collected concurrently in the form of surveys and interviews. The authors then analyzed the survey data separately from the interview data. Their intent was to triangulate the findings, which readers will find in the discussion section. They did not use a theoretical framework to frame the study, and they did not provide a visualization of their research procedures. If they had incorporated such a visualization, it might have looked like the representation shown in Figure 8.7, with simultaneous quantitative and qualitative data collection and analysis and an interpretation in which the data converge. If the study were presented in a "box text" diagram, as shown in Box 8.1 and as used by writers of mixed methods research designs (e.g., see Tashakkori & Teddlie, 1998), then the essential information that marks it as a mixed methods project could be illustrated through information about the methodology, the participants and data collection, the data analysis, and the discussion. Further information could be supplied about the four decision criteria addressed by the researchers.

BOX 8.1

Hossler, D., & Vesper, N. (1993). An exploratory study of the factors associated with parental saving for postsecondary education. Journal of Higher Education, 64(2), 140-165.

This article provides an example of how qualitative and quantitative methods can be combined in educational research. As the title of the article suggests, two methodologies are used, and rationales for the use of each method are provided to readers. The primary goal of the research is to address the dearth of extant research in this area. The principal methodology of this study was quantitative with a strong qualitative complement. Student and parent data garnered from a longitudinal study involving multiple surveys over a 3-year time line served as the basis for the logistic regression that was used to identify the specific factors most strongly associated with parental saving for postsecondary education.
Additional insights into the phenomenon of interest were gained from interviews of a small subsample of students and parents who were interviewed five times during the 3-year duration of the study. Interviews were used both to explore emerging themes in greater detail and to triangulate findings.

Components of data collection: A total of 182 students and parents participated. All participants completed surveys 10 times over a 4-year span. A total of 56 students and their parents from eight high schools in the sample participated in interviews four times each year while the students were in their junior and senior years in high school. Development of both the surveys and the interview protocols was an iterative process.

Data analysis: Quantitative data were statistically analyzed via logistic regression, with significant discussion of the coding of independent and dependent variables. Qualitative data were analyzed via thematic analysis, with data being unitized and categorized.

Discussion and inferences: Both quantitative and qualitative results were discussed jointly in the discussion section of the article. Significant factors identified by the logistic regression were corroborated with the themes that had emerged from the interviews. Areas of overlap between the analyses were discussed, although there was little mention of any inconsistencies in the data. Triangulating the results from the survey and interview data allowed the authors to posit a model of parental saving.

Priority: QUANTITATIVE
Sequence: qual + QUAN simultaneously
Integration: data collection, data analysis, and inference stages
Transformative: not present

Strengths: Combining methods of data collection and analysis allowed for the construction of more sensitive survey instruments as well as a better and broader understanding of the phenomenon of interest. Directions for intervention and policy development were identified and discussed.

Weaknesses: It was difficult to separate the quantitative and qualitative components in the discussion section. Implementing a mixed methods design would be difficult if contradictory quantitative and qualitative data were found.

This review of the Hossler and Vesper study highlights how discussions about mixed methods designs need to consider the underlying decisions that go into selecting a design; the type of design being used; and issues related to paradigms, data analysis, and the delineation of procedures using visuals. Undoubtedly, more issues will emerge about designing mixed methods studies, and a periodic assessment is needed to provide an ongoing synthesis of the literature. In this way, we can continue to explore the methodology of mixed methods research and present additional guidelines for both novice and experienced researchers as they continue to develop, write, and publish these studies.

References

Bryman, A. (1988). Quantity and quality in social science research. London: Routledge.

Campbell, D., & Fiske, D. (1959). Convergent and discriminant validation by the multitrait-multimethod matrix. Psychological Bulletin, 56, 81-105.

Caracelli, V. J., & Greene, J. C. (1993). Data analysis strategies for mixed-method evaluation designs. Educational Evaluation and Policy Analysis, 15(2), 195-207.

Cherryholmes, C. H. (1992, August-September). Notes on pragmatism and scientific realism. Educational Researcher, 21, 13-17.

Cook, T. D., & Reichardt, C. S. (Eds.). (1979). Qualitative and quantitative methods in evaluation research. Beverly Hills, CA: Sage.
Creswell, J. W. (1994). Research design: Qualitative and quantitative approaches. Thousand Oaks, CA: Sage.

Creswell, J. W. (1999). Mixed method research: Introduction and application. In G. J. Cizek (Ed.), Handbook of educational policy (pp. 455-472). San Diego: Academic Press.

Creswell, J. W. (2002). Educational research: Planning, conducting, and evaluating quantitative and qualitative approaches to research. Upper Saddle River, NJ: Merrill/Pearson Education.

Creswell, J. W., Goodchild, L., & Turner, P. (1996). Integrated qualitative and quantitative research: Epistemology, history, and designs. In J. Smart (Ed.), Higher education: Handbook of theory and research (Vol. 11, pp. 90-136). New York: Agathon Press.

Creswell, J. W., & Maietta, R. C. (2002). Qualitative research. In N. J. Salkind (Ed.), Handbook of social research (2nd ed., pp. 143-184). Thousand Oaks, CA: Sage.

Creswell, J. W., & Miller, G. A. (1997). Research methodologies and the doctoral process. In L. Goodchild, K. E. Green, E. L. Katz, & R. C. Kluever (Eds.), Rethinking the dissertation process: Tackling personal and institutional obstacles (New Directions for Higher Education, No. 99, pp. 33-46). San Francisco: Jossey-Bass.

Datta, L. (1994). Paradigm wars: A basis for peaceful coexistence and beyond. In C. S. Reichardt & S. F. Rallis (Eds.), The qualitative-quantitative debate: New perspectives (New Directions for Program Evaluation, No. 61, pp. 53-70). San Francisco: Jossey-Bass.

Denzin, N., & Lincoln, Y. S. (2000). Handbook of qualitative research (2nd ed.). Thousand Oaks, CA: Sage.

Fielding, N. G., & Fielding, J. L. (1986). Linking data. Newbury Park, CA: Sage.

Glik, D. C., Parker, K., Muligande, G., & Hategikamana, D. (1986-1987). Integrating qualitative and quantitative survey techniques. International Quarterly of Community Health Education, 7(3), 181-200.

Greene, J. C., & Caracelli, V. J. (Eds.). (1997). Advances in mixed-method evaluation: The challenges and benefits of integrating diverse paradigms (New Directions for Evaluation, No. 74). San Francisco: Jossey-Bass.

Greene, J. C., Caracelli, V. J., & Graham, W. F. (1989). Toward a conceptual framework for mixed-method evaluation designs. Educational Evaluation and Policy Analysis, 11, 255-274.

Hossler, D., & Vesper, N. (1993). An exploratory study of the factors associated with parental saving for postsecondary education. Journal of Higher Education, 64(2), 140-165.

Hugentobler, M. K., Israel, B. A., & Schurman, S. J. (1992). An action research approach to workplace health: Integrating methods. Health Education Quarterly, 19(1), 55-76.

Jick, T. D. (1979). Mixing qualitative and quantitative methods: Triangulation in action. Administrative Science Quarterly, 24, 602-611.

LeCompte, M. D., & Schensul, J. J. (1999). Designing and conducting ethnographic research (Ethnographer's Toolkit, No. 1). Walnut Creek, CA: AltaMira.

Mathison, S. (1988). Why triangulate? Educational Researcher, 17(2), 13-17.

Miles, M. B., & Huberman, A. M. (1994). Qualitative data analysis (2nd ed.). Thousand Oaks, CA: Sage.

Morgan, D. (1998). Practical strategies for combining qualitative and quantitative methods: Applications to health research. Qualitative Health Research, 8, 362-376.

Morse, J. M. (1991). Approaches to qualitative-quantitative methodological triangulation. Nursing Research, 40, 120-123.

Patton, M. Q. (1990). Qualitative evaluation and research methods. Newbury Park, CA: Sage.

Reichardt, C. S., & Rallis, S. F. (1994).
The relationship between the qualitative and quantitative research traditions. In C. S. Reichardt & S. F. Rallis (Eds.), The qualitative-quantitative debate: New perspectives (New Directions for Program Evaluation, No. 61, pp. 5-11). San Francisco: Jossey-Bass.

Rossman, G. B., & Wilson, B. L. (1985). Numbers and words: Combining quantitative and qualitative methods in a single large-scale evaluation study. Evaluation Review, 9, 627-643.

Rossman, G. B., & Wilson, B. L. (1991). Numbers and words revisited: Being "shamelessly eclectic." Washington, DC: Office of Educational Research and Improvement. (ERIC Document Reproduction Service No. 337 235)

Sieber, S. D. (1973). The integration of field work and survey methods. American Journal of Sociology, 78, 1335-1359.

Steckler, A., McLeroy, K. R., Goodman, R. M., Bird, S. T., & McCormick, L. (1992). Toward integrating qualitative and quantitative methods: An introduction. Health Education Quarterly, 19(1), 1-8.

Swanson-Kauffman, K. M. (1986). A combined qualitative methodology for nursing research. Advances in Nursing Science, 8(3), 58-69.

Tashakkori, A., & Teddlie, C. (1998). Mixed methodology: Combining qualitative and quantitative approaches (Applied Social Research Methods, No. 46). Thousand Oaks, CA: Sage.

MIXED METHODS DESIGN: AN ALTERNATIVE APPROACH

Joseph A. Maxwell
Diane M. Loomis

The explicit use of both quantitative and qualitative methods in a single study, a combination commonly known as mixed methods research, has become widespread in many of the social sciences and applied disciplines during the past 25 years. Tashakkori and Teddlie (1998, p. 14) dated the explicit emergence of mixed methods research to the 1960s, with this approach becoming common by the 1980s with the waning of the "paradigm wars." They also identified a subsequent integration of additional aspects of the qualitative and quantitative approaches—not just methods—beginning during the 1990s, which they called "mixed model" studies (p. 16). Such aspects include epistemological assumptions, types of investigation and research design, and analysis and inference strategies.

However, the practice of mixed methods (and mixed models) research has a much longer history than the explicit discussion of the topic. In natural sciences such as ethology and animal behavior, evolutionary biology, paleontology, and geology, the integration of goals and methods that typically would be considered qualitative (naturalistic settings, inductive approaches, detailed description, attention to context, and the intensive investigation of single cases) with those that are generally seen as quantitative (experimental manipulation; control of extraneous variables; formal hypothesis testing; theory verification; and quantitative sampling, measurement, and analysis) has been common for more than a century. In addition, many classic works in the social sciences employed both qualitative and quantitative techniques and approaches without deliberately drawing attention to this. (Several of these classic studies are analyzed in detail later in the chapter.) From this broader perspective, mixed methods research is a long-standing (although sometimes controversial) practice rather than a recent development. Indeed, a case could be made that mixed methods research was more common in earlier times, when methods were less specialized and compartmentalized and the paradigm wars were less heated.
Staw (1992) observed, "When the field of organizational behavior was beginning in the 1950s, there was less of an orthodoxy in method. People observed, participated, counted, and cross-tabulated. There was ready admission that each methodology was flawed" (p. 136). And Rabinowitz and Weseen (2001) argued,

There was a time in psychology when qualitative and quantitative methods were more easily combined than they are today. Famous experimental social psychologists such as Solomon Asch, Stanley Milgram, and Leon Festinger combined both approaches in some of their most famous works, although the qualitative aspects of the pieces tend to get lost in textbook accounts of their work, as well as in the minds of many instructors and researchers. (pp. 15-16)

This widespread but relatively implicit use of methods, approaches, and concepts from both the qualitative and quantitative paradigms makes it important, in understanding mixed methods design, to investigate the actual conduct of the study (insofar as this can be determined from the publications resulting from the research) rather than depending only on the authors' own characterization of what they did. Kaplan (1964) coined the terms "logic-in-use" and "reconstructed logic" to describe this difference (p. 8). This issue is magnified when mixed model research is considered because aspects of the study other than methods are often less explicitly identified. It is thus important to pay particular attention to the logic-in-use of mixed methods studies in attempting to understand how qualitative and quantitative methods and approaches can be integrated.

In this chapter, we address this issue by presenting an alternative to the usual ways of thinking about mixed methods design. There are two points on which our position differs from most other approaches to mixed methods studies. First, our concept of "design" is different from that employed in most approaches to designing mixed methods studies. The authors of the latter works have typically taken a typological view of research design, presenting a taxonomy of ways to combine qualitative and quantitative methods. In this handbook, for example, the chapters on design by Morse (Chapter 7) and Creswell, Plano Clark, Gutmann, and Hanson (Chapter 8) both focus on the different types of mixed methods research, delineating the dimensions on which such studies can vary and identifying the possible and actual combinations of qualitative and quantitative methods. We approach the issue of design from a fundamentally different perspective. We see the design of a study as consisting of the different components of a research study (including purposes, conceptual framework, research questions, and validity strategies, in addition to "methods" in a strict sense) and the ways in which these components are integrated with, and mutually influence, one another. We present what Maxwell (1996) called an "interactive" model for research design and apply this model to mixed methods research, showing how the different components of actual mixed methods studies are integrated. The model is termed interactive (systemic would also be appropriate) because the components are connected in a network or web rather than a linear or cyclic sequence.
The second way in which our approach is distinctive is that we base our approach to mixed methods research on a conceptual analysis of the fundamental differences between qualitative and quantitative research (Maxwell, 1998; Maxwell & Mohr, 1999; Mohr, 1982, 1995, 1996). This analysis employs a distinction between two approaches to explanation, which we call variance theory and process theory. The use of this distinction leads to somewhat different definitions of these two types of research from those found in most other works, and thus it leads to a somewhat different idea of what mixed methods research consists of.

Our purpose in this chapter is to provide some tools for analyzing such studies and for developing mixed methods designs. We begin by presenting the contrast between prevalent typological views of design and an interactive approach to research design. We develop the latter approach in detail, explaining the components of the interactive model and the systemic relationships among these components. We then turn to the nature of the qualitative-quantitative distinction, presenting an analysis of this distinction that is grounded in the contrast between two fundamentally different ways of thinking about explanation. This leads to a discussion of paradigms and of whether qualitative research and quantitative research constitute distinct or incompatible paradigms. These two analyses are then combined in a presentation of the ways in which qualitative and quantitative approaches to each of the design components differ and of some of the sources of complementarity that these differences generate. Finally, we apply this approach to a variety of actual studies that combine qualitative and quantitative strategies and methods, providing an in-depth analysis of how the designs of these studies actually functioned and of the strengths and limitations of the designs.

In proposing this alternative approach, we are not taking a polemical or adversarial stance toward other approaches to mixed methods design. We see our approach as complementary to others and as providing some tools and insights that other approaches might not as clearly provide. The complementarity that we see between different approaches to design is similar to the complementarity that we advocate in mixed methods research, which Greene and Caracelli (1997) called "dialectical" (p. 8), and we believe that combining typological and systemic strategies for analyzing and creating research designs will be more productive than either used alone.

♦ Existing Approaches to Mixed Methods Design

We stated previously that existing approaches to mixed methods design have been primarily typological. This is not to claim that issues other than typology have been ignored. We believe, however, that these issues have generally been framed within an overall typological approach and that the analysis of mixed methods studies has focused on the classification of these studies in terms of a typology of mixed methods designs. For example, Caracelli and Greene (1997) identified two basic types of mixed methods designs, which they called "component" and "integrated" designs. Component designs are ones in which "the methods are implemented as discrete aspects of the overall inquiry and remain distinct throughout the inquiry," while integrated designs involve "a greater integration of the different method types" (pp. 22-23).
Within these broad categories, they described seven subtypes, based largely on the purposes for combining methods. Patton (1990) presented a different typology, based on qualitative or quantitative approaches to three key stages of a study (design, data, and analysis); he used this to generate four possible mixed designs, involving a choice of qualitative or quantitative methods at each stage (not all sequences were viewed as possible by Patton). Tashakkori and Teddlie (1998) built on Patton's approach to create a much more elaborate typology. They distinguished mixed methods designs (combining methods alone) from mixed model designs (combining qualitative and quantitative approaches to all phases of the research process) and created an elaborate set of subtypes within these.

Not all work on mixed methods design has been typological. For example, Bryman (1988) focused on identifying the purposes for combining qualitative and quantitative methods, and Brewer and Hunter (1989) took a similar approach, organizing their discussion in terms of the different stages of the research. Creswell (1994) presented three models for mixed methods research but then related these models to each of his "design phases," which correspond roughly to the different sections of a research proposal.

Typologies are unquestionably valuable. They help a researcher to make sense of the diversity of mixed methods studies and to make some broad decisions about how to proceed in designing such a study. In particular, distinctions based on the sequence or order in which approaches are combined, the relative dominance or emphasis of the different approaches, whether the approaches are relatively self-contained or integrated, and the different purposes for combining methods are particularly important in understanding mixed methods design.

However, typological approaches also have their limitations. First, the actual diversity in mixed methods studies is far greater than any typology can adequately encompass; this point was emphasized by Caracelli and Greene (1997) as well as Tashakkori and Teddlie (1998, pp. 34-36, 42). In particular, the recognition of multiple paradigms (e.g., positivist, realist, constructivist, critical, postmodern) rather than only two, the diversity in the aspects of quantitative and qualitative approaches that can be employed, the wide range of purposes for using mixed methods, and differences in the setting where the study is done and the consequences of this for the design all make the actual analysis of a mixed methods design far more complicated than simply fitting it into a taxonomic framework.

Second, most typologies leave out what we feel are important components of design, including the purposes of the research, the conceptual framework used, and the strategies for addressing validity issues. All of these components are incorporated into the interactive design model presented next. Typologies also tend to be linear in their conception of design, seeing the components as "phases" of the design rather than as interacting parts of a complex whole.

Third, typologies by themselves generally do little to clarify the actual functioning and interrelationship of the qualitative and quantitative parts of a design; the typology presented by Caracelli and Greene (1997) is an exception to this criticism because it is based partly on the purposes for which a mixed approach is used. Similarly, Pawson and Tilley (1997, p. 154) argued that a pragmatic pluralism in combining methods leads to no new thinking and does not clarify how to integrate approaches or when to stop.
♦ An Interactive Model of Design

We believe that an interactive approach to research design can help to address these problems. Rather than seeing "design" as a choice from a fixed set of possible arrangements or sequences in the research process, such approaches (e.g., Grady & Wallston, 1988; Martin, 1982; Maxwell, 1996) treat the design of a study as consisting of the actual components of a study and the ways in which these components connect with and influence one another. This approach to design is consistent with the conception of design employed in architecture, engineering, art, and virtually every other field besides research methods in which the term is used: "an underlying scheme that governs functioning, developing, or unfolding" and "the arrangement of elements or details in a product or work of art" (Merriam-Webster, 1984). A good design, one in which the components are compatible and work effectively together, promotes efficient and successful functioning; a flawed design leads to poor operation or failure.

The interactive model presented here has two essential properties: the components themselves and the ways in which these are related. There are five components to the model, each of which can be characterized by the issues that it is intended to address (Maxwell, 1996, pp. 4-5):

1. Purposes. What are the goals of this study? What issues is it intended to illuminate, and what practices or outcomes is it intended to influence? Why is the study worth doing? These purposes can be personal, practical, or intellectual; all three kinds of purposes can influence the rest of the research design.

2. Conceptual Framework. What theories and beliefs about the phenomena studied will guide or inform the research? These theories and beliefs may be drawn from the literature, personal experience, preliminary studies, or a variety of other sources. This component of the design contains the theory that the researcher has developed, or is developing, about the setting or issues being studied.

3. Research Questions. What specifically does the researcher want to understand by doing this study? What questions will the research attempt to answer?

4. Methods. How will the study actually be conducted? What approaches and techniques will be used to collect and analyze the data, and how do these constitute an integrated strategy? There are four distinct parts of this component of the model: (a) the relationship that the researcher establishes with the participants in the study; (b) the selection of settings, participants, times and places of data collection, and other data sources such as documents (what is often called "sampling"); (c) data collection methods; and (d) data analysis strategies and techniques.

5. Validity. How might the conclusions of the study be wrong? What plausible alternative explanations and validity threats are there to the potential conclusions of the study, and how will these be addressed?

These components are not radically different from the ones presented in many other discussions of research design (e.g., LeCompte & Preissle, 1993; Miles & Huberman, 1994; Robson, 1993). What is innovative is the way in which the relationships among the components are conceptualized.
In this model, the components form an integrated and interacting whole, with each component closely tied to several others rather than being linked in a linear or cyclic sequence. Each of the five components can influence and be influenced by any of the other components. The key relationships among the components are displayed in Figure 9.1. In this diagram, the most important of these relationships are represented as two-way arrows. There is considerable similarity to a systems model of how the parts of a system are organized in a functioning whole.

[Figure 9.1. An Interactive Model of Research Design]

While all of the five components can influence other components of the design, the research questions play a central role. In contrast to many quantitative models of design, the research questions are not seen as the starting point or guiding component of the design; instead, they function as the hub or heart of the design because they form the component that is most directly linked to the other four. The research questions need to inform, and be responsive to, all of the other components of the design.

There are many other factors besides these five components that can influence the design of a study. These include the resources available to the researcher, the researcher's abilities and preferences in methods, perceived intellectual or practical problems, ethical standards, the research setting, the concerns and responses of participants, and the data that are collected. These additional influences are best seen not as part of the design itself but rather either as part of the environment within which the research and its design exist or as products of the research (Maxwell, 1996, pp. 6-7). Figure 9.2 presents some of the factors in the environment that can influence the design and conduct of a study.

[Figure 9.2. Contextual Factors Influencing a Research Design: perceived problems, personal goals, participant concerns, funding and funder goals, ethical standards, the research setting, researcher skills and preferred style of research, the research paradigm, personal experience, existing theory and prior research, thought experiments, and preliminary data and conclusions, shown as influences on the research questions, methods, and conceptual framework.]

The five components of this design model, by contrast, represent issues that are not external to the design of the study but rather are integral parts of it; they represent decisions and actions that must be addressed, either explicitly or implicitly, by the researcher. One way in which this design model can be useful is as a tool or template for conceptually mapping the design of a study, either as part of the design process or in analyzing the design of a completed study. This involves filling in the boxes or circles for the five components of the model with the actual components of a particular study's design. We apply this technique specifically to a variety of mixed methods studies later in the chapter.

♦ The Qualitative-Quantitative Distinction

Because there are so many points of difference between the qualitative and quantitative approaches, there has been considerable variation in how the distinction between the two has been framed. Early work often based this distinction simply on the kind of data employed (textual or numerical). Creswell (1994), by contrast, saw the distinction between inductive and deductive approaches as most important, while Tashakkori and Teddlie (1998, p. 55) distinguished three different stages or dimensions for which the distinction can be made: type of investigation (exploratory or confirmatory), data collection (qualitative or quantitative), and analysis and inference (qualitative or statistical).
Guba and Lincoln (1989) made the distinction at a more philosophical level, as a distinction between constructivism and positivism.

In our view, the qualitative-quantitative distinction is grounded in the distinction between two contrasting approaches to explanation, which Mohr (1982) termed variance theory and process theory. Variance theory deals with variables and the correlations among them; it is based on an analysis of the contribution of differences in values of particular variables to differences in other variables. Variance theory, which ideally involves precise measurement of differences on and correlations between variables, tends to be associated with research that employs extensive prestructuring of the research, probability sampling, quantitative measurement, statistical testing of hypotheses, and experimental or correlational designs. As Mohr noted, "The variance-theory model of explanation in social science has a close affinity to statistics. The archetypal rendering of this idea of causality is the linear or nonlinear regression model" (p. 42).

Process theory, by contrast, deals with events and the processes that connect them; it is based on an analysis of the causal processes by which some events influence others. Because process explanation deals with specific events and processes, it is much less amenable to quantitative approaches. It lends itself to the in-depth study of one or a few cases or a small sample of individuals and to textual forms of data that retain the contextual connections between events. Weiss (1994) provided a concrete example of this strategy:

In qualitative interview studies the demonstration of causation rests heavily on the description of a visualizable sequence of events, each event flowing into the next. . . . Quantitative studies support an assertion of causation by showing a correlation between an earlier event and a subsequent event. An analysis of data collected in a large-scale sample survey might, for example, show that there is a correlation between the level of the wife's education and the presence of a companionable marriage. In qualitative studies we would look for a process through which the wife's education or factors associated with her education express themselves in marital interaction. (p. 179)
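The variance-theory half of Weiss's example can be reduced to a few lines of computation; the process-theory half cannot, which is the point of the contrast. Below is a minimal sketch with invented illustrative values (the arrays are stand-ins for education and companionship measures across a sample of couples, not data from any study):

```python
import numpy as np

# Invented illustrative values: wife's years of education and a 1-10
# rating of marital companionship, one pair per surveyed couple.
education = np.array([10, 12, 12, 14, 16, 16, 18, 20])
companionship = np.array([4, 5, 6, 5, 7, 8, 7, 9])

# Variance theory: the explanation rests on covariation across cases.
r = np.corrcoef(education, companionship)[0, 1]

# The archetypal rendering is a regression model: companionship = a + b * education.
b, a = np.polyfit(education, companionship, 1)
print(f"r = {r:.2f}, slope = {b:.2f}, intercept = {a:.2f}")
```

A process-theory account would not be advanced by this computation; it would instead require textual data tracing how education expresses itself in the marital interaction of particular couples.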
While factual causation is an appropriate concept for comparative studies with large N's, physical causation is appropriate for case studies or qualitative interview studies that do not involve formal comparisons. Maxwell and Mohr (1999) used this distinction to identify two aspects of a study that can be productively denoted by the terms qualitative and quantitative: data and design/analysis. We define quantitative data as categorical data, with either enumeration or measurement within categories. A conceptual dimension that is itself a category subdivided by measurement, or that is divided into subcategories for enumerative or frequency data, is generally called a "variable," which is a hallmark of the quantitative approach. Qualitative data, in contrast, are typically textual in nature, consisting of written or spoken words, but may include video recordings and photographs as well as narrative text, (p. 2) Categorical data lend themselves to aggregation and comparison, and they are easily quantified. Textual data, on the other hand, lend themselves to investigation of the processes by which two events or characteristics are connected. I In addition, we propose that quantita-I tive design/analysis is research design I and consequent analysis that rely in a I variety of ways on the comparison of I frequencies or measurements across I subjects or across categories. Such de-I signs focus on identifying differences I between groups or correlations be-I tween variables. In contrast, qualita-I tive design/analysis is design and anal-I ysis that rely in various ways on the I treatment of focal entities as singular I wholes in context, with an emphasis I on the identification of meaning and I process. With these definitions of secondary I terms in mind, the two fundamentally I distinct ways of understanding the world can be specified as two distinct combinations of types of data on the one hand with types of design/analysis on the other. Thus, a quantitative way of understanding the world is a way that views the world in terms of categorical data, featuring the comparison of frequencies and measurements across subjects and categories. A qualitative way of understanding is a way that views the world in terms of textual data, featuring the treatment of focal entities as singular wholes in context, (p. 2) ♦ Paradigmatic Unity and Compatibility This analysis of the qualitative-quantitative distinction reframes the nature of the qualitative and quantitative paradigms but does not address the issue of paradigmatic unity or of the compatibility of different paradigms. This unity is often assumed to be a critical issue in combining methods. For example, Patton (1980, p. 110) emphasized the "integrity" of 250 ♦ METHODOLOGICAL AND ANALYTICAL ISSUES each approach, and Morse (Chapter 7, this volume) argues, I When using mixed methods, it is important that methodological congruence be maintained, that is, that all of the assumptions of the major method be adhered to and that components of the method (such as the data collection and analytical strategies) be consistent. However, the need for such paradigmatic integrity cannot be assumed. McCawley (1982) examined the debate between two positions in linguistics, generative semantics and interpretive semantics, that had generally been seen as unitary paradigms. 
He showed that both of these approaches in fact consisted of two packages of positions on a large number of issues, with each package corresponding to the views of some prominent members of two communities of linguists. However,

neither of these communities was completely homogeneous, no member of the community retained exactly the same set of views for very long, . . . and the relationships among the views that were packaged together as "generative semantics" or as "interpretive semantics" were generally far more tenuous than representative members of either community led people (including themselves) to believe. (p. 1)

Pitman and Maxwell (1990) similarly argued that the supposed paradigmatic unity of one area of qualitative research, qualitative evaluation, is largely illusory and that major figures in this field hold widely divergent and conflicting views on many of the fundamental issues regarding the use of qualitative approaches for program evaluation. On the quantitative side, the recent debate over null hypothesis significance testing has revealed how the development of this approach incorporated fundamentally incompatible assumptions from different schools of statistics.

Such a position does not entail that there is no relationship among the different aspects of each paradigm, as Reichardt and Cook (1979, p. 18) appeared to argue. We agree with Sayer (1992) that there are "resonances" among the different components of each paradigm that "encourage the clustering of certain philosophical positions, social theories, and techniques" (p. 199). The relationship is simply not a necessary or invariant one. Each paradigm constitutes a "loosely bundled innovation" (Koontz, 1976, cited in Rogers, 1995, p. 178), and researchers often resemble the innovation adopters described by Rogers (1995), "struggling to give their own unique meaning to the innovation as it is applied in their local context" (p. 179).

Thus, we do not believe that there exist uniform, generic qualitative and quantitative research paradigms. Despite the philosophical and methodological resonances among the components of each paradigm, both of these positions include a large number of distinct and separable components, and there is disagreement even within each approach over the nature, use, and implications of some of the different components. The classic qualitative approach includes the study of natural, real-life settings; a focus on participants' meanings and context; inductive generation of theory; open-ended data collection; analytical strategies that retain the textual nature of the data; and the frequent use of narrative forms of analysis and presentation. The quantitative approach includes the formulation of prior hypotheses, the use of experimental interventions, a comparison of treatment and control groups, random sampling or assignment, standardization of instruments and data collection, quantitative data, statistical hypothesis testing, and a focus on causal explanation. Each of these (and other variations too numerous to list) is a separable module with its own requirements and implications rather than an integral and inseparable part of a larger methodological and epistemological whole (Maxwell, Sandlow, & Bashook, 1986; Patton, 1990; Pitman & Maxwell, 1992).
While the connections among these components are crucial to the overall coherence of a particular research design (Maxwell, 1996), the possible legitimate ways of putting together these components are multiple rather than singular and, to a substantial extent, need to be discovered empirically rather than logically deduced (Maxwell, 1990). However, we also agree with Kidder and Fine's (1987) statement: "We share the call for 'synthesis,' but at the same time, we want to preserve the significant differences between the two cultures. Instead of homogenizing research methods and cultures, we would like to see researchers become bicultural" (p. 57). Our view of mixed methods design includes the position that Greene and Caracelli (1997) termed "dialectical," in which differences between the paradigms are viewed as important and cannot be ignored or reconciled. Bernstein (1992), in discussing the differences between Habermas and Derrida, provided a clear statement of what we advocate:

I do not think there is a theoretical position from which we can reconcile their differences, their otherness to each other—nor do I think we should smooth out their "aversions and attractions." The nasty questions that they raise about each other's "project" need to be relentlessly pursued. One of the primary lessons of "modernity/postmodernity" is a radical skepticism about the possibility of a reconciliation—an Aufhebung, without gaps, fissures, and ruptures. However, together, Habermas/Derrida provide us with a force-field that constitutes the "dynamic, transmutational structure of a complex phenomenon"—the phenomenon I have labeled "modernity/postmodernity." (p. 225)

From this perspective, the "compatibility" of particular qualitative and quantitative methods and approaches becomes a much more complex issue than either paradigmatic or pragmatist approaches usually suggest. Maxwell (1990) claimed that "the theoretical debate about combining methods has prevented us from seeing the different ways in which researchers are actually combining methods, and from understanding what works and what doesn't" (p. 507). What we want to do here is use the interactive design model to understand how qualitative and quantitative approaches can productively be combined.

♦ Qualitative and Quantitative Approaches to the Design Components

In this section, we identify the distinctive properties of the quantitative and qualitative approaches to each of the components of design described previously: purposes, conceptual framework, research questions, methods, and validity. The ways in which the two paradigms typically frame each of the components are described briefly and are summarized in Table 9.1.
A more detailed discussion of each of the components, focusing mainly on qualitative research but also contrasting this with quantitative research, is provided in Maxwell (1996).

TABLE 9.1 Possible Quantitative and Qualitative Elements of the Design Components

Purposes
  Quantitative: precise measurement and comparison of variables; establishing relationships between variables; inference from sample to population.
  Qualitative: meaning; context; process; discovering unanticipated events, influences, and conditions; understanding single cases; inductive development of theory.

Conceptual framework
  Quantitative: variance theories.
  Qualitative: process theories.

Research questions
  Quantitative: variance questions; truth of proposition; presence or absence; degree or amount; correlation; hypothesis testing; causality (factual).
  Qualitative: process questions; how and why; meaning; context (holistic); hypotheses as part of conceptual framework; causality (physical).

Research methods (relationship)
  Quantitative: objectivity/reduction of influence (researcher as extraneous variable).
  Qualitative: use of influence as tool for understanding (researcher as part of process).

Research methods (sampling)
  Quantitative: probability sampling; establishing valid comparisons.
  Qualitative: purposeful sampling.

Research methods (data collection)
  Quantitative: prior development of instruments; standardization; measurement/testing (quantitative/categorical).
  Qualitative: inductive development of strategies; adapting to particular situation; collection of textual or visual material.

Research methods (data analysis)
  Quantitative: numerical descriptive analysis (statistics, correlation); estimation of population variables; statistical hypothesis testing; conversion of textual data into numbers or categories.
  Qualitative: textual analysis (memos, coding, connecting); grounded theory; narrative approaches.

Validity
  Quantitative: internal validity; statistical conclusion validity; construct validity; causal validity (control of extraneous variables).
  Qualitative: descriptive validity; interpretive validity; construct validity; causal validity (identification and assessment of alternative explanations).

Generalizability
  Quantitative: external validity.
  Qualitative: transferability (comparability); generalizing to theory.

PURPOSES

The possible purposes of a study are too numerous and disparate to list, and specific personal and practical purposes are usually not tightly linked to one or the other approach. Intellectual purposes, in contrast, do tend to segregate into qualitative and quantitative categories. Quantitative purposes include precise measurement and comparison of variables, establishing relationships between variables, identifying patterns and regularities that might not be apparent to the people in the settings studied, and making inferences from the sample to some population. Qualitative purposes include understanding the context, process, and meaning for participants in the phenomena studied; discovering unanticipated events, influences, and conditions; inductively developing theory; and understanding a single case.

CONCEPTUAL FRAMEWORK

The conceptual framework for a study consists of the theory (or theories) relevant to the phenomena being studied that inform and influence the research. The key issue for mixed methods studies, then, is the nature of these theories. Are they variance theories, process theories, some combination of these, or theories that do not fit neatly into this dichotomy? A mismatch between the conceptual framework and the research questions or methods used can create serious problems for the research; a variance theory cannot adequately guide and inform a process-oriented investigation, and vice versa.
Mismatches between the conceptual framework and the purposes or validity strategies are less common but can also be problematic. A mixed methods study is often informed by both variance and process theories, and the main design issue is sorting out specifically how different parts of the conceptual framework are integrated with one another and how they are linked to the other design components.

RESEARCH QUESTIONS

As with conceptual frameworks, research questions can usually be categorized as variance questions or process questions. The research questions in a quantitative study typically are questions about the measurement or analysis of variation—the amount or frequency of some category, the value of some variable, or the relationship between two or more variables. Such questions are usually framed in terms of the values of key variables, and specific hypotheses are often stated. The questions and hypotheses are nearly always specifically formulated (or presented as if they were formulated) in advance of any data collection, and they are frequently framed in "operational" terms, connecting directly to the measurement or data collection strategies. In a qualitative study, by contrast, the research questions typically deal with the verbal description of some event, phenomenon, or process (What is happening here? What are the characteristics of this phenomenon?); its meaning to participants in the setting studied; or the process by which some events or characteristics of the situation influence other events or characteristics. The questions might not be explicitly stated, and when they are, they might include only the broad initial questions with which the study began and not the more focused questions that developed during the research.

METHODS

As described previously, "methods" as a design component include (a) the relationship that the researcher establishes with individuals and groups being studied; (b) the selection of sites, participants, settings, and times of data collection; (c) the methods used for data collection; and (d) the strategies used for data analysis.

Research Relationship. The relationship the researcher has with participants in the study, or with others who control access to these individuals or groups or who may influence the conduct of the study, is a key component of research design and can have a major impact on the conduct and results of a study. This aspect of design tends to be treated very differently in quantitative and qualitative studies. Quantitative researchers tend to see the research relationship as an extraneous variable—something to be controlled. This can be done either to prevent the relationship from influencing the results or affecting the variables studied or to prevent variance in the relationship from introducing confounding variance in the dependent variables (e.g., standardizing survey interview procedures so that differences in procedures, either within or between interviewers, do not create an additional source of variation in the results). Qualitative studies, on the other hand, typically treat the research relationship not as a variable but rather as a process, one that can have important positive as well as negative consequences for the research. The goal is not to create a standardized relationship but rather to create a relationship that maximizes the understanding gained from each participant interviewed or each situation observed.
Such a relationship is often much more personal and informal than is the case in quantitative studies.

Sampling. The two main strengths of quantitative sampling (and, for experimental research, this can be extended to include assignment of participants to conditions) are to establish valid comparisons and to allow generalization from the sample to the population of interest. Some form of probability sampling (or random assignment) is usually the preferred method; in the absence of this, post hoc strategies (matching or analytical techniques such as analysis of covariance) can be used to increase comparability and generalizability. Qualitative research normally places less emphasis on formal comparisons, and the usual sampling strategy is some form of purposeful sampling. In this approach, participants are selected because they are most likely to provide relevant and valuable information or to allow the researcher to develop or test particular theoretical ideas (in grounded theory research, the latter strategy is called theoretical sampling).

Data Collection. Quantitative data collection is typically preplanned, structured, and designed to ensure comparability of data across participants and sites. The data are normally collected in numerical or categorical form, using instruments or procedures that have been designed and tested to ensure reliability and validity. Qualitative data collection is typically more open-ended, flexible, and inductive, and the data are usually textual descriptions, either written notes or recorded verbal data that are converted to textual form by transcribing (increasingly, visual means such as videotaping are being used).

Data Analysis. Quantitative analysis can be descriptive (assigning numbers or category labels to data or aggregating data on particular variables) or relational (investigating the relationship between two or more variables in the sample). Quantitative analysis can also make inferences to the population from which the sample was drawn, either estimating the values of population variables or testing hypotheses about the relationship of variables in the population. In addition, textual data can be converted into categorical or numerical form for analysis. Qualitative analysis is more diverse but typically addresses the goals listed under purposes (meaning, context, process, inductive theory development, and in-depth understanding of single cases). The analysis can involve the categorization (coding) of the textual data, but the purpose is quite different from that of quantitative categorization. Rather than being a preliminary step to counting instances of something or aggregating measurements on some variable, the function of qualitative categorization is to collect all of the instances of some type of phenomenon for further qualitative comparison and investigation. The goals of this strategy are to develop an in-depth description of this phenomenon, to identify key themes or properties, and to generate theoretical understanding. The categories are often inductively developed during the analysis rather than systematically formulated prior to the analysis. Both quantitative and qualitative analysis can be either exploratory (on exploratory quantitative data analysis, see Tukey, 1977) or confirmatory, although qualitative researchers usually do not simply test a prior theory without further developing that theory.
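The contrast between the two uses of coded textual data can be illustrated in a few lines of code. This sketch is ours, not a procedure from the chapter, and the codes and interview excerpts are hypothetical: the same coded segments can be converted into frequencies per category (the quantitative move) or kept as textual instances for further qualitative comparison in context.

```python
# A minimal, hypothetical sketch of two uses of coded interview data:
# converting textual data into counts (quantitative) versus collecting
# the instances themselves for in-depth comparison (qualitative).

from collections import defaultdict

coded_segments = [
    ("isolation", "I mostly kept to myself after the move."),
    ("support",   "My sister called every evening that first month."),
    ("isolation", "Weekends were the hardest; nobody stopped by."),
    ("support",   "A neighbor helped me find the clinic."),
]

# Quantitative use: the category becomes a variable, and analysis proceeds
# on frequencies that can be aggregated and compared across participants.
counts = defaultdict(int)
for code, _text in coded_segments:
    counts[code] += 1
print(dict(counts))

# Qualitative use: categorization collects all instances of a phenomenon,
# retaining the text (and hence the context) for description and theory
# development rather than reducing it to a number.
instances = defaultdict(list)
for code, text in coded_segments:
    instances[code].append(text)
for excerpt in instances["isolation"]:
    print(excerpt)
```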
VALIDITY

Under validity, we include both causal (internal) validity and generalizability (external validity). Quantitative researchers, most notably Campbell and Stanley (1963) and Cook and Campbell (1979), have developed a detailed typology of validity issues, validity threats, and strategies for addressing these threats. In addition to causal validity and generalizability, Cook and Campbell identified statistical conclusion validity (the validity of inferences from the sample to the population sampled) and construct validity (the validity of the theoretical constructs employed) as distinct issues.

There is less agreement on classifying validity issues in qualitative research. Maxwell (1992) distinguished four main categories of validity in qualitative research: descriptive validity (the validity of the descriptions of settings and events), interpretive validity (the validity of statements about the meanings or perspectives held by participants), explanatory (or theoretical) validity (the validity of claims about causal processes and relationships, including construct validity as well as causal validity proper), and generalizability.

Inferences about causality are controversial in qualitative research. Some researchers (e.g., Guba & Lincoln, 1989) deny that causality is an appropriate concept in qualitative research, and this view has been widely accepted. In contrast, Sayer (1992, 2000) and Maxwell (1998), taking a critical realist perspective, argue that causal explanation not only is legitimate in qualitative research but is a particular strength of this approach, although it uses a different strategy from quantitative research, based on a process rather than a variance concept of causality. Construct validity is similar for both approaches, although quantitative research may use quantitative means of assessing the construct validity of instruments. Generalizability is also similar (statistical generalization to the population sampled is included under statistical conclusion validity) and is always a matter of transferring the conclusions of a study to other situations, an inherently judgmental process; Guba and Lincoln (1989) referred to this as "transferability." However, in quantitative research, generalizability is usually seen as a matter of the results of the study being valid in other settings (replicability). Qualitative researchers, by contrast, tend to "generalize to theory" (Yin, 1984, pp. 39-40)—developing a theory and then applying that theory to other settings that may be dissimilar but that can be illuminated by the theory in question, appropriately modified (Becker, 1990).

In Table 9.1, we have tried to summarize the typical features of both quantitative and qualitative research as these involve the five design components of the interactive model.

ANALYSIS OF SELECTED EXAMPLES OF MIXED METHODS DESIGNS

Uncovering the actual integration of qualitative and quantitative approaches in any particular study is a considerably more complex undertaking than simply classifying the study into a particular category on the basis of a few broad dimensions or characteristics. It requires an understanding of each of the five components of the study's design and of the ways in which each component incorporates quantitative elements, qualitative elements, or both. In addition, as stated previously, it is important to examine the actual conduct of the study rather than simply depending on the author's assertions about the design.
This issue is illustrated by Blumstein and Schwartz's (1983) study of American couples, which used both survey questionnaires and open-ended interviews. The authors described the results of their study as based entirely on the statistical analysis of the survey data, while the qualitative data were relegated to providing illustrative instances:

We use the phrase "we find . . ." in presenting a conclusion based on statistical analysis of data from the questionnaires. . . . The interview data help us interpret our questionnaire findings, but unless we are using one of the parts of the interview that is readily quantifiable, we do not afford them the same degree of trust we grant to information derived from the questionnaires.

The interviews serve another purpose. We use the interview materials to illustrate both majority patterns and important exceptions. (p. 23)

And the authors characterized the chapters in their book that deal with relationship histories, which are based mainly on the interviews, by stating, "In these chapters, which have nothing to do with our analysis of the data but are included only for their illustrative value . . ." (p. 22).

However, this does not explain why Blumstein and Schwartz (1983) conducted in-depth interviews, lasting 2.5 to 4.0 hours, with both partners, separately and together, for 300 couples; transcribed and coded these interviews; and followed up with questionnaires to fill in any gaps. It also seems inconsistent with the fact that, in addition to their extensive use of quotes in the thematically organized sections of the book, they devoted 213 pages, nearly half of the results section of the book, to detailed case studies of 20 couples' relationships. A closer analysis of their account reveals that triangulation of methods was an important feature of the study so as to "see couples from several vantage points" (p. 15) and that the case studies "helped to illuminate some of the ways in which money, sex, and work shape the nature of [the partners'] relationships" (p. 332). It appears that the "reconstructed logic" of the design was heavily influenced by a quantitative ideology of what counts as "results," distorting the study's logic-in-use and the actual contribution of the qualitative component.

The main purpose of this section is to present in-depth analyses of well-documented, complex examples of mixed model research, illustrating the numerous ways in which qualitative and quantitative approaches to each of the design components can be combined. We discuss these studies in terms of Caracelli and Greene's (1997) distinction between "component" and "integrated" mixed methods designs, moving from studies that resemble component designs to those that resemble integrated designs. Component designs are those in which the different methods remain discrete throughout the study and only the results of the methods are combined (p. 22). Integrated designs, by contrast, are those in which there is "a greater integration of the different method types" (p. 23); such designs involve the use not of relatively self-contained qualitative and quantitative methods modules but rather of qualitative and quantitative elements or strategies integrated within a single phase or strand of the research; the elements occur concurrently and in constant interaction with one another rather than as conceptually separate enterprises that are later linked together.
Their distinction is most useful when applied to methods; it is less meaningful when applied to the other components of a research design, and in fact the use of both qualitative and quantitative elements of components other than methods seems to have been treated by Caracelli and Greene (1997) as an "integrated" design almost by definition. In addition, Caracelli and Greene's two types are not categorically distinct; actual studies exhibit a continuum in the degree of integration of methods and also a variety of different strategies for integration. We have nonetheless organized the studies in this order for two reasons. First, doing so provides a clearer organization to this section. Second, it allows us to address the design features of particular types of mixed methods studies as well as the specific studies we describe.

A common approach to using both quantitative and qualitative methods is to use them sequentially. Sutton and Rafaeli (1992) provided an unusually detailed and candid account of such a design, a study of the relationship between expressed emotion and sales in convenience stores (see also Sutton & Rafaeli, 1988). They began their research with a well-developed theory of the expression of emotion by employees, based not only on published literature but also on informal querying of waitresses, clerks, and telephone operators. They had numerous ideas for possible empirical studies, but no actual research in progress, when they unexpectedly gained access to a quantitative data set derived from covert observations of employees and from company sales records, with detailed data on numerous control variables. Although one of the authors had considerable experience with qualitative research, this study was originally designed as a purely quantitative multiple regression analysis of this data set.

Sutton and Rafaeli's statistical analysis of this data set was intended to achieve two main purposes. First, it would support their theory and further develop their scholarly agenda on expressed emotion. Second, it would advance their careers without involving all the work of collecting their own data. Unfortunately, the analysis flatly contradicted their hypotheses; expressed positive emotions had a consistently negative correlation with sales. They tried tinkering with the analysis, but to no avail; they could find no errors, and dozens of runs using different combinations of variables gave the same result. Their validity checks were unable to resolve the contradiction between their theory and their results. It was clear that they needed to revise their conceptual framework.

Fortunately, a colleague suggested an alternative theory, which came to be called the "Manhattan effect": that in busy stores, employees did not have time and/or were too harassed to express positive emotions. This theory was consistent with their data, and the authors' initial inclination was to simply revise their hypotheses and submit the paper for publication, having learned from experienced colleagues that this was common practice in both the natural and social sciences. There were two reasons why they did not do this. First, it would contradict their previously published theoretical work, potentially impairing their career advancement. Second, they wanted to write a paper that conveyed their actual process, believing that, although it would be harder to publish, it would be a better paper.
To do this, however, they needed a clearer theoretical understanding of their findings. This led to the qualitative phase of the study, which consisted of interviews with managers and executives, four case studies, informal observations in stores, and one of the authors working for a day as a store clerk. Sutton and Rafaeli (1992) stated,

These qualitative data proved to be essential for helping us to refine our revised conceptual perspective. For example, while we had thought about how a crowded store suppresses the display of positive emotion, we had not thought about the ways in which a slow store supports the display of good cheer. During the day that Bob spent working as a clerk, he learned that customers are an important source of entertainment, and that clerks are more friendly during slow times because they are genuinely pleased to see customers and want to encourage customers to engage in conversation. (p. 123)

Their revised and elaborated theory was used to develop a different hypothesis, which was supported by a further quantitative analysis of the original data set.

This research thus involved two cycles of induction and deduction. The first cycle was typical of quantitative research; it began with informal data collection and literature-based theorizing about how the display of positive emotion influences sales, and it ended with the statistical test of a hypothesis derived from this theory. The failure of the study to support the hypothesis forced the authors into a second cycle, beginning with a colleague's suggestion and continuing with a diverse array of qualitative data collection and analysis, which eventually led to the inductive development of a new conceptual framework that emphasized the reverse process: how store pace has a negative effect on the display of positive emotion. This conceptual framework was used to generate a new quantitative hypothesis, which was then tested statistically.

In this study, the quantitative and qualitative phases were relatively distinct. The qualitative phase was largely self-contained, and its purpose was almost exclusively to revise and develop the conceptual framework, incorporating a process model of how the pace of work affects displayed emotion. This framework was then used to generate a variance theory hypothesis that was tested with quantitative data. Figure 9.3 provides a design map of the study.

Figure 9.3. Design Map of Sutton and Rafaeli (1988, 1992) Study
Phase 1: Purposes: support their theory; advance their careers. Conceptual framework: variance theory of the effect of positive expression of emotion on sales. Research question: hypotheses derived from theory. Methods: large quantitative data set; multiple-regression analysis. Validity: statistical hypothesis testing.
Phase 2: Purposes: maintain consistency in work; communicate actual process of research. Conceptual framework: "Manhattan effect," a more complex process theory of how store pace affects expression of emotion. Research question: revised hypothesis; what is the process by which store pace affects the expression of emotion? Methods: case studies; interviews with managers and executives; informal observations in stores; working for a day as a store clerk; large quantitative data set; multiple-regression analysis. Validity: rich description; triangulation; statistical hypothesis testing.
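The variance-theory logic of the study's two quantitative phases can also be sketched in code. The simulation below is entirely our own, with hypothetical variable names and made-up data; the chapter does not report Sutton and Rafaeli's actual model specification. It shows only how a single data set could first contradict the original hypothesis and later support the revised one.

```python
# A hypothetical sketch of the two regression phases. The simulated data
# are constructed so that store pace suppresses displayed emotion (the
# "Manhattan effect") while also driving sales up.

import numpy as np

rng = np.random.default_rng(0)
n_stores = 200

store_pace = rng.normal(size=n_stores)                              # how busy each store is
displayed_emotion = -0.6 * store_pace + rng.normal(size=n_stores)   # busier -> less cheer
sales = 1.0 * store_pace + rng.normal(size=n_stores)                # busier -> more sales

def ols_slope(x: np.ndarray, y: np.ndarray) -> float:
    """Ordinary least-squares slope of y on x, with an intercept."""
    design = np.column_stack([np.ones_like(x), x])
    coefs, *_ = np.linalg.lstsq(design, y, rcond=None)
    return float(coefs[1])

# Phase 1 hypothesis: displayed positive emotion increases sales.
# The regression instead yields a consistently negative association.
print(f"sales regressed on emotion: {ols_slope(displayed_emotion, sales):+.2f}")

# Phase 2 hypothesis (after the qualitative work): store pace suppresses
# the display of positive emotion. The same data support this reversal.
print(f"emotion regressed on pace:  {ols_slope(store_pace, displayed_emotion):+.2f}")
```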
In other component studies, rather than shifting from one approach to another in sequence, the two approaches are used concurrently, although separately, and are integrated only in drawing conclusions. Trend (1978/1979) gave an account of such a study, an evaluation of an experimental federal housing subsidy program involving both quantitative and qualitative data collection and analysis. Trend described the study as a "naturalistic experiment" (p. 69), but it would more accurately be called a "pre-experiment" in Campbell and Stanley's (1963) typology because it did not involve a control group. Extensive quantitative data were collected on agency activities, expenses, demographic characteristics of clients, and housing quality, mainly through surveys. In addition, each site had an observer (usually an anthropologist) who prepared a qualitative case study of that site, using field observations, interviews, and documents. The intent was that program outcomes would be determined through analysis of the quantitative data, while the case studies would provide a holistic picture of program process (Trend, 1978/1979, p. 70).

However, this plan began to unravel when the conclusions of an observer at one site directly contradicted the results of the quantitative analysis of program outcomes at that site. While neither side doubted "the facts" produced by the other, the two interpretations of these facts differed radically. The agency conducting the evaluation sided with the quantitative results, and the observer was repeatedly told to rewrite his analysis to fit the quantitative conclusions. Finally, Trend and the observer made a sustained effort to get at what had really been going on, using both the quantitative and qualitative data. They eventually came up with a coherent process explanation for nearly all of the data, one that went well beyond either the quantitative or the initial qualitative conclusions and that revealed serious shortcomings in both accounts.

Although this study clearly fits into the "component" type of design, in that the quantitative and qualitative data were collected and analyzed separately and were combined only in developing conclusions, it also resembles the most developed subtype of integrated design described by Caracelli and Greene (1997), the transformative design. In such designs, the value commitments of different traditions are integrated, giving voice to different ideologies and interests in the setting studied. (In the interactive design model, these value commitments can form part of both the purposes and the conceptual framework.) The quantitative analysts tended to represent the views of the program managers and funders, while the observer was an advocate for the agency staff and clients. These differing value stances, as well as the separation of the quantitative and qualitative strands of the study, led to polarization and conflict; "each side held so tightly to its own views that it was impossible to brush aside the lack of congruence" (Trend, 1978/1979, p. 84). Trend concluded that multiple methods might not lead to an easy integration of findings and that "unanimity may be the hallmark of work in which other avenues to explanation have been closed off prematurely" (p. 68). If the discrepancy between the qualitative and quantitative accounts had been discovered earlier, or if the two approaches had been more closely integrated, then it is possible that the observer would have been subtly or overtly coerced into making his conclusions fit the "hard" data (p. 84).
Trend thus argued that "the proliferation of divergent explanations should be encouraged" (p. 68) but also that an effort should be made to develop an account that does justice to all of the conflicting perspectives.

A third study that initially appears "component-like," in that the quantitative and qualitative elements are conceptually distinct phases of the research, is the research described in Festinger, Riecken, and Schachter's (1956) book, When Prophecy Fails. This was a psychological study of an end-of-the-world cult and the consequences for cult members of the failure of its predictions. The study began with a variable-oriented theory and a hypothesis about the conditions under which disconfirmation of belief will paradoxically be followed by increased commitment. The data were collected entirely through participant observation; a number of researchers pretended to be converts to the cult and covertly amassed detailed descriptive notes on what happened as the day of judgment approached and then passed. However, to test the hypothesis, these observational data were analyzed primarily by categorizing members in terms of the degree of prior commitment and social support (the two key independent variables) and measuring changes in proselytizing (the indicator of subsequent commitment) following disconfirmation. Figure 9.4 depicts the design of the study.

Figure 9.4. Design Map of Festinger et al. (1956) Study
Purposes: create generalizable knowledge about the phenomenon studied; test predictions of theory. Conceptual framework: integrated variance and process theory of the conditions supporting belief following disconfirmation, based on historical research. Research question: hypothesis about the conditions leading to increased proselytizing following disconfirmation; questions about the meaning, processes, and context of the events studied (implicit). Methods: intensive involvement of researchers in the cult; covert participant observation; narrative fieldnotes of events; categorization of members in terms of the degree of prior commitment and social support; determining changes in proselytizing; comparison of two groups; inferences to the meaning of events for participants; rich descriptions of situational influences and processes; case analysis of all participants. Validity: quasi-experimental controls; ruling out alternative explanations; explaining exceptions to the predictions.

This study differs from a component study such as Sutton and Rafaeli's in that the "components" are different aspects of a single research design rather than separate quantitative and qualitative strands or phases of a larger study. At first glance, it seems to fit one of Patton's (1990) types of "methodological mixes"—experimental design, qualitative data, and statistical analysis (p. 191)—and would thus be considered a mixed model design. The main differences from Patton's type are that the study was a "natural" experiment (more accurately, a quasi-experiment) rather than a manipulated intervention and that the analysis was hypothesis testing, variable focused, quantitative, and based on prior analytical categories, but not specifically statistical due to the small number of participants. However, the design is more complex than this categorization suggests, and we want to analyze the study to reveal some of these complexities.
The purposes and explicit research questions for Festinger et al.'s (1956) study were predominantly quantitative: a goal of testing the predictions of their theory of how people with a strongly held belief respond to disconfirmation of that belief; a hypothesis, deductively generated from this theory, about the effect of social support following disconfirmation on the key measure of commitment (proselytizing); and the testing of this hypothesis, with the goal of creating generalizable knowledge. However, their conceptual framework addressed both the process by which the predicted outcome (disconfirmation leads to increased commitment) could occur and the variables that could influence this outcome, and some implicit process questions became apparent in the conclusions section.

In terms of methods, the study could be seen as a quasi-experiment, with a naturally occurring intervention, pre- and post-intervention data collection, and a comparison of two parts of the group that differed in the degree of social support. However, with the detailed qualitative data collection, the logic also resembled a qualitative case study. The research relationships and data collection involved covert participant observation, intensive involvement of the researchers in the cult, and narrative fieldnotes of events. It is unclear what formal qualitative analysis techniques, if any, were used. In the narrative of the study, the researchers made frequent inferences to the meaning of events for participants, and there were rich descriptions of situational influences and processes.

In the concluding chapter of their book, Festinger et al. (1956) first gave a case-by-case analysis of all participants in terms of the hypothesized preconditions (independent variables) and outcomes. Participants were then categorized in terms of these variables, and the authors tallied the confirmations and exceptions to the predictions and compared the two situations that differed in the key independent variable (social support). This argument is essentially quantitative. However, it is extensively supplemented by a process analysis of the sequence of events for each individual; this is used to explain apparent exceptions and to modify the hypotheses to some extent. The authors also made use of unanticipated outcomes (e.g., the persistence of predictions of disaster, the identification of visitors as spacemen) that were relevant to their conclusions.

This was a coherent and workable mixed methods design because the different components were compatible and complementary in this particular situation, not because they derived from a single paradigm or were consistent with a single set of assumptions. Testing Festinger et al.'s (1956) specific hypothesis (the primary aim of the study) would ideally have involved an experimental design. However, the nature of the phenomenon addressed by the theory made an experimental test of the hypothesis impossible. The only real alternative was a kind of "natural experiment," and one was dropped into the researchers' laps. The authors noted, somewhat apologetically, that the situation that was available to them precluded the sort of formal standardized methods that "the orthodoxy of social science" would normally require (pp. 248-249); consequently, the sampling and data collection were almost purely qualitative.
These consisted mainly of the use of participant observers, who gathered whatever data they could that related to the theory and research questions—including data on the meaning, context, and process of the group's activities—and produced a detailed narrative account of the events leading up to and following the disconfirmation of the group's predictions.

The crucial links in making this a coherent design were the analysis and validity procedures employed to connect the authors' qualitative data to their research questions, hypotheses, theories, and purposes. This was accomplished in two ways. One of these involved quantifying the qualitative data to adapt these to the logical requirements of hypothesis testing. The two groups of believers, which differed in the value of the major independent variable (social support), were compared in terms of the main outcome variable (extent of proselytizing) as well as on other indicators of the strength of commitment (a key mediating variable) both before and after the disconfirmation. If this were the entire analysis, however, the research results would have been far less convincing than they were, given that the number of participants (17) on whom sufficient data existed was quite small.

The study's conclusion that social support was essential to strengthened belief and proselytizing was buttressed by a second, qualitative analysis that examined the data on each group member for evidence relevant to the hypothesis and constructed a "mini-case study" of each member. These cases relied heavily on inductive identification of relevant data, attention to meaning and context, and a process account that elucidated the mechanisms by which belief was strengthened or weakened—all features that are characteristic of qualitative research. In addition, most of the report was such a "case study" of the entire phenomenon, revealing in rich detail how the group developed and how it responded to the disconfirmation of its predictions. These analyses reveal (or create) a set of implicit qualitative research questions about the meaning, processes, and context of the events studied that parallel the quantitative hypothesis and connect to qualitative aspects of the authors' conceptual framework. This dual analysis was facilitated by the conceptual framework for the study, which included both variance and process aspects of the phenomenon.

The validity of Festinger et al.'s (1956) conclusions is vulnerable to the fact that traditional experimental controls were impossible, that data were not collected in a structured way that would ensure reliability and facilitate comparison, and that the sample was quite small and self-selected. The researchers' main strategy for dealing with these validity issues was to explicitly identify plausible alternative explanations and to use their data to argue that these are not credible explanations for their results. This strategy draws on both qualitative and quantitative approaches.

We believe that few, if any, sequentially "mixed" designs of the type described by Patton (1990) maintain a complete sequential separation of the qualitative and quantitative elements of the research. As in this example, the different components tend to grow "tendrils" backward and forward, integrating both qualitative and quantitative elements into all components of the research.
This is understandable given the "resonance" among the components of each approach; qualitative data collection tends to generate qualitative analysis, research questions, conceptualizations, and validity strategies, and the same is true of quantitative components, while a qualitative component of the conceptual framework tends to generate qualitative research questions and methods.

Another approach that blurs the distinction between component and integrated designs is to conduct the quantitative and qualitative data collection strands in parallel, as in the studies by Trend (1978/1979) and Festinger et al. (1956), but to embed these within an overall experimental or quasi-experimental design, one that involves a deliberate intervention as well as establishing experimental and control conditions. This sort of design has been employed by Lundsgaarde, Fischer, and Steele (1981) and by Maxwell et al. (1986), among others. Such studies are classed as integrated designs by Caracelli and Greene (1997, p. 26) and would be considered mixed model designs by Tashakkori and Teddlie (1998) because they go beyond the mixing of methods in a strict sense. However, the actual methods components can range from largely separate (as in the study by Maxwell et al., 1986) to closely integrated (as in the study by Milgram, 1974 [discussed later]).

The study by Lundsgaarde et al. (1981), conducted during 1976-1977, illustrates some of the possible complexities of such designs. These researchers, all anthropologists, carried out what they described as an "ethnographic" study of the effect of a computerized medical information system (known as PROMIS) on the functioning of a hospital ward. They did this by studying two hospital wards prior to the implementation of PROMIS and then continuing this research while PROMIS was introduced on one of the wards, using the other ward as a control group. They described this ethnographic study as one "component" of the PROMIS evaluation; it was designed to complement the other components of the evaluation, which employed a quantitative analysis of medical records to determine the impact of PROMIS on the health care delivery process. The context in which PROMIS was implemented was politically charged, and the developers of the overall evaluation strategy were concerned that variation in human and situational variables might make it difficult to interpret the overall quantitative results. The goals of the ethnographic study were to document the events surrounding the implementation of the PROMIS system, and the experiences of the health care providers using this system, using a more descriptive and inductive approach so as to characterize the context in which the system was developed and demonstrated (Lundsgaarde et al., 1981, pp. 10-11).

However, the "ethnographic" component itself involved a mix of qualitative and quantitative elements. The purposes (described previously) were mainly qualitative but included the explicit comparison of the experimental and control wards so as to determine the effects of the PROMIS implementation on the experimental ward. The conceptual framework for the study was largely drawn from innovation theory (expressed in 20 "propositions" that were a mix of variance and process statements) and the sociology of medical practice.
No research questions were explicitly stated; although some process questions can be clearly inferred from the study's purposes, the overall evaluation was guided by a specific variance hypothesis about the effect of PROMIS on patient care behavior, a hypothesis that was tested by the quantitative components of the evaluation.

The ethnographic component relied heavily on participant observation, informal interviewing, and document analysis, and Lundsgaarde et al. (1981) presented an explicit defense of such qualitative methods in terms of the goals of context and meaning (p. 16). However, the study also included a questionnaire, a more structured interview, and a comparative observational assessment (following the introduction of PROMIS) of the amount of time spent generating and using medical records on the two wards, using matched pairs of residents or interns observed on a randomized schedule. (The latter task was required by the funding institution [p. 11] and forced a reallocation of much of the later qualitative data collection time from participant observation to in-depth interviews.) In addition, midway through the study, the observers on the two wards switched settings so as to gain a comparative perspective and to control for biases. Lundsgaarde et al. (1981) justified this mix of quantitative and qualitative methods as a means of triangulating data and resolving contradictions between data sources (p. 16).

The concerns of the evaluation planners were well-founded; the quantitative analysis of medical records found no statistically significant advantages of PROMIS over its manual counterpart, while the ethnographic study showed that "many of the clinicians who were required to use the system were unwilling participants in the experiment and even unsympathetic to many of the goals of those who developed it" and that "many of the human and organizational problems . . . could have been avoided, or at least neutralized, if the developers had paid more attention to contextual social variables affecting system users" (p. 2). The authors stated,

It is the unpredictability of the temporal characteristics of all innovations that presents researchers with the most thorny problems of analysis. The objective measurement of the rate of acceptance, and the estimation of the potential rate of diffusion, has proved the most difficult analytical problem in our study of the PROMIS innovation.

For this reason, they emphasized "the importance of a multifaceted and flexible research design for the study of the many social and operational problems created by the installation of [PROMIS]" (p. 9).

The presentation of the results of the study demonstrated a close integration of the quantitative and qualitative elements in drawing conclusions and addressing validity threats. For example, their discussion of the effect of PROMIS on house staff physicians (pp. 61-91) closely integrated the data from participant observations, qualitative interviews, and the systematic time-sampling observations of residents and interns. This presentation embedded the statistical analysis of the quantitative behavioral data in a descriptive account of these activities, one that clarifies the contextual variations in, and influences on, these behaviors.
They noted that the quantitative data did not support the widespread perception on the experimental ward that residents and interns spent more time entering medical data into patients' records, and the authors devoted considerable space to discussing possible reasons for this misperception, drawing on their interviews and observations (pp. 86-90).

While this evaluation superficially resembles a component design, with separate qualitative and quantitative components of the evaluation, a detailed examination reveals a much more integrated design. Some of this integration may initially have been externally imposed but was fully incorporated into the analysis, validity procedures, and conclusions. The triangulation of different methods was the result of not only the different purposes of the evaluation but also the validity concerns that would have threatened a purely quantitative study. The presence of quantitative elements in the ethnographic part of the study was partly the result of an implicit purpose (the researchers' need to satisfy the external funder) that had little intrinsic connection to the study's conceptual framework. However, these elements were closely integrated into the study's analysis, using a validity approach based on both quantitative and qualitative concepts (experimental controls and statistical tests, and a process approach to ruling out alternative explanations). Figure 9.5 provides a design map of this study.

Figure 9.5. Design Map of Lundsgaarde et al. (1981) Study
Purposes: determine impact of PROMIS on health care delivery practices; describe setting of PROMIS intervention; document events surrounding implementation and experiences of health care providers; document impact of PROMIS on behavior of users. Conceptual framework: theory on which PROMIS was based (quantitative component); theory of innovation (ethnographic component); sociological research on medical practice. Research question: hypotheses about effect of PROMIS on users' practices (implicit); questions about context of innovation and users' experiences. Methods: 2 wards, experimental and control; pre/post implementation measures; comparative behavioral observations; structured questionnaire; data from clinical records; statistical analysis; participant observation; interviews; documents. Validity: experimental controls (pre/post control group design); statistical hypothesis testing; triangulation of methods and data sources; ruling out alternative explanations.

The quantitative and qualitative elements can be even more closely integrated than in this example. Milgram's (1974) Obedience to Authority is a report of an experimental study (carried out between 1960 and 1963) of how people respond when they are ordered by authorities to inflict pain and possible serious harm on others. Milgram and his associates designed a series of laboratory situations in which participants were deceived into believing that they were part of a study of the effects of punishment on learning and were then told to give increasingly severe electrical shocks to a supposed "subject" who was actually an accomplice of the researchers and who feigned pain and eventual refusal to cooperate. Unlike Festinger et al. (1956), Milgram (1974) explicitly grounded this study in the experimental tradition in social psychology (p. xv).
The researchers employed numerous different experimental conditions designed to determine the effect of different variables on the degree of obedience (the dependent variable), and they collected quantitative data about the level of shock that participants administered (the main measure of obedience) in each of the different conditions. However, the researchers were also concerned with the process by which people responded to the researchers' directions: how the participants made sense of and reacted to these directions and why they complied with or resisted the orders. In introducing the individual case studies, Milgram (1974) stated,

From each person in the experiment we derive one essential fact: whether he has obeyed or disobeyed. But it is foolish to see the subject only in this way. For he brings to the laboratory a full range of emotions, attitudes, and individual styles. . . . We need to focus on the individuals who took part in the study not only because this provides a personal dimension to the experiment but also because the quality of each person's experience gives us clues to the nature of the process of obedience. (p. 44)

The researchers covertly recorded the participants' behavior during the experiment, interviewed some participants at length after the experiment was over to determine their reasons for compliance or refusal, and sent a follow-up questionnaire to all participants that allowed expression of their thoughts and feelings. The analysis of these data is primarily qualitative but is closely integrated with the quantitative data. The results chapters of the book present a fine-grained blending of quantitative tables and graphs with observational notes, excerpts from recorded observations and interviews, and case studies of particular participants' responses to the experimental situation. In addition, the theoretical model developed from the study is not a pure "variance" model, restricted to the different variables that affect obedience; as in the study by Festinger et al. (1956), it incorporates extensive discussion of the social processes and subjective interpretations through which obedience and resistance to authority develop. And in discussing potential validity threats to the study's conclusions, Milgram (1974) used both the quantitative results from the experimental manipulations and qualitative data from the observations to rule out these threats.

In this study, experimental intervention, laboratory controls, and quantitative measurement and analysis were integrally combined with qualitative data collection and analysis to answer both qualitative and quantitative research questions. Although Milgram himself said virtually nothing explicitly about the integration of quantitative and qualitative elements in this study, Etzioni (1968) claimed that this research "shows that the often stated opposition between meaningful, interesting humanistic study and accurate, empirical quantitative research is a false one: The two perspectives can be combined to the benefit of both" (cited in Milgram, 1974, p. 201). Figure 9.6 provides a design map of the study.

Figure 9.6. Design Map of Milgram (1974) Study
Purposes: understand people's willingness to obey immoral commands. Conceptual framework: mixed variance and process theory of obedience to authority. Research question (largely implicit): what effect do different variables have on obedience? what is the process by which obedience and resistance to authority are generated? why do subjects comply with or resist orders? how do they make sense of the experimental situation, and of their obedience? Methods: experimental manipulation of conditions; covert observation and recording of subjects' behavior; qualitative interviews with subjects; inferences to the meaning of events for participants; case analysis of some participants. Validity: experimental controls; triangulation of methods; ruling out alternative explanations.

♦ Conclusions and Implications

In this chapter, we have tried to show the value of a broader and more interactive concept of research design for understanding mixed methods research.
We have also argued for a broader and more fundamental concept of the qualitative-quantitative distinction, one that draws on the idea of two different approaches to explanation as well as two different types of data. Through detailed examination of particular studies, we have tried to demonstrate how these tools can be used to attain a better understanding of mixed methods research. We draw several implications from these arguments and examples.

First, the logic-in-use of a study can be more complex, and can more closely integrate the qualitative and quantitative elements of the study, than an initial reading of the report would suggest. The studies by Blumstein and Schwartz (1983), Lundsgaarde et al. (1981), Festinger et al. (1956), and Milgram (1974) all involved a greater integration of qualitative and quantitative approaches than one would guess from their explicit descriptions of their methods, and the two other studies presented (Sutton & Rafaeli, 1988, 1992; Trend, 1978/1979) may be exceptions only because the authors had published candid in-depth accounts of their studies' designs and methods, including aspects rarely addressed in research reports.

Second, the interactive design model that we have presented can be a valuable tool in understanding the integration of qualitative and quantitative approaches and elements in a particular study. For example, the conceptual framework of a study may be largely variance theory (Sutton & Rafaeli, 1988, 1992, Phase 1), largely process theory (Sutton & Rafaeli, 1988, 1992, Phase 2), a combination of both types of theories (Trend, 1978/1979; Lundsgaarde et al., 1981), or an integration of the two in a single theory (Festinger et al., 1956; Milgram, 1974).

Third, there is considerable value in a detailed understanding of how qualitative and quantitative methods are actually integrated in particular studies. For example, the degree of integration of qualitative and quantitative elements in the conceptual framework, analysis, or validity components of a study might not correspond to the integration of data collection methods. The study by Lundsgaarde et al. (1981) has more integration in the methods and validity components than in the conceptual framework, while the study by Festinger et al. (1956) has more integration in the conceptual framework and validity than in methods. In addition, the actual integration among different components of the design is often essential to understanding how a particular combination of quantitative and qualitative elements is or is not coherent.
For example, the integrated process/variance conceptual framework of Milgram's (1974) study played a key role in the integration of methods and analysis.

Fourth, we do not believe that typological models by themselves provide adequate guidance for designing mixed methods research. The examples and analyses of specific studies provided by Greene and Caracelli (1997) and by Tashakkori and Teddlie (1998) are essential complements to their typologies; these provide both a concrete realization of how the types play out in practice and an illustration of aspects of mixed methods design that are not captured in the typology.

Fifth, we also believe, however, that there is no easy generalizability or transferability of the analysis of particular studies; the actual integration of the components of a study is influenced by a wide range of conditions and factors and is not dictated by the category in which it fits. The design model that we have presented is a tool for designing or analyzing an actual study rather than a template for designing a particular type of study. In a sense, we are presenting a more qualitative approach to mixed methods design, emphasizing particularity, context, holistic understanding, and the process by which a particular combination of qualitative and quantitative elements plays out in practice, in contrast to a more quantitative approach based on categorization and comparison. As with quantitative and qualitative approaches in general, we advocate an integration of the two approaches.

♦ References

Becker, H. S. (1990). Generalizing from case studies. In E. Eisner & A. Peshkin (Eds.), Qualitative inquiry in education: The continuing debate (pp. 233-242). New York: Teachers College Press.

Bernstein, R. J. (1992). The new constellation: The ethical-political horizons of modernity-postmodernity. Cambridge, MA: MIT Press.

Blumstein, P., & Schwartz, P. (1983). American couples. New York: Simon & Schuster.

Brewer, J., & Hunter, A. (1989). Multimethod research: A synthesis of styles. Newbury Park, CA: Sage.

Bryman, A. (1988). Quantity and quality in social research. London: Unwin Hyman.

Campbell, D. T., & Stanley, J. C. (1963). Experimental and quasi-experimental designs for research on teaching. In N. L. Gage (Ed.), Handbook of research on teaching (pp. 171-246). Chicago: Rand McNally. (Reprinted in 1966 as Experimental and quasi-experimental designs for research)

Caracelli, V. J., & Greene, J. C. (1997). Crafting mixed-method evaluation designs. In J. C. Greene & V. J. Caracelli (Eds.), Advances in mixed-method evaluation: The challenges and benefits of integrating diverse paradigms (New Directions for Evaluation, No. 74, pp. 19-32). San Francisco: Jossey-Bass.

Cook, T. D., & Campbell, D. T. (1979). Quasi-experimentation: Design and analysis issues for field settings. Boston: Houghton Mifflin.

Creswell, J. W. (1994). Research design: Quantitative and qualitative approaches. Thousand Oaks, CA: Sage.

Etzioni, A. (1968). A model of significant research. International Journal of Psychiatry, 6, 279-280.

Festinger, L., Riecken, H. W., & Schachter, S. (1956). When prophecy fails. Minneapolis: University of Minnesota Press.

Grady, K. A., & Wallston, B. S. (1988). Research in health care settings. Newbury Park, CA: Sage.

Greene, J. C., & Caracelli, V. J. (1997). Defining and describing the paradigm issue in mixed-method evaluation. In J. C. Greene & V. J. Caracelli (Eds.), Advances in mixed-method evaluation: The challenges and benefits of integrating diverse paradigms (New Directions for Evaluation, No. 74, pp. 5-17). San Francisco: Jossey-Bass.
Guba, E. G., & Lincoln, Y. S. (1989). Fourth generation evaluation. Newbury Park, CA: Sage.

Harre, R. (1972). The philosophies of science. Oxford, UK: Oxford University Press.

Harre, R., & Madden, E. H. (1975). Causal powers: A theory of natural necessity. Oxford, UK: Basil Blackwell.

Kaplan, A. (1964). The conduct of inquiry. San Francisco: Chandler.

Kidder, L. H., & Fine, M. (1987). Qualitative and quantitative methods: When stories converge. In M. M. Mark & R. L. Shotland (Eds.), Multiple methods in program evaluation: New directions in program evaluation (pp. 57-75). San Francisco: Jossey-Bass.

Koontz, V. (1976). Innovation: Components of bundles. Medical Care, 7(4).

LeCompte, M. D., & Preissle, J. (1993). Ethnography and qualitative design in educational research (2nd ed.). San Diego: Academic Press.

Lundsgaarde, H. P., Fischer, P. J., & Steele, D. J. (1981). Human problems in computerized medicine (Publications in Anthropology, No. 13). Lawrence: University of Kansas.

Martin, J. (1982). A garbage can model of the research process. In J. E. McGrath, J. Martin, & R. Kulka (Eds.), Judgment calls in research (pp. 17-39). Thousand Oaks, CA: Sage.

Maxwell, J. A. (1990). Response to "Campbell's Retrospective and a Constructivist's Perspective." Harvard Educational Review, 60, 504-508.

Maxwell, J. A. (1992). Understanding and validity in qualitative research. Harvard Educational Review, 62, 279-300.

Maxwell, J. A. (1996). Qualitative research design: An interactive approach. Thousand Oaks, CA: Sage.

Maxwell, J. A. (1998). Using qualitative research to develop causal explanations. Working paper, Harvard Project on Schooling and Children, Harvard University.

Maxwell, J. A., & Mohr, L. B. (1999). Quantitative and qualitative: A conceptual analysis. Paper presented at the annual meeting of the American Evaluation Association, Orlando, FL.

Maxwell, J. A., Sandlow, L. J., & Bashook, P. G. (1986). Combining ethnographic and experimental methods in evaluation research: A case study. In D. M. Fetterman & M. A. Pitman (Eds.), Educational evaluation: Ethnography in theory, practice, and politics (pp. 121-143). Newbury Park, CA: Sage.

McCawley, J. (1982). Thirty million theories of grammar. Chicago: University of Chicago Press.

Merriam-Webster. (1984). Webster's ninth new collegiate dictionary. Springfield, MA: Author.

Miles, M. B., & Huberman, A. M. (1994). Qualitative data analysis: An expanded sourcebook (2nd ed.). Thousand Oaks, CA: Sage.

Milgram, S. (1974). Obedience to authority: An experimental view. New York: Harper & Row.

Mohr, L. (1982). Explaining organizational behavior. San Francisco: Jossey-Bass.

Mohr, L. (1995). Impact analysis for program evaluation (2nd ed.). Thousand Oaks, CA: Sage.

Mohr, L. (1996). The causes of human behavior: Implications for theory and method in the social sciences. Ann Arbor: University of Michigan Press.

Patton, M. Q. (1980). Qualitative evaluation and research methods. Beverly Hills, CA: Sage.

Patton, M. Q. (1990). Qualitative evaluation and research methods (2nd ed.). Newbury Park, CA: Sage.

Pawson, R., & Tilley, N. (1997). Realistic evaluation. London: Sage.

Pitman, M. A., & Maxwell, J. A. (1992). Qualitative approaches to evaluation. In M. D. LeCompte, W. L. Millroy, & J. Preissle (Eds.), The handbook of qualitative research in education (pp. 729-770). San Diego: Academic Press.
Rabinowitz, V. C., & Weseen, S. (2001). Power, politics, and the qualitative/quantitative debates in psychology. In D. L. Tolman & M. Brydon-Miller (Eds.), From subjects to subjectivities: A handbook of interpretive and participatory methods (pp. 12-28). New York: New York University Press.

Reichardt, C. S., & Cook, T. D. (1979). Beyond qualitative versus quantitative methods. In T. D. Cook & C. S. Reichardt (Eds.), Qualitative and quantitative methods in evaluation research (pp. 7-32). Beverly Hills, CA: Sage.

Robson, C. (1993). Real world research: A resource for social scientists and practitioner-researchers. Oxford, UK: Blackwell.

Rogers, E. M. (1995). Diffusion of innovations (4th ed.). New York: Free Press.

Salmon, W. C. (1984). Scientific explanation and the causal structure of the world. Princeton, NJ: Princeton University Press.

Salmon, W. C. (1989). Four decades of scientific explanation. In P. Kitcher & W. C. Salmon (Eds.), Scientific explanation (pp. 3-219). Minneapolis: University of Minnesota Press.

Salmon, W. C. (1998). Causality and explanation. New York: Oxford University Press.

Sayer, A. (1992). Method in social science: A realist approach. London: Routledge.

Sayer, A. (2000). Realism and social science. Thousand Oaks, CA: Sage.

Staw, B. M. (1992). Do smiles lead to sales? Comments on the Sutton/Rafaeli study. In P. Frost & R. Stablein (Eds.), Doing exemplary research (pp. 136-142). Newbury Park, CA: Sage.

Sutton, R. I., & Rafaeli, A. (1988). Untangling the relationship between displayed emotions and organizational sales: The case of convenience stores. Academy of Management Journal, 31, 461-487.

Sutton, R. I., & Rafaeli, A. (1992). How we untangled the relationship between displayed emotions and organizational sales: A tale of bickering and optimism. In P. Frost & R. Stablein (Eds.), Doing exemplary research (pp. 115-128). Newbury Park, CA: Sage.

Tashakkori, A., & Teddlie, C. (1998). Mixed methodology: Combining qualitative and quantitative approaches (Applied Social Research Methods, No. 46). Thousand Oaks, CA: Sage.

Trend, M. G. (1979). On the reconciliation of qualitative and quantitative analyses: A case study. In T. D. Cook & C. S. Reichardt (Eds.), Qualitative and quantitative methods in evaluation research (pp. 68-86). Newbury Park, CA: Sage. (Original work published 1978)

Tukey, J. W. (1977). Exploratory data analysis. Reading, MA: Addison-Wesley.

Weiss, R. S. (1994). Learning from strangers: The art and method of qualitative interviewing. New York: Free Press.

Yin, R. K. (1984). Case study research: Design and methods. Beverly Hills, CA: Sage.