Defining and Teaching Evaluative Thinking: Insights from Research on Critical Thinking¹

Jane Buckley
Cornell University

Thomas Archibald
Virginia Tech

Monica Hargraves and William M. Trochim
Cornell University

Abstract

Evaluative thinking (ET) is an increasingly important topic in the field of evaluation, particularly among people involved in evaluation capacity building (ECB). Yet it is a construct in need of clarification, especially if it is to be meaningfully discussed, promoted, and researched. To that end, we propose that ET is essentially critical thinking applied to contexts of evaluation. We argue that ECB, and the field of evaluation more generally, would benefit from an explicit and transparent appropriation of well-established concepts and teaching strategies derived from the long history of work on critical thinking. In this paper, based on previous work in the fields of education, cognitive science, and critical thinking, as well as on our experience as ECB practitioners, we propose several guiding principles and specific strategies for teaching ET that draw directly from research on the teaching of critical thinking.

Keywords: evaluative thinking, critical thinking, evaluation capacity building

¹ This paper is a preprint version of a paper published in the American Journal of Evaluation. To refer to or cite the published version, please use: Buckley, J., Archibald, T., Hargraves, M., & Trochim, W. M. (2015). Defining and teaching evaluative thinking: Insights from research on critical thinking. American Journal of Evaluation, 36(3), 375-388. doi:10.1177/1098214015581706. This research was supported by National Science Foundation grant number 0814364. Correspondence regarding this article should be addressed to Jane Buckley, Cornell Office for Research on Evaluation, 433 Kennedy Hall, Ithaca, NY 14853. Phone: (607) 255-0397; Fax: (607) 255-1150.

Defining and Teaching Evaluative Thinking: Insights from Research on Critical Thinking

In the field of evaluation, especially among people involved in evaluation capacity building (ECB), evaluative thinking is increasingly recognized as a key component of evaluation capacity and high-quality evaluation practice (e.g., Baker, 2011; Baker & Bruner, 2012; Bennett & Jessani, 2011; Carden & Earl, 2007; Davidson, Howe, & Scriven, 2004; Griñó, Levine, Porter, & Roberts, 2014; King, 2007; McKegg & King, 2013; Patton, 2005, 2011; Preskill, 2008, 2013; Taut, 2007; Volkov, 2011; Wind & Carden, 2010). However, despite that shared recognition, definitions of evaluative thinking (ET) are varied and sometimes ambiguous.
In the absence of a clear and shared definition, the phrase risks becoming an empty buzzword—Patton warns that "As attention to the importance of evaluation culture and evaluative thinking has increased, we face the danger that these phrases will become vacuous through sheer repetition and lip service" (2010, p. 162). In order for ET to become a useful construct in the field—one that can be meaningfully discussed, promoted, taught, measured, and researched—it is important that the evaluation community establish a clear and accepted definition. Others in the field are working on this. For example, Anne Vo (2013) conducted a Delphi study with leaders from the field to clarify the construct and propose a definition. Once a good definition is established, and given the agreed-upon importance of ET, the immediate challenge becomes how to teach and promote it. This paper takes up both the definitional and pedagogical challenges.

This paper's central thesis is that ET is, in essence, critical thinking applied to contexts of evaluation. We argue that ECB initiatives—and the field of evaluation more generally—would benefit from an explicit and transparent appropriation of well-established, powerful concepts and teaching strategies developed over centuries by professional educators, learning theorists, philosophers, and cognitive scientists. As early as the 1600s, Sir Francis Bacon was describing a construct that would later become recognized as critical thinking (Spedding, 1868). Based on previous work in the fields of education, cognitive science, and critical thinking, as well as on our experience as ECB practitioners, in this paper we propose several guiding principles and specific strategies for teaching ET that draw directly from research on the teaching of critical thinking. In particular, we draw upon the work of Brookfield (1987, 2012), Facione (1990, 2000), and McPeck (1990), among others.

The relationship between evaluative and critical thinking is not entirely new territory. As Schwandt (2008) reminds us, Michael Scriven has long insisted on the central role of critical thinking in evaluation. Somewhat surprisingly, despite Scriven's thirty years of insistence, critical thinking is discussed only rarely in the evaluation literature. The specific contributions of this article are to explicitly connect critical thinking to the increasingly important notion of evaluative thinking and to present specific principles and practices that can help evaluators and ECB facilitators more consciously and intentionally build ET into their work. We begin by reviewing the use of the term "evaluative thinking" and its existing definitions in the evaluation literature.

Defining Evaluative Thinking

Our initially casual perception that the term "evaluative thinking" has been growing in popularity is confirmed by a search of the evaluation literature. We searched seven peer-reviewed evaluation journals—American Journal of Evaluation, Canadian Journal of Program Evaluation, Evaluation and Program Planning, Evaluation Review, Evaluation, Journal of MultiDisciplinary Evaluation, and New Directions for Evaluation—by querying each journal's website (or publisher's website) using the search term "evaluative thinking." We found that since the late 1990s, the term has been used in the published evaluation literature with increasing frequency. For instance, other than one 1981 outlier, the term did not appear at all in the evaluation literature until 1996, after which time it appeared roughly once per year until 2001, when its usage began to increase dramatically. By 2007, evaluative thinking was mentioned in 12 different articles, with the tally reaching its highest number, 15, in 2013. The prevalence of the term "evaluative thinking" among presentations at the American Evaluation Association annual meeting in October 2013—with 18 listings appearing in a search of the on-line program—also underscores its current popularity.
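For readers interested in the mechanics, a tally like the one above can be approximated with a short script once each journal's search results have been exported. The following is a minimal sketch under an assumed file layout (one CSV of search hits per journal, each row carrying a "year" column); the file names and column name are hypothetical illustrations, not artifacts of the search procedure described above.

    from collections import Counter
    import csv

    # Minimal sketch, assuming each journal's "evaluative thinking" search
    # hits were exported to a CSV with one row per article and a "year"
    # column. (File layout and names are hypothetical illustrations.)
    def tally_by_year(csv_paths):
        counts = Counter()
        for path in csv_paths:
            with open(path, newline="", encoding="utf-8") as f:
                for row in csv.DictReader(f):
                    counts[int(row["year"])] += 1
        return counts

    if __name__ == "__main__":
        exports = ["aje_hits.csv", "cjpe_hits.csv"]  # hypothetical exports
        for year, n in sorted(tally_by_year(exports).items()):
            print(f"{year}: {n} article(s) mentioning the term")

Counting rows per year in this way yields a frequency series like the one reported above.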
The increasing use of the term is not associated, however, with convergence on a definition. In part, this is because in the majority of cases the term's use seems to be purely rhetorical—in one case, the term appears in an article's title but is then not mentioned anywhere in the article itself (Nelson & Eddy, 2008). Some discussions focus on ET at the level of individuals; some discuss it more at an organizational level. Of the 103 articles we found, only 11 explicitly defined or explained ET.

Among those articles that do define or explain ET, some common themes are noticeable. For example, in many cases, ET is associated with the notion of process use (King, 2007; Patton, 2007). Another common theme is that ET should not be restricted solely to evaluation-specific activities, but rather should infuse all of an organization's work processes (Baker & Bruner, 2012; Bennett & Jessani, 2011; Carden & Earl, 2007; King, 2007; Patton, 2005; Taut, 2007). Often, descriptions or definitions of ET liken it to reflective practice (Argyris & Schön, 1978; Schön, 1983): ET is a type of reflective practice that integrates the same skills that characterize good evaluation throughout all of an organization's work practices (Baker, 2011; Baker & Bruner, 2012); it is "questioning, reflecting, learning, and modifying … conducted all the time. It is a constant state-of-mind within an organization's culture and all its systems" (Bennett & Jessani, 2011, p. 24); it is "about getting people in organizations to look at themselves more critically through disciplined processes of systematic inquiry…about helping people ask these questions and then go out and seek answers" (Preskill & Boyle, 2008, p. 148); and it is characterized by "a willingness to do reality testing, to ask the question: how do we know what we think we know? … It's an analytical way of thinking that infuses everything that goes on" (Patton, 2005, para. 10). And while Carol Weiss (1998) did not necessarily use the term "evaluative thinking," she evokes the idea, describing one side benefit of collaborative evaluation as "helping program people reflect on their practice, think critically, and ask questions about why the program operates as it does. They learn something of the evaluative cast of mind—the skeptical questioning point of view, the perspective of the reflective practitioner" (p. 25, emphasis added).

Davidson, Howe, and Scriven (2004) describe ET as a combination of commitment and expertise, comprised of evaluative know-how and an evaluative attitude. Elsewhere, it is similarly positioned in relation to the three types of objectives commonly associated with ECB: knowledge or cognitive outcomes; behavioral or skill-related outcomes; and affective outcomes (Preskill, 2008; Taut, 2007).
Discussing ET as an important component of the work of internal evaluators, Volkov (2011) describes it as a process, a mind-set, a capacity, and "a person's or organization's ability, willingness, and readiness to look at things evaluatively and to strive to utilize the results of such observations" (p. 38).

However, despite these helpful accounts of ET, a degree of imprecision remains regarding the construct. For example, some descriptions of ET essentially use the term in the definition, relying on closely related notions such as "an evaluative attitude" and "readiness to look at things evaluatively" to define ET. Also, many descriptions of ET overlap significantly with descriptions of "evaluative doing," or the practice of evaluation: ET involves "asking questions of substance, determining what data are required to answer specific questions, collecting data using appropriate strategies, analyzing collected data and summarizing findings, and using the findings" (Baker & Bruner, 2012, p. 1). For Baker and Bruner (2012), what sets ET apart from commonly understood definitions of evaluation practice is that, as mentioned above, ET ideally occurs throughout all of an organization's work practices, not just formal evaluation work. Yet, other than its potential (and desired) pervasiveness, what distinguishes the construct? What's more, while each of the contributors to this growing conversation on ET offers insightful and helpful perspectives on one or more aspects of the construct, a comprehensive picture of ET is lacking. As we propose in this article's introduction, the absence of a widely agreed-upon definition prevents ET from being meaningfully discussed, promoted, taught, measured, and researched, thus limiting the construct's utility to the field.

This rich discussion of what ET is can be substantially advanced by drawing on the long history of work on the closely related construct of critical thinking. The cognitive attributes now associated with critical thinking have been discussed for centuries. In 1603, long before the term "critical thinking" had been coined, Sir Francis Bacon listed a number of favorable mental traits such as a "desire to seek, patience to doubt, fondness to meditate, slowness to assert, readiness to reconsider, carefulness to dispose and set in order; and … [hatred for] every kind of imposture" (Spedding, 1868, p. 85); these traits are not far from contemporary definitions of critical thinking. One of the earliest usages of the actual term appears to have been by John Dewey (1910) in How We Think, where he writes, "The essence of critical thinking is suspended judgment; and the essence of this suspense is inquiry to determine the nature of the problem before proceeding to attempts at its solution" (p. 74). More recently, especially in the 1980s, definitions of and debates on critical thinking have proliferated (Brookfield, 1987; Ennis, 1990; Facione, 1990, 2000; McPeck, 1984, 1990; Paul, 1993; Scriven, 1987). In the sections below, we present selective findings from our review of the critical thinking literature, paying special attention to implications for how to teach or promote critical thinking.
With inspiration from these authors on critical thinking, as well as from our colleagues in the field of evaluation, and guided by our own experience as ECB practitioners (Archibald & Buckley, 2013; Archibald, Buckley, & Trochim, 2011; Trochim et al., 2012), we propose the following succinct definition:

Evaluative thinking is critical thinking applied in the context of evaluation, motivated by an attitude of inquisitiveness and a belief in the value of evidence, that involves identifying assumptions, posing thoughtful questions, pursuing deeper understanding through reflection and perspective taking, and informing decisions in preparation for action.

In this definition, we intend "context of evaluation" to include both formal evaluations and informal evaluative efforts that inform and improve actions with regard to any organizational, programmatic, or other purposeful undertaking; as such, our definition of ET is in agreement with other descriptions of ET (e.g., Baker & Bruner, 2005; Patton, 2005) that emphasize the importance of ET throughout all of an organization's functions.

As part of the effort to develop a comprehensive picture of ET, one critical challenge is to articulate the difference between ET and simply doing good evaluation. In our view, the two are not the same, and it is important to keep the constructs distinct. To be clear, by foregrounding this distinction between evaluative thinking and evaluative doing, we in no way mean to promote thinking evaluatively simply for thinking's sake. To the contrary, our view is that ET—in combination with evaluation knowledge and skills—is essential for high-quality evaluation practice. When evaluators, program planners, researchers, and educators all think evaluatively, and they are all engaged in the evaluation process on some level, evaluations are well planned, implementation is sustained, and results are used in support of program evolution. Without evaluative thinking, evaluation stagnates—those responsible for formal evaluation planning and implementation lack motivation, resist change, miss critical connections, and make less than ideal decisions. In essence, evaluative thinking is the substrate that allows evaluation to grow and thrive.

Not everyone in an organization or on a program team needs to be an evaluator or to do evaluation work. However, if everyone involved in planning, implementing, and evaluating a program is an evaluative thinker, the program and its evaluation have the best chance for success. According to our definition, evaluation can still take place without evaluative thinking. For example, data can be collected and analyzed; but without evaluative thinking, the data collected may not be useful, and the individuals doing the collecting would not be well poised to use and incorporate unexpected developments, or to adapt and revise evaluation plans in the face of setbacks or surprises in the real world. In short, ET is a protective factor against the risk of senseless, mindless evaluation (Patton, 2011; Volkov, 2011).

Operationalizing Evaluative Thinking for Evaluation Capacity Building

In the field of ECB, approaches to teaching evaluation methods have been widely described (Alkin & Christie, 2002; Darabi, 2002; Febey & Coyne, 2007; Kelley & Kaczynski, 2008; Lee, Wallace, & Alkin, 2007; Oliver, Casiraghi, Henderson, Brooks, & Mulsow, 2008). In contrast, methods for teaching evaluative thinking are described very rarely, if at all.
In this section we offer a theoretical framework and specific strategies designed to teach, facilitate, and promote evaluative thinking in individuals as well as in groups or organizations. It is important to note that these strategies are intended to be used in addition to existing ECB efforts designed to promote knowledge of evaluation methods, or in situations where knowledge of evaluation methods is present but evaluative thinking—and therefore sustained, high-quality, intrinsically motivated evaluation work—is absent.

Theoretical foundations for promoting evaluative thinking

The principles and strategies set forth in this paper are firmly rooted in the knowledge base provided by the cognitive science and education literatures. In this section a sample of these literatures is summarized, with particular attention to understanding the challenges that are inherent in trying to instill and cultivate evaluative thinking in learners. The discussion is then distilled into five guiding principles for promoting evaluative thinking.

In general, modern cognitive and education research focuses on the idea that people construct new knowledge and understanding based on what they already know and believe (Brookfield, 1986; Piaget, 1978; Vygotsky, 1978). In the context of efforts to promote ET, this "constructivist" theory suggests that learning will be strongest if learners move from their current knowledge and beliefs toward the knowledge and skills involved in ET. Accordingly, communication between teacher and learner about current knowledge and beliefs can be useful in establishing the starting point for learning. The constructivist approach also emphasizes the primary role of the learner in the process of learning. Rather than a unidirectional relationship in which the teacher provides knowledge to the student, the teacher's role is instead to create opportunities for the learner to construct her or his own knowledge through practice. This approach to teaching and learning may require more time and intrinsic motivation on the part of the learner. However, constructivists argue that the new knowledge and skills that result will be more deeply understood and sustained in their use.

One of the particular contributions of cognitive science research has been the identification of types of thinking (e.g., categorization, causal thinking, analysis), levels of thinking (e.g., remembering, applying, creating), and the thinking skills associated with those levels. For example, education researchers focused on cognition refer to something called "evaluativist" level thinking (Kuhn, 2005; Kuhn, Cheney, & Weinstock, 2000). Evaluativist level thinking is defined by these researchers as one of the highest levels of thinking one can do. An evaluativist level thinker believes that: assertions are "judgments that can be evaluated and compared according to criteria of argument and evidence;" knowledge is "generated by human minds, uncertain, but susceptible to evaluation;" and critical thinking is "valued as a vehicle that promotes sound assertions and enhances understanding" (Kuhn, 2005, p. 31). Essentially, evaluativist level thinking is the same as evaluative thinking uncoupled from formal evaluation. Cognitive skills that characterize evaluativist level thinking are examples of what Bloom (1956) refers to as "higher-order" thinking skills.
This means that they involve the coordinated use of several "lower-order" thinking skills such as recall, comprehension, and application. Skill-building becomes an important consideration when cognition is viewed this way, because humans are not born with an ability to engage in high-level critical thinking; rather, this is an ability that has to be cultivated or acquired. David Perkins makes an analogy to physical activity: "Everyday thinking, like ordinary walking, is a natural performance we all pick up. But good thinking, like running the 100 meter dash or rock climbing, is a technical performance, full of artifice" (as cited in Hunkins, 1995, p. 17). As a skill, evaluative thinking must be learned and practiced. The learner must have some theoretical knowledge of evaluative thinking and the skills it involves in order to practice. Then, just as with any physical skill, the more practicing the learner does, the better the thinking will be. And, like most things that require practice, there are no shortcuts or tricks that will make someone an instantly better thinker (Ericsson & Charness, 1994). Also, as with any skill, evaluative thinking must be developed incrementally, so that the context and ways in which it is used become more complex and challenging over time, thereby gradually increasing the skill level of the learner (Brookfield, 2012).

Another reason why critical thinking can be such a difficult skill to develop is that the brain has natural tendencies toward error, bias, and "blindspots," particularly in a subconscious effort to maintain an existing belief. This tendency to make evidence subservient to belief, rather than the other way around, is known as "belief preservation" or "confirmation bias." To compensate for this influence, good thinkers must use another high-level thinking skill, namely analysis and awareness of one's own thinking, or "metacognition," to recognize and reflect on their own belief preservation tendencies (Lord, Ross, & Lepper, 1979).

Finally, just as it is important to use new knowledge in several different contexts in order for it to deepen and be retained, so is it important to use thinking skills to solve a variety of different problems in order for the thinking skills to continue to develop (Halpern, 1998). Though thinking skills can be used in a wider variety of situations than any particular piece of knowledge, this also means that there is more territory to which the skill needs to be transferred in order to fully develop. Building connections within the brain is essential. Articulating, or even illustrating, the critical thinking being applied to a new problem can help solidify these connections. Moreover, it is important that ET be practiced routinely, intentionally, and transparently. Often, ECB professionals work to promote ET without naming it or announcing their intention. The power of stating, out loud, an intention to practice ET is that it allows the learner's brain to quickly queue up past experience with that skill and, in turn, makes the practice more productive (Ericsson & Charness, 1994). In addition, labeling ET practice out loud contributes to the creation of an intentional learning environment.

Guiding principles for promoting evaluative thinking

Based on the research literature summarized above, we have established the following five principles that should guide any effort to promote evaluative thinking.
It is important to note that these guiding principles are directed at ECB practitioners, teachers of evaluation, professional development facilitators, and anyone else involved in intentional educational efforts to promote ET—people to whom we refer below as "promoters."

I. Promoters of evaluative thinking should be opportunistic about engaging learners in evaluative thinking processes in a way that builds on and maximizes intrinsic motivation (Bransford, Brown, & Cocking, 1999; Brookfield, 2012; Piaget, 1978; Vygotsky, 1978). For instance, if staff members in an organization dislike evaluation, yet demonstrate intrinsic motivation to critically reflect on their program's successes and failures as they drive back to the office from a program site together, ET promotion should focus on those naturally occurring discussions as a key starting point.

II. Promoting evaluative thinking should incorporate incremental experiences, following the developmental process of "scaffolding" (Bransford, Brown, & Cocking, 1999; Brookfield, 2012). Extending Perkins' analogy, cited above, a good walker should be coached through progressively more challenging walks and hikes rather than launched immediately into extreme long-distance hikes in difficult terrain. Incremental skill-building is especially important because ET can involve a potentially risky (emotionally or politically) questioning of foundational assumptions. To put this principle into practice, efforts to promote ET should begin by focusing on generic or everyday examples before questioning the philosophical assumptions that may be fundamental to an organization's theory of change.

III. Evaluative thinking is not an inborn skill, nor does it depend on any particular educational background; therefore, promoters should offer opportunities for it to be intentionally practiced by all who wish to develop as evaluative thinkers (Brookfield, 2012; Ericsson & Charness, 1994). If an organization's leader asserts that ET is important, yet does not provide opportunities for staff to learn about and practice it, little or nothing will change. What's more, efforts to promote ET should not be limited to staff with evaluation responsibilities; ideally, all members of an organization should have the opportunity to think evaluatively about their work.

IV. Evaluative thinkers must be aware of—and work to overcome—assumptions and belief preservation (Brookfield, 2012; Lord et al., 1979; Nkwake, 2013). Promoters should offer a variety of structured and informal learning opportunities, such as those included in the list of practical strategies below, to help people identify and question assumptions.

V. In order to best learn to think evaluatively, the skill should be applied and practiced in multiple contexts and alongside peers and colleagues (Bransford et al., 1999; Brookfield, 2012; Foley, 1999; Halpern, 1998; Simon, 2000). ET can and should be practiced individually, yet applying this principle can leverage the benefits of social learning (discussed in greater detail below) and help people move away from the notion that evaluative thinking is done only by the evaluator and only during formal evaluations.

These five principles represent fundamental ideas on which any ET capacity building effort should be built. However, this list is certainly not exhaustive. As the field of ET capacity building grows, and new best practices in this area are established, this list is likely to evolve.
Practical strategies for promoting evaluative thinking

Evaluative thinking, by definition, is associated with an individual. Given what we know from social learning theory, however, a good strategy for developing ET is to have individuals work together. Social learning theory suggests that through behavior modeling and observation, peers learning in groups deepen knowledge more efficiently while they build a trusting community of learners (Bandura, 1977; Foley, 1999; Leeuwis & Pyburn, 2002). This principle is reflected in the examples in Table 1, which offers some practical strategies for promoting evaluative thinking based on the literature and guiding principles described above. These strategies could be used in any organization engaged in ECB and interested in promoting ET in individuals, as well as in establishing an evaluative thinking culture in the organization. The example activities are organized into six general areas, and the guiding principles addressed by each activity area are identified by their roman numerals. The activities described here are not designed to be used all at once, and some strategies may be more or less appropriate given the organizational context. Also, as mentioned in guiding principle II, there are activities described here that should be experienced incrementally. For example, the Critical Conversation Protocol designed by Stephen Brookfield (2012) should only be used in a context where some ET habits and skills have already been established.

Table 1
Practical strategies and examples of activities for promoting evaluative thinking

1. Create an intentional ET learning environment (I, II, III).
   a) Display logic models in the workplace—in meeting rooms, within newsletters, etc.
   b) Create public spaces to record and display questions and assumptions.
   c) Post inspirational questions, such as, "How do we know what we think we know?" (Patton, 2005, para. 10).
   d) Highlight the learning that comes from successful programs and evaluations, and also from "failures" or dead ends.

2. Establish a habit of scheduling meeting time focused on ET practice (I, II, III, IV, V).
   a) Have participants "mine" their logic model for information about assumptions and about how to focus evaluation work (for example, by categorizing outcomes according to stakeholder priorities) (Trochim et al., 2012).
   b) Use "opening questions" to start an ET discussion, such as, "How can we check these assumptions out for accuracy and validity?" (Brookfield, 2012, p. 195), or "What 'plausible alternative explanations' are there for this finding?" (see Shadish, Cook, & Campbell, 2002, p. 6).
   c) Engage in critical debate on a neutral topic.
   d) Conduct a media critique (critically review and identify assumptions in a published article, advertisement, etc.) (Taylor-Powell, 2010).

3. Use role-play when planning evaluation work (III, IV, V).
   a) Conduct a scenario analysis (have individuals or groups analyze and identify assumptions embedded in a written description of a fictional scenario) (Brookfield, 2012).
   b) Take on various stakeholder perspectives using the "thinking hats" method, in which participants are asked to role-play as a particular stakeholder (De Bono, 1999).
   c) Conduct an evaluation simulation (simulate data collection and analysis for your intended evaluation strategy).

4. Diagram or illustrate thinking with colleagues (IV, V).
   a) Have teams or groups create logic and pathway models (theory of change diagrams or causal loop diagrams) together (Trochim et al., 2012).
   b) Diagram the program's history.
   c) Create a system, context, and/or organization diagram.

5. Engage in supportive, critical peer review (I, II, III, IV, V).
   a) Review peer logic models (help identify leaps in logic, assumptions, strengths in their theory of change, etc.).
   b) Use the Critical Conversation Protocol (a structured approach to critically reviewing a peer's work through discussion) (Brookfield, 2012).
   c) Take an appreciative pause (stop to point out the positive contributions, and have individuals thank each other for specific ideas, perspectives, or helpful support) (Brookfield, 2012).

6. Engage in evaluation (I, II, III, V).
   a) Ensure that all evaluation work is participatory and that members of the organization at all levels are offered the opportunity to contribute their perspectives.
   b) Encourage members of the organization to engage in informal, self-guided evaluation work.
   c) Access the tools and resources necessary to support all formal and informal evaluation efforts (including the support of external evaluators, ECB professionals, data analysts, etc.).

Note. The roman numerals in parentheses following each practical strategy refer to the evaluative thinking guiding principles to which that strategy corresponds.
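To make activities 2a and 4a more concrete, the sketch below represents a simple logic model as a list of causal links and generates an assumption-surfacing question for each link. It is a minimal illustration only: the example program content and the helper function are hypothetical, and are not part of the Systems Evaluation Protocol or any other cited protocol.

    # A minimal sketch of "mining" a logic model for assumptions (activities
    # 2a and 4a above). Model content and helper are hypothetical examples.
    LOGIC_MODEL_LINKS = [
        # (cause, effect) pairs tracing a pathway from activities to outcomes
        ("after-school tutoring sessions", "increased homework completion"),
        ("increased homework completion", "improved math grades"),
        ("improved math grades", "greater college readiness"),
    ]

    def assumption_prompts(links):
        """Yield one reflective, assumption-surfacing question per causal link."""
        for cause, effect in links:
            yield (f"What must be true for '{cause}' to lead to '{effect}', "
                   f"and how could we check that assumption for accuracy?")

    for prompt in assumption_prompts(LOGIC_MODEL_LINKS):
        print(prompt)

The point of the sketch is simply that each arrow in a logic or pathway model encodes an assumption that can be named, questioned, and checked.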
There are innumerable ways to think about how these activities could be incorporated into an organization's daily, weekly, and monthly routines. An internal evaluator may have the opportunity to lead these activities, incorporating them into regular existing meetings and around the office. External evaluators and ECB professionals may consider developing an ET-focused introductory workshop in which some or all of the above activities are practiced and members of the participating organization put forth a plan for incorporating a few of them into their regular routine. This professional development workshop approach requires an ongoing relationship between the capacity builder and the organizational partners. No single workshop can change an organization's culture. Culture change will depend on internal "evaluation champions" (Brandon, Smith, & Hwalek, 2011; Nielsen, Lemire, & Skov, 2011; Trochim et al., 2012) being continuously supported by an ET professional as well as by administrators and policies internal to the organization.

In our own work as ECB practitioners, we have successfully integrated some of the practices from Table 1 into ECB workshops for non-evaluators working in community education and STEM education contexts (Archibald, Buckley, Urban, & Trochim, 2012; Earle & Archibald, 2009; Archibald, Earle, & Hargraves, 2010) and have practiced similar activities with evaluation peers through a skill-building workshop at the American Evaluation Association conference (Buckley & Archibald, 2013). Most of our work with ET has taken place in the United States, though we have also adapted our approach for use in international development contexts. While our program of empirical research on the promotion of ET is still in its early stages, we have qualitative evidence that these practical strategies tend to be well received and seem to encourage evaluative thinking. We have also noted that time and resource barriers can be daunting.
Some funders or organizational leaders may be more reluctant to invest in ET than in evaluation because the benefits may seem less tangible and less immediate. Despite that, there is evidence that an increasing number of organizations and agencies do see the benefit of emphasizing and promoting ET. For example, in a contribution to the empirical knowledge base on ET, a recent case study report produced by InterAction and the Centre for Learning on Evaluation and Results for Anglophone Africa (CLEAR-AA) provides insights on the experiences of four international NGOs with evaluative thinking at the organizational, program, and project levels (Griñó, Levine, Porter, & Roberts, 2014).

Evaluative Thinking and an Evaluation Culture

The case studies presented by Griñó, Levine, Porter, and Roberts (2014), in addition to the insights on teaching and promoting ET that we synthesize above, could inform efforts to instill a culture of evaluative thinking within an organization. The question of how ET can be promoted in organizational contexts, though beyond the scope of this paper, remains a pressing issue facing the field of research on ECB. Much has been written about organizational change, learning-enabled organizations, and even evaluative thinking at the organizational level (Bennett & Jessani, 2011; Davidson, 2005; Preskill & Torres, 1999; Senge, 2006). These authors, and others, explain that in order for organizations to improve their effectiveness and therefore survive, they must be willing to think evaluatively, engage in effective evaluation, and utilize the results. However, unlike people, organizations do not think or act. Therefore, an organization that "thinks evaluatively" might be better described as an organization in which people at all levels are evaluative thinkers and are engaged, in some capacity, in the evaluation of what their organization does.

Creating and encouraging a culture of evaluative thinking is not a trivial undertaking. It requires intervention and commitment at multiple levels of the system, and individuals at different levels will require different evaluation knowledge, attitudes, and skills. For example, those at the "top" of the management hierarchy will have to be committed to allowing time and space for evaluation, as well as to being open to change based on evaluation results; program-level staff will have to adopt the knowledge and skills described throughout this paper; and all levels of the hierarchy will need to build trust, support broad inclusion in decision making, and create a space that is safe for asking questions and for giving and receiving honest critique without fear of shame or of losing one's job. For an evaluative thinking culture to take root, members of the organization will ideally share an evaluative attitude and an ability to engage in evaluative thinking. Upper management may apply evaluative thinking skills and attitudes in a slightly different way, namely for making decisions about policies and program changes, but the thinking skills are fundamentally the same. These skills and attitudes can only exist at the individual level, but in order for an organization to adopt an evaluation culture, a critical mass of the individuals who make up that organization must possess them. The idea of a critical mass being necessary for change is not a new one.
As Harman (1998) describes, "Throughout history, the really fundamental changes in societies have come about not from dictates of governments and the results of battles but through vast numbers of people changing their minds—sometimes only a little bit" (p. viii). Experience suggests that organizational commitments to the development of an evaluative culture can build on this idea strategically. If evaluative thinking is promoted by an evaluation champion in a position of influence and is increasingly practiced by members of the organization as part of a learning community, an evaluation culture will follow. Therefore, the methods and strategies used for teaching and learning evaluative thinking have very broad implications for the quality as well as the sustainability of evaluation within organizations. Additional inquiry is needed to better understand how to build on the concept of ET in order to promote and strengthen an evaluative culture within an organization.

Conclusion

In recent years, the field of evaluation has hit upon an idea central to unlocking the je ne sais quoi of high-quality, sustained, useful evaluation practice: evaluative thinking. The challenge, however, has been how to teach and cultivate this important element. Preskill (2013) has recently articulated that challenge, asking: "What specific activities, practices, and behaviors contribute to building a culture of inquiry? What does it take to sustain evaluative thinking and practice?" (p. 2). In this paper, to begin responding to that challenge, we have proposed a succinct definition of ET that provides specific guidance for evaluation capacity building efforts designed to promote and sustain evaluative thinking. This definition is based on the work of education and cognitive science researchers who have long focused on the parallel construct known as critical thinking.

Careful connection of evaluative thinking and critical thinking encourages new areas of research and inspiration for practice in evaluation and ECB. These could include: a more comprehensive identification of additional principles for promoting ET; linking ET to major evaluation paradigms (e.g., utilization-focused, theory-driven, method-driven) and to important theoretical foundations (e.g., validity theory); and research on evaluative thinking in organizational cultures—potential research questions could focus on how people react to interventions like those in Table 1, what contextual barriers impede their implementation, and what each intervention's relative effectiveness is. Our particular areas of interest include careful measurement of ET (Archibald, Buckley, & Trochim, 2011), a deeper exploration of the intersection between ET and evaluative culture, and a reexamination of current ECB strategies and frameworks with an eye toward intentional promotion of ET—including recognizing strategies that already implicitly promote ET yet could benefit from making the ET goal more explicit.

Interest in evaluative thinking is growing within the field of evaluation. The more we can recognize, measure, and—especially—strengthen ET within individuals and organizations, the more we can contribute to the ultimate evaluation goals of promoting the beneficial evolution of programs and organizations and the allocation of society's scarce resources to their highest uses. This paper attempts to move us forward in this direction.

References
Alkin, M., & Christie, C. (2002). The use of role-play in teaching evaluation. American Journal of Evaluation, 23(2), 209-218.
Argyris, C., & Schön, D. (1978). Organizational learning: A theory of action perspective. Reading, MA: Addison-Wesley.
Archibald, T., & Buckley, J. (2013, October). Evaluative thinking: Principles and practices to enhance evaluation capacity and quality. Skill-building workshop facilitated at the annual conference of the American Evaluation Association, Washington, DC.
Archibald, T., Buckley, J., & Trochim, W. (2011, November). Evaluative thinking: What is it? Why does it matter? How can we measure it? Paper presented at the annual conference of the American Evaluation Association, Anaheim, CA.
Archibald, T., Buckley, J., Urban, J., & Trochim, W. (2012, October). Promoting evaluative thinking, the missing ingredient in evaluation capacity building. Paper presented at the annual conference of the American Evaluation Association, Minneapolis, MN.
Archibald, T., Earle, J., & Hargraves, M. (2010, November). Early indications of process use outcomes associated with evaluation planning through the Systems Evaluation Protocol. Presented at the American Evaluation Association conference, San Antonio, TX.
Baker, A. (2011). Evaluative thinking in philanthropy pilot. Report submitted to the Bruner Foundation, Effectiveness Initiatives. Lambertville, NJ: Author. Retrieved from http://www.evaluativethinking.org/docs/eTip.FINAL_REPORT.V8.pdf
Baker, A., & Bruner, B. (2012). Integrating evaluative capacity into organizational practice. The Bruner Foundation. Retrieved from http://www.evaluativethinking.org/docs/Integ_Eval_Capacity_Final.pdf
Baker, A., Bruner, B., Sabo, K., & Cook, A. M. (2006). Evaluation capacity & evaluative thinking in organizations. Bruner Foundation, Inc. Retrieved from http://www.evaluativethinking.org/docs/EvalCap_EvalThink.pdf
Bandura, A. (1977). Social learning theory. Englewood Cliffs, NJ: Prentice Hall.
Bennett, G., & Jessani, N. (Eds.). (2011). The knowledge translation toolkit: Bridging the know-do gap: A resource for researchers. New Delhi, India: Sage Publications.
Bloom, B. S. (Ed.). (1956). Taxonomy of educational objectives, book 1: Cognitive domain. New York, NY: Longman.
Brandon, P. R., Smith, N. L., & Hwalek, M. (2011). Aspects of successful evaluation practice at an established private evaluation firm. American Journal of Evaluation, 32(2), 295-307.
Bransford, J., Brown, A. L., & Cocking, R. R. (1999). How people learn: Brain, mind, experience, and school. Washington, DC: National Academy Press.
Brookfield, S. (1986). Understanding and facilitating adult learning: A comprehensive analysis of principles and effective practices. San Francisco, CA: Jossey-Bass.
Brookfield, S. (1987). Developing critical thinkers. San Francisco, CA: Jossey-Bass.
Brookfield, S. (2012). Teaching for critical thinking: Tools and techniques to help students question their assumptions. San Francisco, CA: Jossey-Bass.
Buckley, J., & Archibald, T. (2013, October). Evaluative thinking: Principles and practices to enhance evaluation capacity and quality. Professional development workshop facilitated at the annual conference of the American Evaluation Association, Washington, DC.
Carden, F., & Earl, S. (2007). Infusing evaluative thinking as process use: The case of the International Development Research Centre (IDRC). In J. B. Cousins (Ed.), Process use in theory, research, and practice. New Directions for Evaluation, 116, 61-73. San Francisco, CA: Jossey-Bass.
Darabi, A. (2002). Teaching program evaluation: Using a systems approach. American Journal of Evaluation, 23(2), 219-228.
Davidson, E. J. (2005). Evaluative thinking and learning-enabled organizational cultures. Presentation to the Canadian Evaluation Society & American Evaluation Association conference, Toronto, Ontario.
Davidson, E. J., Howe, M., & Scriven, M. (2004). Evaluative thinking for grantees. In M. Braverman, N. Constantine, & J. K. Slater (Eds.), Foundations and evaluation: Contexts and practices for effective philanthropy (pp. 259-280). San Francisco, CA: Jossey-Bass.
De Bono, E. (1999). Six thinking hats. London: Penguin.
Dewey, J. (1910). How we think. Boston: D. C. Heath & Co.
Earle, J., & Archibald, T. (2009, November). Evaluation planning using the Systems Evaluation Protocol. Presented at the American Evaluation Association conference, Orlando, FL.
Ennis, R. H. (1990). The extent to which critical thinking is subject-specific: Further clarification. Educational Researcher, 19(4), 13-16.
Ericsson, K., & Charness, N. (1994). Expert performance. American Psychologist, 49(8), 725-747.
Facione, P. A. (1990). Critical thinking: A statement of expert consensus for purposes of educational assessment and instruction. The Delphi report: Research findings and recommendations prepared for the American Philosophical Association (ERIC Document No. ED 315-423). Washington, DC: ERIC.
Facione, P. A. (2000). The disposition toward critical thinking: Its character, measurement, and relationship to critical thinking skill. Informal Logic, 20(1), 61-84.
Febey, K., & Coyne, M. (2007). Program evaluation: The board game: An interactive learning tool for evaluators. American Journal of Evaluation, 28(1), 91-101.
Foley, G. (1999). Learning in social action: A contribution to understanding informal education. London: Zed Books.
Griñó, L., Levine, C., Porter, S., & Roberts, G. (2014). Embracing evaluative thinking for better outcomes: Four NGO case studies. InterAction and the Centre for Learning on Evaluation and Results for Anglophone Africa (CLEAR-AA). Retrieved from http://www.interaction.org/document/embracing-evaluative-thinking-better-outcomes-four-ngo-case-studies
Halpern, D. (1998). Teaching critical thinking for transfer across domains. American Psychologist, 53(4), 449-455.
Harman, W. W. (1998). Global mind change: The promise of the 21st century. San Francisco, CA: Berrett-Koehler Publishers.
Hunkins, F. P. (1995). Teaching thinking through effective questioning. Boston: Christopher-Gordon Publishers.
Kelley, M., & Kaczynski, D. (2008). Teaching evaluation from an experiential framework. American Journal of Evaluation, 29(4), 547-554.
King, J. A. (2007). Developing evaluation capacity through process use. In J. B. Cousins (Ed.), Process use in theory, research, and practice. New Directions for Evaluation, 116, 45-59. San Francisco, CA: Jossey-Bass.
Kuhn, D. (2005). Education for thinking. Cambridge, MA: Harvard University Press.
Kuhn, D., Cheney, R., & Weinstock, M. (2000). The development of epistemological understanding. Cognitive Development, 15(3), 309-328.
Lee, J., Wallace, T., & Alkin, M. (2007). Using problem-based learning to train evaluators. American Journal of Evaluation, 28(4), 536-545.
Leeuwis, C., & Pyburn, R. (Eds.). (2002). Wheelbarrows full of frogs: Social learning in rural resource management. Assen, The Netherlands: Koninklijke Van Gorcum.
Lord, C. G., Ross, L., & Lepper, M. R. (1979). Biased assimilation and attitude polarization: The effects of prior theories on subsequently considered evidence. Journal of Personality and Social Psychology, 37(11), 2098-2109.
McPeck, J. E. (1984). The evaluation of critical thinking programs: Dangers and dogmas. Informal Logic, 6(2), 9-13.
McPeck, J. E. (1990). Teaching critical thinking: Dialogue and dialectic. New York, NY: Routledge.
McKegg, K., & King, S. (2013, July). Evaluation 101. Workshop presented at the Aotearoa New Zealand Evaluation Association (ANZEA) conference, New Zealand.
Nelson, M., & Eddy, R. M. (2008). Evaluative thinking and action in the classroom. In T. Berry & R. M. Eddy (Eds.), Consequences of No Child Left Behind for educational evaluation. New Directions for Evaluation, 117, 37-46. San Francisco, CA: Jossey-Bass.
Nielsen, S. B., Lemire, S., & Skov, M. (2011). Measuring evaluation capacity—Results and implications of a Danish study. American Journal of Evaluation, 32(3), 324-344.
Nkwake, A. M. (2013). Working with assumptions in international development program evaluation. New York: Springer.
Oliver, D., Casiraghi, A., Henderson, J., Brooks, A., & Mulsow, M. (2008). Teaching program evaluation: Three selected pillars of pedagogy. American Journal of Evaluation, 29(3), 330-339.
Patton, M. Q. (2005). In conversation: Michael Quinn Patton. Interview with Lisa Waldick, from the International Development Research Centre. Retrieved from http://www.idrc.ca/en/ev-30442-201-1-DO_TOPIC.html
Patton, M. Q. (2007). Process use as a usefulism. In J. B. Cousins (Ed.), Process use in theory, research, and practice. New Directions for Evaluation, 116, 99-112. San Francisco, CA: Jossey-Bass.
Patton, M. Q. (2010). Incomplete successes. The Canadian Journal of Program Evaluation, 25(3), 151-163.
Patton, M. Q. (2011). Developmental evaluation: Applying complexity concepts to enhance innovation and use. New York: Guilford Press.
Paul, R. (1993). Critical thinking: What every person needs to survive in a rapidly changing world (3rd ed.). Santa Rosa, CA: Foundation for Critical Thinking.
Piaget, J. (1978). Success and understanding. Cambridge, MA: Harvard University Press.
Preskill, H. (2008). Evaluation's second act: A spotlight on learning. American Journal of Evaluation, 29(2), 127-138.
Preskill, H. (2013). Now for the hard stuff: Next steps in ECB research and practice. American Journal of Evaluation, 35(1), 116-119.
Preskill, H., & Boyle, S. (2008). Insights into evaluation capacity building: Motivations, strategies, outcomes, and lessons learned. The Canadian Journal of Program Evaluation, 23(3), 147-174.
Preskill, H., & Torres, R. T. (1999). Evaluative inquiry for learning in organizations. Thousand Oaks, CA: Sage Publications.
Schön, D. (1983). The reflective practitioner: How professionals think in action. New York: Basic Books.
Schwandt, T. A. (2008). Educating for intelligent belief in evaluation. American Journal of Evaluation, 29(2), 139-150.
Scriven, M. (1987). Probative logic: Review and preview. In F. H. van Eemeren, R. Grootendorst, J. A. Blair, & C. A. Willard (Eds.), Argumentation: Across the lines of discipline: Proceedings of the Conference on Argumentation 1986 (pp. 7-32). Dordrecht, The Netherlands: Foris.
Senge, P. M. (2006). The fifth discipline: The art and practice of the learning organization. New York: Doubleday/Currency.
Shadish, W., Cook, T., & Campbell, D. (2002). Experimental and quasi-experimental designs for generalized causal inference. Boston: Houghton Mifflin.
Simon, H. A. (2000). Observations on the sciences of science learning. Journal of Applied Developmental Psychology, 21(1), 115-121.
Spedding, J. (Ed.). (1868). The letters and the life of Francis Bacon (Vol. 3). London: Longmans, Green, Reader, and Dyer.
Taut, S. (2007). Studying self-evaluation capacity building in a large international development organization. American Journal of Evaluation, 28(1), 45-59.
Taylor-Powell, E. (2010, November). Building evaluation capacity of community organizations. Materials from a professional development workshop presented at the American Evaluation Association annual conference, San Antonio, TX.
Trochim, W., Urban, J. B., Hargraves, M., Hebbard, C., Buckley, J., Archibald, T., Johnson, M., & Burgermaster, M. (2012). The guide to the Systems Evaluation Protocol (V2.2). Ithaca, NY. Retrieved from https://core.human.cornell.edu/research/systems/protocol/index.cfm
Vo, A. (2013). Toward a definition of evaluative thinking (Unpublished doctoral dissertation). University of California, Los Angeles.
Volkov, B. B. (2011). Beyond being an evaluator: The multiplicity of roles of the internal evaluator. In B. B. Volkov & M. E. Baron (Eds.), Internal evaluation in the 21st century. New Directions for Evaluation, 132, 25-42. San Francisco, CA: Jossey-Bass.
Vygotsky, L. (1978). Mind in society: The development of higher psychological processes. Cambridge, MA: Harvard University Press.
Weiss, C. H. (1998). Have we learned anything new about the use of evaluation? American Journal of Evaluation, 19(1), 21-33.
Wind, T., & Carden, F. (2010). Strategy evaluation: Experience at the International Development Research Centre. In P. A. Patrizi & M. Q. Patton (Eds.), Evaluating strategy. New Directions for Evaluation, 128, 29-46. San Francisco, CA: Jossey-Bass.