Part 6: Science and Values

Introduction

I. The Problems

Problems about the role of value judgments within science and ethical disputes about the uses of science (e.g., in technology and social policy) are commonplace. Recent discussions of genetic engineering, behavior control, safety in nuclear power plants, human experimentation (e.g., deception and manipulation in research), medical ethics, and the IQ controversy are only the most dramatic instances of such problems. The basic problems are by no means new: just think of the controversy between Galileo and the Catholic Church about the nature of the solar system, or controversies between Darwin and his critics over the question of evolution. However, underlying the many disputes covered by the phrase "science and values," there are several fundamental and persisting issues. Among the most obvious are these:

1. Can (is/should) science be "value free" or "neutral"? What do "value free" and "value neutral" mean in such contexts? What kinds of values are at issue? (This raises the question "What is value?" which must be bypassed here.) For instance, not all values are moral values, so that science may be morally neutral even though not value neutral in some other (or in a wider) sense. As we shall see, it is open to question whether science is or can be even morally neutral, much less value neutral.

2. If science is value free (in either the extended or the narrower sense of morally neutral: unless otherwise specified I shall mean the former whenever "value free" is used) what implications does this have for our conception of science, knowledge, and values, and for our views about the nature and aims of science and the social uses of science and technology? If science is not value free, what does this imply about the foregoing?

3.
What are the best or most defensible concepts or theories about knowledge, values (moral or nonmoral), science, and the best ways of conceptualizing their interconnections (or the lack thereof)? Here we must eventually deal with the concepts of rationality, objectivity, subjectivity, pure and applied science, and so on. (See part 1 of this volume.) And we must eventually decide what sorts of views will or will not be plausible candidates for helping us understand science, values, and their connections. For instance, will any theory according to which science and values are, and should be, totally unrelated be acceptable to us? This is a question that is discussed later on in this introduction.

It should come as no surprise that these and other issues have been given a variety of answers, and have generated a number of complex and often conflicting theories about science, values, and their interrelations. There is no point in attempting even to list, much less discuss, all or even many of these views in an introductory essay. Instead, some historical and philosophical backdrop for the selections in part 6 will be provided. The emphasis will be on those developments which get to the very heart of the issues, and which bear most directly on the readings.

It will prove instructive to begin with the question of why there is, or even should be, any problem about the relationships between science and values at all. This question is by no means rhetorical, but rather gets to the nub of the issue, which involves the relationship between science (and, more generally, knowledge) and values which finds expression in modern science and philosophy. It is here, after all, that we must look for the ideas that continue to dominate our culture's general outlooks on these questions.

II. Why Is There a Problem About "Science and Values"?
The problems briefly outlined above come into existence with the advent of modern philosophy and science, especially the scientific revolution of the seventeenth century. In order to appreciate why, and how, this happens, it will be helpful to briefly sketch the views of the ancient Greeks—the molders of Western culture—on the issues of knowledge and values (or, at any rate, on the Greek approximations to these issues as they have since come to be understood). Then we can go on to sketch the development of the modern problems as they arise at the beginning of "modern history," which shall be dated here from the seventeenth century.

For the ancient Greeks, there are no distinctions between (a) science or knowledge and "values" or "the good"; or between (b) science and philosophy; or between (c) the objective and the subjective (as these concepts are understood within modern science and philosophy); or between (d) a "factual" or "descriptive" account of the world (e.g., in terms of the structural properties of things and the laws which govern them) and a "normative" or "evaluative" or (even) a "moral" interpretation of the world, as embodying a certain order, pattern, beauty, purpose, and even "goodness." (What is natural is also good in this view.)

The most influential, and most forceful, presentation of the Greek view of the cosmos is articulated by Plato in his Republic (bks. iv–vii). For Plato, "objective" reality is characterized in terms of the idea or form of "the good." Reality is a unified, patterned, or ordered whole. In order to understand experience, we have to arrive at a knowledge of the laws and structures governing everything, as well as the order, patterning, or "purpose" which pervades all experience and which unifies it in a coherent and "meaningful" fashion.
Such knowledge, which Plato calls "Dialectic," is not to be equated with what is today called "knowledge," since our term "knowledge" is often used as a synonym for "science" or "scientific knowledge." On Plato's view, the aims and methods of the empirical sciences are designed to give us at best only a limited insight into reality; more specifically, an insight into a certain kind of experience, a certain level of reality (the level of objects of experience like trees, rocks, and so on). They must be complemented by an insight into the more basic principles and patterns which govern everything. What for Plato is the "most real" is also the most abstract and the least accessible to ordinary experience. (Modern theoretical science, e.g., atomic theory, quantum physics, genetics, and chemistry, embodies this Platonic ideal to some extent.) An "objective account" of things is not complete until everything is ordered into a unified picture, which involves the idea that purpose and norms (i.e., ordering principles) are not eliminable from such an account. It also involves the idea that science cannot give us either a complete account of everything, or an adequate account of even the objects of its legitimate concern, since these objects must be ultimately understood in terms of the principles which govern everything, including themselves and their place in the whole scheme.

One of the chief features of modern science and modern philosophy is the attempt to deny or else to truncate this Platonic vision of the universe, and of the nature of knowledge and the good. In what follows, a brief sketch of these developments will be given.

III.
Origin and Nature of the Problem

The issues concerning the relationship between science and values, including the role of values in science (the issue of value-neutrality), come into modern Western history with the advent of the so-called mechanical picture of the world (especially classical Newtonian science), and the scientific revolution, most especially the epistemological and methodological revolution in science and philosophy inspired by its main architect, René Descartes (1596–1650).

According to this view, we must make a sharp distinction between what is objective and what is subjective in order to acquire reliable (i.e., "objective") knowledge of the world (including knowledge of human beings) by the use of reliable ("rational") methods of inquiry. Since, according to the mechanical world picture, nature is a vast machine governed by quantitative laws and relationships (nature is written in the language of mathematics), the objective features of the world turn out to be those features—matter, motion, and physical magnitudes—which constitute the nuts and bolts of the machine, together with the laws governing it. Only such features of experience are truly objective. Thus a rational methodology for inquiring about the machine's workings—i.e., for acquiring knowledge—must take into account only those features which can be quantified, i.e., written in nature's language. The very essence of the world is given by the objective properties just mentioned, together with the mechanical laws which govern them. (These essential features of the world are dubbed "primary qualities" by Galileo and Locke.) Everything else, e.g., colors, values, interpretations, purpose, and theories, is not "objective" and thus does not belong in an objective account of the world, unless it can be "reduced" to objective terms, or explained away as illusory phenomena by such an account.
(Later thinkers expanded the province of science to include those "subjective" items—called "secondary qualities"—that had an autonomous status for Descartes [e.g., mental phenomena and values], so that these came to be "reduced" to objective features or explained away entirely, as in modern behaviorism and materialism.)

In sum, objectivity means both: (a) objective in the sense of being about what is objective, and (b) objective in the sense of arriving at objective truths by methods which themselves take no account of anything "subjective," i.e., which are unbiased. (The search for mechanical "fool-proof" methods, e.g., computer algorithms and cost-benefit decisions, is the ultimate outcome of this ideal of rational, objective method.)

IV. Initial Objections to "Objectivism"

This view already has insuperable difficulties: at least we can now see this (which is not to make the anachronistic claim that its classical proponents were flawed for not seeing it: it is just as easy to be a "Monday-morning quarterback" in history as in football).

First of all, to paraphrase Woody Allen (Love and Death): Objectivity is subjective, and subjectivity is objective; at least the latter point is certainly true: the fact that I am (say) in pain is no less objective a fact about me than the fact that I weigh 160 pounds.

Second, the view being considered is, paradoxically, rooted in the notion that "objective knowledge" is a "rational reconstruction" of the private, i.e., "subjective," experiences of a collection of knowers (viz., out of those subjective experiences which represent the essence of the world). Modern epistemology is rooted in this conundrum.

Third, as M.
Polanyi and others show, if we merely wanted objective truths, we (as a species) would devote virtually all of our intellectual energy to studying interstellar dust, and only a fraction of a microsecond studying ourselves (or anything else, for that matter), since, objectively speaking, human beings are of no cosmic significance in the objective order of things! Obviously no one would take this requirement on objectivity seriously (this concept of objectivity is theological—God sees the world objectively as an outside omniscient observer). What we are seeking are truths which are interesting, which are useful and valuable to us. In a word, knowledge, truth, and objectivity are (or are rooted in) values and human purposes.

Fourth, not only do knowledge, objectivity, and truth—and thus methodology—turn out to be, or at least be grounded in, values; on some views they are, or are grounded in, moral ideals. In any event, the search for knowledge expresses a value; and thus distinctions between reliable and unreliable knowledge claims, between good and bad methods, and so on, are partly normative judgments. (At this point it may be worth citing the words of N. I. Bukharin, who says: "The idea of the self-sufficient character of science . . . is naive; it confuses the subjective passions of the professional scientist . . . with the objective social role of this kind of activity, as an activity of vast practical importance.")

V. Some Implications of "Objectivism"

Despite these "obvious" difficulties, the fact remains that the distinction between objectivity and subjectivity, as regards claims governing methodology and the content of an objective world picture, dominates modern science, philosophy, and Western culture from Descartes and Galileo up to the present.
The ideas of value neutrality, the uses of cost-benefit analyses to arrive at "rational" decisions in politics, science, and technology, the attempt to use knowledge for social and political ends (e.g., behavior control techniques), and so on, are just sophisticated outgrowths of this cluster of ideas. So, too, is the idea that scientific method affords the only rational methods for solving problems, so that value judgments are either not rational, or else are concerned merely with problems about calculating efficiency, or about decision-making under conditions of uncertainty. Both proponents and opponents of the classical picture of the world and the attendant ideas of objectivity and rationality (e.g., behaviorists, on the one hand, and so-called "neo-romantics" and existentialists, on the other) share this conception that values are essentially subjective and irrational if they are anything more than predictions or calculations about means to an end.

But it is becoming glaringly obvious that these assumptions are connected with an inadequate conception of both science and values, and of the interrelationships between the two areas. No one is reluctant to distinguish between "good" and "bad" science, or between science and nonscience or pseudoscience (e.g., astronomy vs. astrology). (See part 1.) Yet on the view being discussed we are not supposed to be able to distinguish between good and bad moralities, or between acceptable and unacceptable value judgments. But this combination of "normative" science and "positive" ethics is internally incoherent and grossly inadequate.

VI. Some Corollaries of "Objectivism"

Two clusters of ideas conspire to produce this result. (i) Many advocates of the view being discussed attempt either to turn ethics into a science, or to explain value judgments scientifically (e.g., in terms of conditioning or historical or economic determinism).
The latter kind of strategy usually leads to some form of ethical nihilism or extreme ethical relativism: all we can do is explain the origins of ethical behavior in terms of some objective scientific theory. On this view the point or content (i.e., the "autonomy") of value judgments is either lost or obscured. But this strategy explains why, for an advocate of this approach, "positive ethics" is the inevitable result. Moreover, it is just because "normative science" shows that objectivism is the only correct scientific approach that "positive ethics" turns out to be compatible with, indeed required by, objectivism.

(ii) The idea of value neutrality, especially as this idea shows up in the social and policy sciences, is greatly influenced by views which attempt to reconcile "objective" science and "subjective" morality by drawing theoretical limits to science, in order to save morality and human freedom. On this view, (a) science and morality cannot conflict (they are "complementary") since (b) they have nothing to do with each other: they govern different spheres of experience (these relate to the differences between "man as object" and man as actor). But the price to be paid for this move is just the idea that science is, and must be, value neutral and that value judgments are merely subjective and irrational acts of the will.

Connected with this view is the idea that rational justification is essentially (hypothetico-)deductive. The ultimate principles of a system, whether a moral system, a scientific theory, or a formal system, such as geometry, or the "brute facts" or "data," are beyond rational dispute. They must either be arbitrarily stipulated and accepted or be taken as self-evident, and then used to define what a rational proof or justification is within the system. The "ultimates" must be accepted as given. Relativism is the view that there are different, equally rational or acceptable (incompatible) ultimates.
When one reaches these ultimates, be it a body of "hard facts," or axioms, or moral principles, one has reached bedrock. One can then either accept or reject them. If the former, one can then show that the principles which follow from them are rational. In the latter case, one is free to adopt different "ultimates." At the same time, finding rational principles, or making rational decisions within the system, becomes a matter of, say, finding the best means of optimizing the ultimate principles or ends postulated by the system.

Ultimately, this theory of justification is part and parcel of the idea that science is value free, that value judgments are really objective judgments about the best means of optimizing goals which cannot be rationally assessed, and of the view that value judgments can and must be explained objectively (e.g., by deterministic explanations) or else are merely arbitrary fiats of individuals or cultures (which view amounts to nihilism or relativism). This is the most dramatic way in which the theory of objectivity we are discussing is already pregnant with modern nihilism; for this view already structures our view of values as either just objective items of a social system or an individual's behavioral repertoire, or else as just "subjective" reactions to the objective facts, which do not belong in an objective scientific picture of the world.

It turns out that both of these approaches to values amount to a kind of relativism, which often embodies a very conservative ideology, i.e., supports the idea that existing views of morality cannot be challenged, and thus that value judgments are really judgments about the best means to those ends and values which are in existence. (This is why cost-benefit analyses embody the view that the optimization of a given end is the only standard for making value judgments.)
It thus turns out, on the cost-benefit view, that the idea of value neutrality really amounts to the idea that value judgments (to the degree that they are rational and objective) are just judgments of efficiency concerning the best means to a given end. The ends (e.g., purposes) are determined scientifically, and this means (ultimately) they are either given or explained by some deterministic theory as being inevitable, ultimate facts. In any event, objectivism certainly is not "value-free."

VII. Recent Developments

In recent years new light has been shed on the nature of science, values, rationality, and the sorts of issues discussed in parts 1–5 of this book. Much of the impetus behind these (often controversial) developments stems from the work of feminists, postmodernists, and sociologists of science, as well as from writers exploring the sociopolitical context of modern science. These analyses usually move further away from "objectivist" analyses of these issues. But they do more than this. They shed new light on a range of issues that involve questions of values in science, and have produced some astonishingly complex and worthwhile insights into the processes of scientific discovery and justification. At the same time, they have produced strong reactions from advocates of more orthodox approaches to science, values, rationality, and the rest of the topics covered so far in the volume. The result of this has been the so-called science wars, in which advocates of these new approaches are charged with being "antiscience," with blurring the distinction between science and nonscience, and with seeking to give pride of place to irrationality and to forces such as ethnic and gender identity and political ideology, not just in society, but within the very core of modern science itself. Advocates of these approaches, at a minimum, insist that it is important to study science with reference to the issues they bring out.
This may very well make science better, and will surely increase our understanding of science, which is always a good thing. If these analyses require a more nuanced and complex account of science, so be it. This is not the place to rehearse these issues in detail. It may be helpful instead to sketch the main lines of argument of each of the three approaches mentioned above (feminism, postmodernism, sociology of science), with special emphasis on controversies about values.

First, feminism. Many feminists, especially scholars with working familiarity with one or another science, have claimed that the assumptions, methods, and guiding values of many sciences, e.g., medicine, biology, psychology, have been gender biased. When, for example, male medical researchers draw inferences about premenstrual syndrome either without studying women, or by coloring their views of women with male biases; when male psychologists make generalizations about human moral development after using only male subjects, many scholars (and not only feminists or women) raise questions about what is going on. Is there something about science, or its guiding values, assumptions, and methods, that produces such biased results? Are there "women's ways of knowing" that modern science neglects? In some cases questions about the distinction between science and nonscience are connected to the so-called male bias in favor of treating nature as an object of domination, or the so-called values of domination and control characteristic of Western civilization. These issues and others have been seriously debated. In the reading by Giere, the topics covered relate more narrowly to questions about feminism, methodology, and the issue of scientific reliability.

The sociology of science, a discipline founded in the late nineteenth century as an outgrowth of some of the debates discussed in the introduction to part 2, has grown exponentially in recent years.
Initially spurred on by Kuhn, whom many interpreted as giving a sociological account of science, the sociology (and psychology) of science has taken many forms and raised many issues. Can scientific realism be defended if science is viewed as a social practice? Is science better than voodoo? Do the activities of human beings create the world that scientists study? How do social and personal values, which may not always be rational, as well as personal idiosyncrasies and human foibles, including the desire for fame and power, influence the activities of scientists in labs, and how does this relate to the way scientists tell stories about what they do and theorize about the results? Is science a practice like any other, in which case the same human foibles play a role everywhere, so that science is no longer special, or even different from a game? If so, what becomes of scientific realism and rationality? Are there any special scientific values or standards of rationality? Are they to be merely identified and explained by sociologists and psychologists? Does an "acceptable" or "rational" theory require just as much sociological or psychological (causal?) explanation or reduction as a "failed" or "irrational" one? Indeed, can "acceptable," "rational," "failed," and "irrational" be given anything but sociological or psychological interpretations? Do we then get another form of objectivism, in which whatever is done is right or wrong just because the social standards, practices, and values do or do not endorse it?

There is now a growth industry called Science and Technology Studies (STS) which discusses these sorts of issues and many others. Some of its main advocates are right in the center of the science wars, which so far seem to have only just begun.
Finally, postmodernism is a general phenomenon that has pervaded all aspects of culture and society, even though there is little, if any, agreement about what the term means, or what, if any, its main claims and arguments are. For purposes of this discussion, postmodernism calls into question the validity of the ideas of truth, knowledge, reality, objectivity, rationality, and progress that both underlie and are taken for granted by modern science, indeed, by modern, post-Enlightenment culture in the West. The very distinctions between "facts" and "values," "knowledge" and "power," "interpretation" and "reality," "discovery" and "invention" have been challenged in various ways. Some writers consider science a "social construct," no more or less objective than stories, novels, or folk tales. Some postmodernists have called for a "blurring of the genres": physics and history (for instance) are just different types of texts or forms of writing. As the reader can imagine, the current science wars involve basic questions that go way beyond the issues discussed in this book, as well as beyond academic and social debates about the nature and value of science in our world. Only time will tell how these debates, which take us back to part 1 of this volume, evolve, and what their consequences will be.

Without even attempting to define "postmodernism" here, the thrust of many writings given this label is to challenge the concepts "reality," "knowledge," "truth," "objectivity," "meaning," even "the world." Such terms are indeterminate. They have the status of "social constructs" fabricated by some ruling class or powerful class (white men, Europeans, elitists) who impose them on everybody else, or at least use rhetoric to destroy all other ideas and render their own constructs as "objective reality" and "truth."
At the very least, skepticism and doubts about these terms challenge assumptions that both realists and antirealists share about the very possibility of scientific knowledge and progress. Another line of reasoning connects issues of knowledge to power, male dominance, and eurocentrism. Science is just another story, neither more nor less privileged than any other. It is only for political and cultural reasons that science, rationality, evidence, and argumentation are "privileged." Finally, science has become dependent upon success, measured in terms of power and wealth; science is now an economic commodity, and neither has nor requires any kind of justification in terms of its benefits to humanity. Growing skepticism about the social, economic, and cultural benefits of science, combined with misgivings about the uses or misuses of science and technology for destructive purposes, points in the same general direction, according to many postmodernists.

Postmodernism is more radical than antirealism or even relativism, since it is based upon Nietzsche's view that the universe is a meaningless, chaotic flux. It is we humans who impose various orders on the world for purposes of survival, cultural flourishing, or whatever. Postmodernists are even more skeptical than Nietzsche, since they suspect that modern science is rooted in sexism, racism, eurocentrism, male bias, and dominance, and is in fact "privileged" for reasons that have little, if anything, to do with the greater rationality or evidential support of science over myths. In a way, postmodernism does radicalize various antirealist and relativist tendencies in modern thought, e.g., the writings of Kuhn, Feyerabend, and radical feminists. However, although some postmodernist themes are not novel, they are more extreme and potentially more destructive of many of the assumptions that give modern science and technology pride of place in our world today.
The reader must decide whether or not this result makes postmodernism worth further study.

VIII. The Readings in Part 6

In his essay Rudner argues that the need for scientists to decide when and if the available evidence is strong enough or good enough to warrant the acceptance of a hypothesis is a normative or evaluative activity; hence, science is essentially value-laden.

Hempel discusses a number of basic relations between science and values. "Categorical" value judgments—e.g., "X is good/right or bad/wrong"—are not part of science or provable/disprovable by science. Science can help us make "hypothetical" value judgments, however. Thus, "if we want X, then we should do Y" can be assessed by science, since scientific knowledge can help us predict the outcome of decisions, find the most efficient means to our ends, et cetera. Science can also explain how and why individuals and groups have the values they do.

McMullin looks at the issues discussed by Rudner and Hempel in a broad historical and analytic framework. His views about the normative nature of science and values are broader and more radical than either Rudner's or Hempel's, although he seems to be less radical in his views about science, values, and theory choice than Kuhn.

Hollinger traces some of the debates about science and values in the twentieth century. He then sketches some ideas of the contemporary German theorist Jürgen Habermas, who tries to give us a more adequate view of the relation between science, values, and politics.

Giere considers whether standard views in the philosophy of science can avoid the dangers of gender bias. He proposes a version of scientific realism that he thinks may deal with this problem.

R. H.
29

The Scientist Qua Scientist Makes Value Judgments

Richard Rudner

The question of the relationship of the making of value judgments in a typically ethical sense to the methods and procedures of science has been discussed in the literature at least to that point which e. e. cummings somewhere refers to as "The Mystical Moment of Dullness." Nevertheless, albeit with some trepidation, I feel that something more may fruitfully be said on the subject. In particular the problem has once more been raised in an interesting and poignant fashion by recently published discussions between Carnap1 and Quine2 on the question of the ontological commitments which one may make in the choosing of language systems. I shall refer to this discussion in more detail in the sequel; for the present, however, let us briefly examine the current status of what is somewhat loosely called the "fact-value dichotomy."

I have not found the arguments which are usually offered, by those who believe that scientists do essentially make value judgments, satisfactory. On the other hand the rebuttals of some of those with opposing viewpoints seem to have had at least a prima facie cogency, although they too may in the final analysis prove to have been subtly perverse.

Those who contend that scientists do essentially make value judgments generally support their contentions by (a) pointing to the fact that our having a science at all somehow "involves" a value judgment; or (b) pointing out that in order to select, say, among alternative problems, the scientist must make a value judgment; or (perhaps most frequently) (c) pointing to the fact that the scientist cannot escape his quite human self—he is a "mass of predilections," and these predilections must inevitably influence all of his activities, not excepting his scientific ones.
To such arguments, a great many empirically oriented philosophers and scientists have responded that the value judgments involved in our decisions to have a science, or to select problem (a) for attention rather than problem (b), are, of course, extrascientific. If (they say) it is necessary to make a decision to have a science before we can have one, then this decision is literally prescientific and the act has thereby certainly not been shown to be any part of the procedures of science. Similarly the decision to focus attention on one problem rather than another is extraproblematic and forms no part of the procedures involved in dealing with the problem decided upon. Since it is these procedures which constitute the method of science, value judgments, so they respond, have not been shown to be involved in the scientific method as such.

Again, with respect to the inevitable presence of our predilections in the laboratory, most empirically oriented philosophers and scientists agree that this is "unfortunately" the case; but, they hasten to add, if science is to progress toward objectivity the influence of our personal feelings or biases on experimental results must be minimized. We must try not to let our personal idiosyncrasies affect our scientific work. The perfect scientist—the scientist qua scientist—does not allow this kind of value judgment to influence his work. However much he may find doing so unavoidable qua father, qua lover, qua member of society, qua grouch, when he does so he is not behaving qua scientist.

As I indicated at the outset, the arguments of neither of the protagonists in this issue appear quite satisfactory to me. The empiricists' rebuttals, telling prima facie as they may be against the specific arguments that evoke them, nonetheless do not appear ultimately to stand up; but perhaps even more importantly, the original arguments seem utterly too frail.
I believe that a much stronger case may be made for the contention that value judgments are essentially involved in the procedures of science. And what I now propose to show is that scientists as scientists do make value judgments. Now I take it that no analysis of what constitutes the method of science would be satisfactory unless it comprised some assertion to the effect that the scientist as scientist accepts or rejects hypotheses. But if this is so then clearly the scientist as scientist does make value judgments. For, since no scientific hypothesis is ever completely verified, in accepting a hypothesis the scientist must make the decision that the evidence is sufficiently strong or that the probability is sufficiently high to warrant the acceptance of the hypothesis. Obviously our decision regarding the evidence and respecting how strong is "strong enough," is going to be a function of the importance, in the typically ethical sense, of making a mistake in accepting or rejecting the hypothesis. Thus, to take a crude but easily manageable example, if the hypothesis under consideration were to the effect that a toxic ingredient of a drug was not present in lethal quantity, we would require a relatively high degree of confirmation or confidence before accepting the hypothesis—for the consequences of making a mistake here are exceedingly grave by our moral standards. On the other hand, if, say, our hypothesis stated that, on the basis of a sample, a certain lot of machine-stamped belt buckles was not defective, the degree of confidence we should require would be relatively not so high. How sure we need to be before we accept a hypothesis will depend on how serious a mistake would be. The examples I have chosen are from scientific inferences in industrial quality control. But the point is clearly quite general in application.
It would be interesting and instructive, for example, to know just how high a degree of probability the Manhattan Project scientists demanded for the hypothesis that no uncontrollable pervasive chain reaction would occur, before they proceeded with the first atomic bomb detonation or first activated the Chicago pile above a critical level. It would be equally interesting and instructive to know why they decided that that probability value (if one was decided upon) was high enough rather than one which was higher; and perhaps most interesting of all to learn whether the problem in this form was brought to consciousness at all. In general then, before we can accept any hypothesis, the value decision must be made in the light of the seriousness of a mistake, that the probability is high enough or that the evidence is strong enough, to warrant its acceptance. Before going further, it will perhaps be well to clear up two points which might otherwise prove troublesome below. First I have obviously used the term "probability" up to this point in a quite loose and preanalytic sense. But my point can be given a more rigorous formulation in terms of a description of the process of making statistical inference and of the acceptance or rejection of hypotheses in statistics. As is well known, the acceptance or rejection of such a hypothesis presupposes that a certain level of significance or level of confidence or critical region be selected.3 It is with respect at least to the necessary selection of a confidence level or interval that the necessary value judgment in the inquiry occurs. For, "the size of the critical region (one selects) is related to the risk one wants to accept in testing a statistical hypothesis" (n. 3, p. 435). And clearly how great a risk one is willing to take of being wrong in accepting or rejecting the hypothesis will depend upon how seriously in the typically ethical sense one views the consequences of making a mistake.
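Rudner's point about significance levels can be illustrated with a small computational sketch. The numbers below are hypothetical and are not drawn from the text: after a run of defect-free samples from a lot, whether the hypothesis "the lot is good" may be accepted depends on the false-acceptance risk one is willing to run (the significance level alpha), so the very same evidence licenses acceptance in the belt-buckle case but not in the drug case.

```python
def accept_hypothesis(n_clean: int, p_bad: float, alpha: float) -> bool:
    """Accept 'the lot is good' after n_clean defect-free samples.

    p_bad:  defect rate under the feared alternative (the lot is bad).
    alpha:  risk of wrongly accepting that we are willing to run --
            this choice is the value judgment Rudner is pointing at.
    """
    # Chance that every one of n_clean samples looks clean even though the
    # lot is bad (each sample independently clean with probability 1 - p_bad).
    p_miss = (1.0 - p_bad) ** n_clean
    return p_miss <= alpha

n_samples = 80          # same evidence in both cases (hypothetical figure)
feared_defect_rate = 0.05

# Belt buckles: a mistake is cheap, so a 5 percent risk is tolerable.
print(accept_hypothesis(n_samples, feared_defect_rate, alpha=0.05))   # True

# Drug toxicity: a mistake is grave, so demand a 0.1 percent risk.
print(accept_hypothesis(n_samples, feared_defect_rate, alpha=0.001))  # False
```

The calculation fixes everything except alpha; choosing alpha, as the passage argues, is not itself a statistical question but a judgment about how serious a mistake would be.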
I believe, of course, that an adequate rational reconstruction of the procedures of science would show that every scientific inference is properly constructable as a statistical inference (i.e., as an inference from a set of characteristics of a sample of a population to a set of characteristics of the total population) and that such an inference would be scientifically in control only insofar as it is statistically in control. But it is not necessary to argue this point, for even if one believes that what is involved in some scientific inferences is not statistical probability but rather a concept like strength of evidence or degree of confirmation, one would still be concerned with making the decision that the evidence was strong enough or the degree of confirmation high enough to warrant acceptance of the hypothesis. Now, many empiricists who reflect on the foregoing considerations agree that acceptances or rejections of hypotheses do essentially involve value judgments, but they are nonetheless loath to accept the conclusion. And one objection which has been raised against this line of argument by those of them who are suspicious of the intrusion of value questions into the "objective realm of science," is that actually the scientist's task is only to determine the degree of confirmation or the strength of the evidence which exists for a hypothesis. In short, they object that while it may be a function of the scientist qua member of society to decide whether a degree of probability associated with the hypothesis is high enough to warrant its acceptance, still the task of the scientist qua scientist is just the determination of the degree of probability or the strength of the evidence for a hypothesis and not the acceptance or rejection of that hypothesis. But a little reflection will show that the plausibility of this objection is merely apparent.
For the determination that the degree of confirmation is, say, p, or that the strength of evidence is such and such, which is on this view being held to be the indispensable task of the scientist qua scientist, is clearly nothing more than the acceptance by the scientist of the hypothesis that the degree of confidence is p or that the strength of the evidence is such and such; and as these men have conceded, acceptance of hypotheses does require value decisions. The second point which it may be well to consider before finally turning our attention to the Quine-Carnap discussion has to do with the nature of the suggestions which have thus far been made in this essay. In this connection, it is important to point out that the preceding remarks do not have as their import that an empirical description of every present day scientist ostensibly going about his business would include the statement that he made a value judgment at such and such a juncture. This is no doubt the case; but it is a hypothesis which can only be confirmed by a discipline which cannot be said to have gotten extremely far along as yet; namely, the Sociology and Psychology of Science. Whether such an empirical description is warranted cannot be settled from the armchair. My remarks have, rather, amounted to this: Any adequate analysis or (if I may use the term) rational reconstruction of the method of science must comprise the statement that the scientist qua scientist accepts or rejects hypotheses; and further that an analysis of that statement would reveal it to entail that the scientist qua scientist makes value judgments. I think that it is in the light of the foregoing arguments, the substance of which has, in one form or another, been alluded to in past years by a number of inquirers (notably C. W. Churchman, R. L. Ackoff, and A. Wald), that the Quine-Carnap discussion takes on heightened interest.
For, if I understand that discussion and its outcome correctly, although it apparently begins a good distance away from any consideration of the fact-value dichotomy, and although all the way through it both men touch on the matter in a way which indicates that they believe that questions concerning the dichotomy are, if anything, merely tangential to their main issue, yet it eventuates with Quine by an independent argument apparently in agreement with at least the conclusion here reached and also apparently having forced Carnap to that conclusion. (Carnap, however, is expected to reply to Quine's article and I may be too sanguine here.) The issue of ontological commitment between Carnap and Quine has been one of relatively long standing. In this recent article,1 Carnap maintains that we are concerned with two kinds of questions of existence relative to a given language system. One is what kinds of entities it would be permissible to speak about as existing when that language system is used, i.e., what kind of framework for speaking of entities should our system comprise. This, according to Carnap, is an external question. It is the practical question of what sort of linguistic system we want to choose. Such questions as "Are there abstract entities?" or "Are there physical entities?" thus are held to belong to the category of external questions. On the other hand, having made the decision regarding which linguistic framework to adopt, we can then raise questions like "Are there any black swans?" "What are the factors of 544?" et cetera. Such questions are internal questions. For our present purposes, the important thing about all of this is that while for Carnap internal questions are theoretical ones, i.e., ones whose answers have cognitive content, external questions are not theoretical at all. They are practical questions—they concern our decisions to employ one language structure or another. 
They are of the kind that face us when for example we have to decide whether we ought to have a Democratic or a Republican administration for the next four years. In short, though neither Carnap nor Quine employs the epithet, they are value questions. Now if this dichotomy of existence questions is accepted Carnap can still deny the essential involvement of the making of value judgments in the procedures of science by insisting that concern with external questions, admittedly necessary and admittedly axiological, is nevertheless in some sense a prescientific concern. But most interestingly, what Quine then proceeds to do is to show that the dichotomy, as Carnap holds it, is untenable. This is not the appropriate place to repeat Quine's arguments which are brilliantly presented in the article referred to. They are in line with the views he has expressed in his "Two Dogmas of Empiricism" essay and especially with his introduction to his recent book, Methods of Logic. Nonetheless the final paragraph of the Quine article I'm presently considering sums up his conclusions neatly: Within natural science there is a continuum of gradations, from the statements which report observations to those which reflect basic features say of quantum theory or the theory of relativity. The view which I end up with, in the paper last cited, is that statements of ontology or even of mathematics and logic form a continuation of this continuum, a continuation which is perhaps yet more remote from observation than are the central principles of quantum theory or relativity. The differences here are in my view differences only in degree and not in kind. Science is a unified structure, and in principle it is the structure as a whole, and not its component statements one by one, that experience confirms or shows to be imperfect.
Carnap maintains that ontological questions, and likewise questions of logical or mathematical principle, are questions not of fact but of choosing a convenient conceptual scheme or framework for science; and with this I agree only if the same be conceded for every scientific hypothesis, (n. 2, pp. 71-72.) In the light of all this I think that the statement that Scientists qua Scientists make value judgments is also a consequence of Quine's position. Now, if the major point I have here undertaken to establish is correct, then clearly we are confronted with a first-order crisis in science and methodology. The positive horror which most scientists and philosophers of science have of the intrusion of value considerations into science is wholly understandable. Memories of the (now diminished but to a certain extent still continuing) conflict between science and, e.g., the dominant religions over the intrusion of religious value considerations into the domain of scientific inquiry, are strong in many reflective scientists. The traditional search for objectivity exemplifies science's pursuit of one of its most precious ideals. But for the scientist to close his eyes to the fact that scientific method intrinsically requires the making of value decisions, for him to push out of his consciousness the fact that he does make them, can in no way bring him closer to the ideal of objectivity. To refuse to pay attention to the value decisions which must be made, to make them intuitively, unconsciously, haphazardly, is to leave an essential aspect of scientific method scientifically out of control. What seems called for (and here no more than the sketchiest indications of the problem can be given) is nothing less than a radical reworking of the ideal of scientific objectivity.
The slightly juvenile conception of the coldblooded, emotionless, impersonal, passive scientist mirroring the world perfectly in the highly polished lenses of his steel-rimmed glasses—this stereotype—is no longer, if it ever was, adequate. What is being proposed here is that objectivity for science lies at least in becoming precise about what value judgments are being and might have been made in a given inquiry—and even, to put it in its most challenging form, what value decisions ought to be made; in short that a science of ethics is a necessary requirement if science's progress toward objectivity is to be continuous. Of course the establishment of such a science of ethics is a task of stupendous magnitude and it will probably not even be well launched for many generations. But a first step is surely comprised of the reflective self-awareness of the scientist in making the value judgments he must make.

Notes

1. R. Carnap, "Empiricism, Semantics, and Ontology," Revue Internationale de Philosophie 4 (1950): 20-40.

2. W. V. Quine, "On Carnap's Views on Ontology," Philosophical Studies 2, no. 5 (1951).

3. "In practice three levels are commonly used: 1 per cent, 5 per cent and 0.3 of one per cent. There is nothing sacred about these three values; they have become established in practice without any rigid theoretical justification." (my italics) (subnote *, p. 435). To establish significance at the 5 percent level means that one is willing to take the risk of accepting a hypothesis as true when one will be thus making a mistake, one time in twenty. Or in other words, that one will be wrong (over the long run) once every twenty times if one employed a .05 level of significance. See also (subnote †, chap. 5) for such statements as "which of these two errors is most important to avoid (it being necessary to make such a decision in order to accept or reject the given hypothesis) is a subjective matter . . ." (my italics) (subnote †, p. 262).

*A. C. Rosander, Elementary Principles of Statistics (New York: D. Van Nostrand Co., 1951).

†J. Neyman, First Course in Probability and Statistics (New York: Henry Holt & Co., 1950).

30

Science and Human Values

Carl G. Hempel

1. The Problem

Our age is often called an age of science and of scientific technology, and with good reason: the advances made during the past few centuries by the natural sciences, and more recently by the psychological and sociological disciplines, have enormously broadened our knowledge and deepened our understanding of the world we live in and of our fellow men; and the practical application of scientific insights is giving us an ever increasing measure of control over the forces of nature and the minds of men. As a result, we have grown quite accustomed, not only to the idea of a physico-chemical and biological technology based on the results of the natural sciences, but also to the concept, and indeed the practice, of a psychological and sociological technology that utilizes the theories and methods developed by behavioral research. This growth of scientific knowledge and its applications has vastly reduced the threat of some of man's oldest and most formidable scourges, among them famine and pestilence; it has raised man's material level of living, and it has put within his reach the realization of visions which even a few decades ago would have appeared utterly fantastic, such as the active exploration of interplanetary space. But in achieving these results, scientific technology has given rise to a host of new and profoundly disturbing problems: The control of nuclear fission has brought us not only the comforting prospect of a vast new reservoir of energy, but also the constant threat of the atom bomb and of grave damage, to the present and to future generations, from the radioactive by-products of the fission process, even in its peaceful uses.
And the very progress in biological and medical knowledge and technology which has so strikingly reduced infant mortality and increased man's life expectancy in large areas of our globe has significantly contributed to the threat of the "population explosion," the rapid growth of the earth's population which we are facing today, and which, again, is a matter of grave concern to all those who have the welfare of future generations at heart. Clearly, the advances of scientific technology on which we pride ourselves, and which have left their characteristic imprint on every aspect of this "age of science," have brought in their train many new and grave problems which urgently demand a solution. It is only natural that, in his desire to cope with these new issues, man should turn to science and scientific technology for further help. But a moment's reflection shows that the problems that need to be dealt with are not straightforward technological questions but intricate complexes of technological and moral issues. Take the case of the population explosion, for example. To be sure, it does pose specific technological problems. One of these is the task of satisfying at least the basic material needs of a rapidly growing population by means of limited resources; another is the question of means by which population growth itself may be kept under control. Yet these technical questions do not exhaust the problem. For after all, even now we have at our disposal various ways of counteracting population growth; but some of these, notably contraceptive methods, have been and continue to be the subject of intense controversy on moral and religious grounds, which shows that an adequate solution of the problem at hand requires, not only knowledge of technical means of control, but also standards for evaluating the alternative means at our disposal; and this second requirement clearly raises moral issues.
There is no need to extend the list of illustrations: any means of technical control that science makes available to us may be employed in many different ways, and a decision as to what use to make of it involves us in questions of moral valuation. And here arises a fundamental problem to which I would now like to turn: Can such valuational questions be answered by means of the objective methods of empirical science, which have been so successful in giving us reliable, and often practically applicable, knowledge of our world? Can those methods serve to establish objective criteria of right and wrong and thus to provide valid moral norms for the proper conduct of our individual and social affairs?

2. Scientific Testing

Let us approach this question by considering first, if only in brief and sketchy outline, the way in which objective scientific knowledge is arrived at. We may leave aside here the question of ways of discovery; i.e., the problem of how a new scientific idea arises, how a novel hypothesis or theory is first conceived; for our purposes it will suffice to consider the scientific ways of validation; i.e., the manner in which empirical science goes about examining a proposed new hypothesis and determines whether it is to be accepted or rejected. I will use the word "hypothesis" here to refer quite broadly to any statements or set of statements in empirical science, no matter whether it deals with some particular event or purports to set forth a general law or perhaps a more or less complex theory. As is well known, empirical science decides upon the acceptability of a proposed hypothesis by means of suitable tests. Sometimes such a test may involve nothing more than what might be called direct observation of pertinent facts.
This procedure may be used, for example, in testing such statements as "It is raining outside," "All the marbles in this urn are blue," "The needle of this ammeter will stop at the scale point marked 6," and so forth. Here a few direct observations will usually suffice to decide whether the hypothesis at hand is to be accepted as true or to be rejected as false. But most of the important hypotheses in empirical science cannot be tested in this simple manner. Direct observation does not suffice to decide, for example, whether to accept or to reject the hypotheses that the earth is a sphere, that hereditary characteristics are transmitted by genes, that all Indo-European languages developed from one common ancestral language, that light is an electromagnetic wave process, and so forth. With hypotheses such as these, science resorts to indirect methods of test and validation. While these methods vary greatly in procedural detail, they all have the same basic structure and rationale. First, from the hypothesis under test, suitable other statements are inferred which describe certain directly observable phenomena that should be found to occur under specifiable circumstances if the hypothesis is true; then those inferred statements are tested directly; i.e., by checking whether the specified phenomena do in fact occur; finally, the proposed hypothesis is accepted or rejected in the light of the outcome of these tests. For example, the hypothesis that the earth is spherical in shape is not directly testable by observation, but it permits us to infer that a ship moving away from the observer should appear to be gradually dropping below the horizon; that circumnavigation of the earth should be possible by following a straight course; that high-altitude photographs should show the curving of the earth's surface; that certain geodetic and astronomical measurements should yield such and such results; and so forth. 
Inferred statements such as these can be tested more or less directly; and as an increasing number and variety of them are actually borne out, the hypothesis becomes increasingly confirmed. Eventually, a hypothesis may be so well confirmed by the available evidence that it is accepted as having been established beyond reasonable doubt. Yet no scientific hypothesis is ever proved completely and definitively; there is always at least the theoretical possibility that new evidence will be discovered which conflicts with some of the observational statements inferred from the hypothesis, and which thus leads to its rejection. The history of science records many instances in which a once accepted hypothesis was subsequently abandoned in the light of adverse evidence.

3. Instrumental Judgments of Value

We now turn to the question whether this method of test and validation may be used to establish moral judgments of value, and particularly judgments to the effect that a specified course of action is good or right or proper, or that it is better than certain alternative courses of action, or that we ought—or ought not—to act in certain specified ways. By way of illustration, consider the view that it is good to raise children permissively and bad to bring them up in a restrictive manner. It might seem that, at least in principle, this view could be scientifically confirmed by appropriate empirical investigations. Suppose, for example, that careful research had established (1) that restrictive upbringing tends to generate resentment and aggression against parents and other persons exercising educational authority, and that this leads to guilt and anxiety and an eventual stunting of the child's initiative and creative potentialities; whereas (2) permissive upbringing avoids these consequences, makes for happier interpersonal relations, encourages resourcefulness and self-reliance, and enables the child to develop and enjoy his potentialities.
These statements, especially when suitably amplified, come within the purview of scientific investigation; and though our knowledge in the matter is in fact quite limited, let us assume, for the sake of the argument, that they had actually been strongly confirmed by careful tests. Would not scientific research then have objectively shown that it is indeed better to raise children in a permissive rather than in a restrictive manner? A moment's reflection shows that this is not so. What would have been established is rather a conditional statement; namely, that if our children are to become happy, emotionally secure, creative individuals rather than guilt-ridden and troubled souls, then it is better to raise them in a permissive than in a restrictive fashion. A statement like this represents a relative, or instrumental, judgment of value. Generally, a relative judgment of value states that a certain kind of action, M, is good (or that it is better than a given alternative M1) if a specified goal G is to be attained; or more accurately, that M is good, or appropriate, for the attainment of goal G. But to say that is tantamount to asserting either that, in the circumstances at hand, course of action M will definitely (or probably) lead to the attainment of G, or that failure to embark on course of action M will definitely (or probably) lead to the nonattainment of G. In other words, the instrumental value judgment asserts either that M is a (definitely or probably) sufficient means for attaining the end or goal G, or that it is a (definitely or probably) necessary means for attaining it. Thus, a relative, or instrumental, judgment of value can be reformulated as a statement which expresses a universal or a probabilistic kind of means-ends relationship, and which contains no terms of moral discourse—such as "good," "better," "ought to"—at all. And a statement of this kind surely is an empirical assertion capable of scientific test.

4. Categorical Judgments of Value

Unfortunately, this does not completely solve our problem; for after a relative judgment of value referring to a certain goal G has been tested and, let us assume, well confirmed, we are still left with the question of whether the goal G ought to be pursued, or whether it would be better to aim at some alternative goal instead. Empirical science can establish the conditional statement, for example, that if we wish to deliver an incurably ill person from intolerable suffering, then a large dose of morphine affords a means of doing so; but it may also indicate ways of prolonging the patient's life, if also his suffering. This leaves us with the question whether it is right to give the goal of avoiding hopeless human suffering precedence over that of preserving human life. And this question calls, not for a relative but for an absolute, or categorical, judgment of value to the effect that a certain state of affairs (which may have been proposed as a goal or end) is good, or that it is better than some specified alternative. Are such categorical value judgments capable of empirical test and confirmation? Consider, for example, the sentence "Killing is evil." It expresses a categorical judgment of value which, by implication, would also categorically qualify euthanasia as evil. Evidently, the sentence does not express an assertion that can be directly tested by observation; it does not purport to describe a directly observable fact. Can it be indirectly tested, then, by inferring from it statements to the effect that under specified test conditions such and such observable phenomena will occur? Again, the answer is clearly in the negative. Indeed, the sentence "Killing is evil" does not have the function of expressing an assertion that can be qualified as true or false; rather, it serves to express a standard for moral appraisal or a norm for conduct.
A categorical judgment of value may have other functions as well; for example, it may serve to convey the utterer's approval or disapproval of a certain kind of action, or his commitment to the standards of conduct expressed by the value judgment. Descriptive empirical import, however, is absent; in this respect a sentence such as "Killing is evil" differs strongly from, say, "Killing is condemned as evil by many religions," which expresses a factual assertion capable of empirical test. Categorical judgments of value, then, are not amenable to scientific test and confirmation or disconfirmation; for they do not express assertions but rather standards or norms for conduct. It was Max Weber, I believe, who expressed essentially the same idea by remarking that science is like a map: it can tell us how to get to a given place, but it cannot tell us where to go. Gunnar Myrdal, in his book An American Dilemma (p. 1052), stresses in a similar vein that "factual or theoretical studies alone cannot logically lead to a practical recommendation. A practical or valuational conclusion can be derived only when there is at least one valuation among the premises." Nevertheless, there have been many attempts to base systems of moral standards on the findings of empirical science; and it would be of interest to examine in some detail the reasoning which underlies those procedures. In the present context, however, there is room for only a few brief remarks on this subject. It might seem promising, for example, to derive judgments of value from the results of an objective study of human needs. But no cogent derivation of this sort is possible. For this procedure would presuppose that it is right, or good, to satisfy human needs—and this presupposition is itself a categorical judgment of value: it would play the role of a valuational premise in the sense of Myrdal's statement.
Furthermore, since there are a great many different, and partly conflicting, needs of individuals and of groups, we would require not just the general maxim that human needs ought to be satisfied, but a detailed set of rules as to the preferential order and degree in which different needs are to be met, and how conflicting claims are to be settled; thus, the valuational premise required for this undertaking would actually have to be a complex system of norms; hence, a derivation of valuational standards simply from a factual study of needs is out of the question. Several systems of ethics have claimed the theory of evolution as their basis; but they are in serious conflict with each other even in regard to their most fundamental tenets. Some of the major variants are illuminatingly surveyed in a chapter of G. G. Simpson's book, The Meaning of Evolution. One type, which Simpson calls a "tooth-and-claw ethics," glorifies a struggle for existence that should lead to a survival of the fittest. A second urges the harmonious adjustment of groups or individuals to one another so as to enhance the probability of their survival, while still other systems hold up as an ultimate standard the increased aggregation of organic units into higher levels of organization, sometimes with the implication that the welfare of the state is to be placed above that of the individuals belonging to it. It is obvious that these conflicting principles could not have been validly inferred from the theory of evolution—unless indeed that theory were self-contradictory, which does not seem very likely. But if science cannot provide us with categorical judgments of value, what then can serve as a source of unconditional valuations? This question may either be understood in a pragmatic sense, as concerned with the sources from which human beings do in fact obtain their basic values.
Or it may be understood as concerned with a systematic aspect of valuation; namely, the question where a proper system of basic values is to be found on which all other valuations may then be grounded. Science and Human Values 505 The pragmatic question comes within the purview of empirical science. Without entering into details, we may say here that a person's values—both those he professes to espouse and those he actually conforms to—are largely absorbed from the society in which he lives, and especially from certain influential subgroups to which he belongs, such as his family, his schoolmates, his associates on the job, his church, clubs, unions, and other groups. Indeed his values may vary from case to case depending on which of these groups dominates the situation in which he happens to find himself. In general, then, a person's basic valuations are no more the result of careful scrutiny and critical appraisal of possible alternatives than is his religious affiliation. Conformity to the standards of certain groups plays a very important role here, and only rarely are basic values seriously questioned. Indeed, in many situations, we decide and act unreflectively in an even stronger sense; namely, without any attempt to base our decisions on some set of explicit, consciously adopted, moral standards. Now, it might be held that this answer to the pragmatic version of our question reflects a regrettable human inclination to intellectual and moral inertia; but that the really important side of our question is the systematic one: If we do want to justify our decisions, we need moral standards of conduct of the unconditional type—but how can such standards be established? If science cannot provide categorical value judgments, are there any other sources from which they might be obtained? 
Could we not, for example, validate a system of categorical judgments of value by pointing out that it represents the moral standards held up by the Bible, or by the Koran, or by some inspiring thinker or social leader? Clearly, this procedure must fail, for the factual information here adduced could serve to validate the value judgments in question only if we were to use, in addition, a valuational presupposition to the effect that the moral directives stemming from the source invoked ought to be complied with. Thus, if the process of justifying a given decision or a moral judgment is ever to be completed, certain judgments of value have to be accepted without any further justification, just as the proof of a theorem in geometry requires that some propositions be accepted as postulates, without proof. The quest for a justification of all our valuations overlooks this basic characteristic of the logic of validation and of justification. The value judgments accepted without further justification in a given context need not, however, be accepted once and for all, with a commitment never to question them again. This point will be elaborated further in the final section of this essay. As will hardly be necessary to stress, in concluding the present phase of our discussion, the ideas set forth in the preceding pages do not imply or advocate moral anarchy; in particular, they do not imply that any system of values is just as good, or just as valid, as any other, or that everyone should adopt the moral principles that best suit his convenience. For all such maxims have the character of categorical value judgments and cannot, therefore, be implied by the preceding considerations, which are purely descriptive of certain logical, psychological, and social aspects of moral valuation. 5.
Rational Choice: Empirical and Valuational Components To gain further insight into the relevance of scientific inquiry for categorical valuation let us ask what help we might receive, in dealing with a moral problem, from science in an ideal state such as that represented by Laplace's conception of a superior scientific intelligence, sometimes referred to as Laplace's demon. This fiction was used by Laplace, early in the nineteenth century, to give a vivid characterization of the idea of universal causal determinism. The demon is conceived as a perfect observer, capable of ascertaining with infinite speed and accuracy all that goes on in the universe at a given moment; he is also an ideal theoretician who knows all the laws of nature and has combined them into one universal formula; and finally, he is a perfect mathematician who, by means of that universal formula, is able to infer, from the observed state of the universe at the given moment, the total state of the universe at any other moment; thus past and future are present before his eyes. Surely, it is difficult to imagine that science could ever achieve a higher degree of perfection! Let us assume, then, that, faced with a moral decision, we are able to call upon the Laplacean demon as a consultant. What help might we get from him? Suppose that we have to choose one of several alternative courses of action open to us, and that we want to know which of these we ought to follow. The demon would then be able to tell us, for any contemplated choice, what its consequences would be for the future course of the universe, down to the most minute detail, however remote in space and time. But, having done this for each of the alternative courses of action under consideration, the demon would have completed his task: he would have given us all the information that an ideal science might provide under the circumstances. 
And yet he would not have resolved our moral problem, for this requires a decision as to which of the several alternative sets of consequences mapped out by the demon as attainable to us is the best; which of them we ought to bring about. And the burden of this decision would still fall upon our shoulders: it is we who would have to commit ourselves to an unconditional judgment of value by singling out one of the sets of consequences as superior to its alternatives. Even Laplace's demon, or the ideal science he stands for, cannot relieve us of this responsibility. In drawing this picture of the Laplacean demon as a consultant in decision-making, I have cheated a little; for if the world were as strictly deterministic as Laplace's fiction assumes, then the demon would know in advance what choice we were going to make, and he might disabuse us of the idea that there were several courses of action open to us. However that may be, contemporary physical theory has cast considerable doubt on the classical conception of the universe as a strictly deterministic system: the fundamental laws of nature are now assumed to have a statistical or probabilistic rather than a strictly universal, deterministic, character. But whatever may be the form and the scope of the laws that hold in our universe, we will obviously never attain a perfect state of knowledge concerning them; confronted with a choice, we never have more than a very incomplete knowledge of the laws of nature and of the state of the world at the time when we must act. Our decisions must therefore always be made on the basis of incomplete information, a state which enables us to anticipate the consequences of alternative choices at best with probability.
Science can render an indispensable service by providing us with increasingly extensive and reliable information relevant to our purpose; but again it remains for us to evaluate the various probable sets of consequences of the alternative choices under consideration. And this requires the adoption of pertinent valuational standards which are not objectively determined by the empirical facts. This basic point is reflected also in the contemporary mathematical theories of decision-making. One of the objectives of these theories is the formulation of decision rules which will determine an optimal choice in situations where several courses of action are available. For the formulation of decision rules, these theories require that at least two conditions be met: (1) Factual information must be provided specifying the available courses of action and indicating for each of these its different possible outcomes—plus, if feasible, the probabilities of their occurrence; (2) there must be a specification of the values—often prosaically referred to as utilities—that are attached to the different possible outcomes. Only when these factual and valuational specifications have been provided does it make sense to ask which of the available choices is the best, considering the values attaching to their possible results. In mathematical decision theory, several criteria of optimal choice have been proposed. In case the probabilities for the different outcomes of each action are given, one standard criterion qualifies a choice as optimal if the probabilistically expectable utility of its outcome is at least as great as that of any alternative choice. Other rules, such as the maximin and the maximax principles, provide criteria that are applicable even when the probabilities of the outcomes are not available. But interestingly, the various criteria conflict with each other in the sense that, for one and the same situation, they will often select different choices as optimal.
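The conflict among criteria described here can be made concrete in a small computation. The sketch below is illustrative only: the two courses of action, their outcome utilities, and the probabilities are hypothetical numbers, chosen so that the criteria disagree.

```python
# Illustrative sketch (not from the text): three criteria of optimal choice
# applied to one and the same situation. All numbers are hypothetical.

# Two courses of action, each with two possible outcomes (utilities).
utilities = {"venture": [10, -5], "cautious": [2, 1]}
probs = [0.5, 0.5]  # outcome probabilities, when they are available

def expected_utility(u):  # standard criterion: maximize expectable utility
    return sum(p * x for p, x in zip(probs, u))

def maximin(u):           # pessimistic: choose the best worst case
    return min(u)

def maximax(u):           # optimistic: choose the best best case
    return max(u)

for name, rule in [("expected utility", expected_utility),
                   ("maximin", maximin),
                   ("maximax", maximax)]:
    best = max(utilities, key=lambda a: rule(utilities[a]))
    print(f"{name}: choose {best}")

# Here expected utility and maximax favor "venture", while maximin
# favors "cautious"—the criteria conflict, just as the text notes.
```

The divergence reflects exactly the point at issue: the facts (outcomes and probabilities) are held fixed, yet the "optimal" choice still depends on which decision rule, i.e., which attitude toward risk, one adopts.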
The policies expressed by the conflicting criteria may be regarded as reflecting different attitudes toward the world, different degrees of optimism or pessimism, of venturesomeness or caution. It may be said therefore that the analysis offered by current mathematical models indicates two points at which decision-making calls not solely for factual information, but for categorical valuation, namely, in the assignment of utilities to the different possible outcomes and in the adoption of one among many competing decision rules or criteria of optimal choice. . . . 6. Valuational "Presuppositions" of Science The preceding three sections have been concerned mainly with the question whether, or to what extent, valuation and decision presuppose scientific investigation and scientific knowledge. This problem has a counterpart which deserves some attention in a discussion of science and valuation; namely, the question whether scientific knowledge and method presuppose valuation. The word "presuppose" may be understood in a number of different senses which require separate consideration here. First of all, when a person decides to devote himself to scientific work rather than to some other career, and again, when a scientist chooses some particular topic of investigation, these choices will presumably be determined to a large extent by his preferences, i.e., by how highly he values scientific research in comparison with the alternatives open to him, and by the importance he attaches to the problems he proposes to investigate. In this explanatory, quasi-causal sense the scientific activities of human beings may certainly be said to presuppose valuations. Much more intriguing problems arise, however, when we ask whether judgments of value are presupposed by the body of scientific knowledge, which might be represented by a system of statements accepted in accordance with the rules of scientific inquiry.
Here presupposing has to be understood in a systematic-logical sense. One such sense is invoked when we say, for example, that the statement "Henry's brother-in-law is an engineer" presupposes that Henry has a wife or a sister: in this sense, a statement presupposes whatever can be logically inferred from it. But, as we noted earlier, no set of scientific statements logically implies an unconditional judgment of value; hence, scientific knowledge does not, in this sense, presuppose valuation. There is another logical sense of presupposing, however. We might say, for example, that in Euclidean geometry the angle-sum theorem for triangles presupposes the postulate of the parallels in the sense that that postulate is an essential part of the basic assumptions from which the theorem is deduced. Now, the hypotheses and theories of empirical science are not normally validated by deduction from supporting evidence (though it may happen that a scientific statement, such as a prediction, is established by deduction from a previously ascertained, more inclusive set of statements); rather, as was mentioned in section 2, they are usually accepted on the basis of evidence that Science and Human Values 509 lends them only partial, or "inductive," support. But in any event it might be asked whether the statements representing scientific knowledge presuppose valuation in the sense that the grounds on which they are accepted include, sometimes or always, certain unconditional judgments of value. Again the answer is in the negative. The grounds on which scientific hypotheses are accepted or rejected are provided by empirical evidence, which may include observational findings as well as previously established laws and theories, but surely no value judgments. 
Suppose for example that, in support of the hypothesis that a radiation belt of a specified kind surrounds the earth, a scientist were to adduce, first, certain observational data, obtained perhaps by rocket-borne instruments; second, certain previously accepted theories invoked in the interpretation of those data; and finally, certain judgments of value, such as "it is good to ascertain the truth." Clearly, the judgments of value would then be dismissed as lacking all logical relevance to the proposed hypothesis since they can contribute neither to its support nor to its disconfirmation. But the question whether science presupposes valuation in a logical sense can be raised, and recently has been raised, in yet another way, referring more specifically to valuational presuppositions of scientific method. In the preceding considerations, scientific knowledge was represented by a system of statements which are sufficiently supported by available evidence to be accepted in accordance with the principles of scientific test and validation. We noted that as a rule the observational evidence on which a scientific hypothesis is accepted is far from sufficient to establish that hypothesis conclusively. For example, Galileo's law refers not only to past instances of free fall near the earth, but also to all future ones; and the latter surely are not covered by our present evidence. Hence, Galileo's law, and similarly any other law in empirical science, is accepted on the basis of incomplete evidence. Such acceptance carries with it the "inductive risk" that the presumptive law may not hold in full generality, and that future evidence may lead scientists to modify or abandon it. A precise statement of this conception of scientific knowledge would require, among other things, the formulation of rules of two kinds: First, rules of confirmation, which would specify what kind of evidence is confirmatory, what kind disconfirmatory for a given hypothesis.
Perhaps they would also determine a numerical degree of evidential support (or confirmation, or inductive probability) which a given body of evidence could be said to confer upon a proposed hypothesis. Secondly, there would have to be rules of acceptance: these would specify how strong the evidential support for a given hypothesis has to be if the hypothesis is to be accepted into the system of scientific knowledge; or, more generally, under what conditions a proposed hypothesis is to be accepted, under what conditions it is to be rejected by science on the basis of a given body of evidence. Recent studies of inductive inference and statistical testing have devoted a great deal of effort to the formulation of adequate rules of either kind. In particular, rules of acceptance have been treated in many of these investigations as special instances of decision rules of the sort mentioned in the preceding section. The decisions in question are here either to accept or to reject a proposed hypothesis on the basis of given evidence. As was noted earlier, the formulation of "adequate" decision rules requires, in any case, the antecedent specification of valuations that can then serve as standards of adequacy. The requisite valuations, as will be recalled, concern the different possible outcomes of the choices which the decision rules are to govern. Now, when a scientific rule of acceptance is applied to a specified hypothesis on the basis of a given body of evidence, the possible "outcomes" of the resulting decision may be divided into four major types: (1) the hypothesis is accepted (as presumably true) in accordance with the rule and is in fact true; (2) the hypothesis is rejected (as presumably false) in accordance with the rule and is in fact false; (3) the hypothesis is accepted in accordance with the rule, but is in fact false; (4) the hypothesis is rejected in accordance with the rule, but is in fact true.
The former two cases are what science aims to achieve; the possibility of the latter two represents the inductive risk that any acceptance rule must involve. And the problem of formulating adequate rules of acceptance and rejection has no clear meaning unless standards of adequacy have been provided by assigning definite values or disvalues to those different possible "outcomes" of acceptance or rejection. It is in this sense that the method of establishing scientific hypotheses "presupposes" valuation: the justification of the rules of acceptance and rejection requires reference to value judgments. In the cases where the hypothesis under test, if accepted, is to be made the basis of a specific course of action, the possible outcomes may lead to success or failure of the intended practical application; in these cases, the values and disvalues at stake may well be expressible in terms of monetary gains or losses; and for situations of this sort, the theory of decision functions has developed various decision rules for use in practical contexts such as industrial quality control. But when it comes to decision rules for the acceptance of hypotheses in pure scientific research, where no practical applications are contemplated, the question of how to assign values to the four types of outcome mentioned earlier becomes considerably more problematic. But in a general way, it seems clear that the standards governing the inductive procedures of pure science reflect the objective of obtaining a certain goal, which might be described somewhat vaguely as the attainment of an increasingly reliable, extensive, and theoretically systematized body of information about the world. 
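The treatment of acceptance rules as decision rules can be sketched in miniature. In the toy function below, everything is hypothetical: given a probability p that the hypothesis is true in light of the evidence, and utilities assigned to the four outcome types just distinguished, the rule accepts exactly when acceptance has at least the expected utility of rejection.

```python
# Illustrative sketch (not from the text): an acceptance rule derived from
# utilities assigned to the four possible "outcomes". All figures are
# hypothetical, in the spirit of an industrial quality-control setting.

def accept(p, u_accept_true, u_reject_false, u_accept_false, u_reject_true):
    """Accept the hypothesis iff acceptance has at least the
    expected utility of rejection, given probability p of truth."""
    eu_accept = p * u_accept_true + (1 - p) * u_accept_false
    eu_reject = p * u_reject_true + (1 - p) * u_reject_false
    return eu_accept >= eu_reject

# Suppose wrongly accepting (outcome 3) is far costlier than wrongly
# rejecting (outcome 4), as when a bad batch reaches customers:
print(accept(0.80, u_accept_true=1, u_reject_false=1,
             u_accept_false=-10, u_reject_true=-1))  # rejected: risk too high
print(accept(0.95, u_accept_true=1, u_reject_false=1,
             u_accept_false=-10, u_reject_true=-1))  # accepted
```

The point of the sketch is that the evidential probability alone settles nothing: until disvalues are attached to the two kinds of error, the question of how strong the evidence "has to be" has no determinate answer.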
Note that if we were concerned, instead, to form a system of beliefs or a world view that is emotionally reassuring or esthetically satisfying to us, then it would not be reasonable at all to insist, as science does, on a close accord between the beliefs we accept and our empirical evidence; and the standards of objective testability and confirmation by publicly ascertainable evidence would have to be replaced by acceptance standards of an entirely different kind. The standards of procedure must in each case be formed in consideration of the goals to be attained; their justification must be relative to those goals and must, in this sense, presuppose them. 7. Concluding Comparisons If, as has been argued in section 4, science cannot provide a validation of categorical value judgments, can scientific method and knowledge play any role at all in clarifying and resolving problems of moral valuation and decision? The answer is emphatically in the affirmative. I will try to show this in a brief survey of the principal contributions science has to offer in this context. First of all, science can provide factual information required for the resolution of moral issues. Such information will always be needed, for no matter what system of moral values we may espouse—whether it be egoistic or altruistic, hedonistic or utilitarian, or of any other kind—surely the specific course of action it enjoins us to follow in a given situation will depend upon the facts about that situation; and it is scientific knowledge and investigation that must provide the factual information which is needed for the application of our moral standards.
More specifically, factual information is needed, for example, to ascertain (a) whether a contemplated objective can be attained in a given situation; (b) if it can be attained, by what alternative means and with what probabilities; (c) what side effects and ulterior consequences the choice of a given means may have apart from probably yielding the desired end; (d) whether several proposed ends are jointly realizable, or whether they are incompatible in the sense that the realization of some of them will definitely or probably prevent the realization of others. By thus giving us information which is indispensable as a factual basis for rational and responsible decision, scientific research may well motivate us to change some of our valuations. If we were to discover, for example, that a certain kind of goal which we had so far valued very highly could be attained only at the price of seriously undesirable side effects and ulterior consequences, we might well come to place a less high value upon that goal. Thus, more extensive scientific information may lead to a change in our basic valuations—not by "disconfirming" them, of course, but rather by motivating a change in our total appraisal of the issues in question. Secondly, and in a quite different manner, science can illuminate certain problems of valuation by an objective psychological and sociological study of the factors that affect the values espoused by an individual or a group; of the ways in which such valuational commitments change; and perhaps of the manner in which the espousal of a given value system may contribute to the emotional security of an individual or the functional stability of a group. Psychological, anthropological, and sociological studies of valuational behavior cannot, of course, "validate" any system of moral standards.
But their results can psychologically effect changes in our outlook on moral issues by broadening our horizons, by making us aware of alternatives not envisaged, or not embraced, by our own group, and by thus providing some safeguard against moral dogmatism or parochialism. Finally, a comparison with certain fundamental aspects of scientific knowledge may help to illuminate some further questions concerning valuation. If we grant that scientific hypotheses and theories are always open to revision in the light of new empirical evidence, are we not obliged to assume that there is another class of scientific statements which cannot be open to doubt and reconsideration, namely, the observational statements describing experiential findings that serve to test scientific theories? Those simple, straightforward reports of what has been directly observed in the laboratory or in scientific field work, for example—must they not be regarded as immune from any conceivable revision, as irrevocable once they have been established by direct observation? Reports on directly observed phenomena have indeed often been considered as an unshakable bedrock foundation for all scientific hypotheses and theories. Yet this conception is untenable; even here, we find no definitive, unquestionable certainty. For, first of all, accounts of what has been directly observed are subject to error that may spring from various physiological and psychological sources. Indeed, it is often possible to check on the accuracy of a given observation report by comparing it with the reports made by other observers, or with relevant data obtained by some indirect procedure, such as a motion picture taken of the finish of a horse race; and such comparison may lead to the rejection of what had previously been considered as a correct description of a directly observed phenomenon. 
We even have theories that enable us to explain and anticipate some types of observational error, and in such cases, there is no hesitation to question and to reject certain statements that purport simply to record what has been directly observed. Sometimes relatively isolated experimental findings may conflict with a theory that is strongly supported by a large number and variety of other data; in this case, it may well happen that part of the conflicting data, rather than the theory, is refused admission into the system of accepted scientific statements—even if no satisfactory explanation of the presumptive error of observation is available. In such cases it is not the isolated observational finding which decides whether the theory is to remain in good standing, but it is the previously well-substantiated theory which determines whether a purported observation report is to be regarded as describing an actual empirical occurrence. For example, a report that during a spiritualistic seance, a piece of furniture freely floated above the floor would normally be rejected because of its conflict with extremely well-confirmed physical principles, even in the absence of some specific explanation of the report, say, in terms of deliberate fraud by the medium, or of high suggestibility on the part of the observer. Similarly, the experimental findings reported by the physicist Ehrenhaft, which were claimed to refute the principle that all electric charges are integral multiples of the charge of the electron, did not lead to the overthrow, nor even to a slight modification, of that principle, which is an integral part of a theory with extremely strong and diversified experimental support.
Needless to say, such rejection of alleged observation reports by reason of their conflict with well-established theories requires considerable caution; otherwise, a theory, once accepted, could be used to reject all adverse evidence that might subsequently be found—a dogmatic procedure entirely irreconcilable with the objectives and the spirit of scientific inquiry. Even reports on directly observed phenomena, then, are not irrevocable; they provide no bedrock foundation for the entire system of scientific knowledge. But this by no means precludes the possibility of testing scientific theories by reference to data obtained through direct observation. As we noted, the results obtained by such direct checking cannot be considered as absolutely unquestionable and irrevocable; they are themselves amenable to further tests which may be carried out if there is reason for doubt. But obviously if we are ever to form any beliefs about the world, if we are ever to accept or to reject, even provisionally, some hypothesis or theory, then we must stop the testing process somewhere; we must accept some evidential statements as sufficiently trustworthy not to require further investigation for the time being. And on the basis of such evidence, we can then decide what credence to give to the hypothesis under test, and whether to accept or to reject it. This aspect of scientific investigation seems to me to have a parallel in the case of sound valuation and rational decision. In order to make a rational choice between several courses of action, we have to consider, first of all, what consequences each of the different alternative choices is likely to have. This affords a basis for certain relative judgments of value that are relevant to our problem. If this set of results is to be attained, this course of action ought to be chosen; if that other set of results is to be realized, we should choose such and such another course; and so forth. 
But in order to arrive at a decision, we still have to decide upon the relative values of the alternative sets of consequences attainable to us; and this, as was noted earlier, calls for the acceptance of an unconditional judgment of value, which will then determine our choice. But such acceptance need not be regarded as definitive and irrevocable, as forever binding for all our future decisions: an unconditional judgment of value, once accepted, still remains open to reconsideration and to change. Suppose, for example, that we have to choose, as voters or as members of a city administration, between several alternative social policies, some of which are designed to improve certain material conditions of living, whereas others aim at satisfying cultural needs of various kinds. If we are to arrive at a decision at all, we will have to commit ourselves to assigning a higher value to one or the other of those objectives. But while the judgment thus accepted serves as an unconditional and basic judgment of value for the decision at hand, we are not for that reason committed to it forever—we may well reconsider our standards and reverse our judgment later on; and though this cannot undo the earlier decision, it will lead to different decisions in the future. Thus, if we are to arrive at a decision concerning a moral issue, we have to accept some unconditional judgments of value; but these need not be regarded as ultimate in the absolute sense of being forever binding for all our decisions, any more than the evidence statements relied on in the test of a scientific hypothesis need to be regarded as forever irrevocable. All that is needed in either context are relative ultimates, as it were: a set of judgments—moral or descriptive—which are accepted at the time as not in need of further scrutiny.
These relative ultimates permit us to keep an open mind in regard to the possibility of making changes in our heretofore unquestioned commitments and beliefs; and surely the experience of the past suggests that if we are to meet the challenge of the present and the future, we will more than ever need undogmatic, critical, and open minds. 31 Values in Science Ernan McMullin Thirty years ago, Richard Rudner argued in a brief essay in Philosophy of Science that the making of value-judgments is an essential part of the work of science. He fully realized how repugnant such a claim would be to the positivist orthodoxy of the day, so repugnant indeed that its acceptance (he prophesied) would bring about a "first-order crisis in science and methodology" (1953, p. 6). Carnap, in particular, had been emphatic in excluding values from any role in science proper. His theory of meaning had led him to conclude that "the objective validity of a value . . . cannot be asserted in a meaningful statement at all" (1932/1959, p. 77). The contrast between science, the paradigm of meaning, and all forms of value-judgment could scarcely have been more sharply drawn: "it is altogether impossible to make a statement that expresses a value-judgment." No wonder, then, that Rudner's thesis seemed so shocking. Thirty years later, the claim that science is value-laden might no longer even seem controversial, among philosophers of science, at least, who have become accustomed to seeing the pillars of positivism fall, one by one. One might even characterize the recent deep shifts in theory of science as consequences (many of them, at least) of the growing realization of the part played by value-judgment in scientific work. If this way of describing the Kuhnian "revolution" seems unfamiliar, it is no doubt due in part to the uneasiness that the ambiguity of the terms "value" and "value-judgment" still engenders.
There are other ways of describing what has happened since the 1950s in philosophy of science that do not require so much preliminary ground-clearing. Nevertheless, I shall try to show that the watershed between "classic" philosophy of science (by this meaning, not just logical positivism but the logicist tradition in theory of science stretching back through Immanuel Kant and René Descartes to Aristotle) and the "new" philosophy of science can best be understood by analyzing the change in our perception of the role played by values in science. I shall begin with some general remarks about the nature of value, go on to explore some of the historical sources for the claim that judgment in science is value-laden, and conclude by reflecting on the implications of this claim for traditional views of the objectivity of scientific knowledge-claims. I will not address the problem of the social sciences, where these issues take on an added complexity. They are, as we shall soon see, already complicated enough in the context of natural sciences. I. The Anatomy of Value "Value" is one of those weasel words that slip in and out of the nets of the philosopher. We shall have to try to catch it first, or else what we have to say about the role of values in science may be of small use. It is not much over a hundred years since the German philosopher, Hermann Lotze, tried to construct a single theory of value which would unite the varied value-aspects of human experience under a single discipline. The venture was, of course, not really new since Plato had attempted a similar project long before, using the cognate term "good" instead of "value." Aristotle's response to Plato's positing of the Good as a common element answering to one idea was to point to the great diversity of ways in which the term "good" might be used.
In effect, our response to Lotze's project of a general axiology would likewise be to question the usefulness of trying to find a single notion of value that would apply to all contexts equally well. Let us begin with the sense of "value" that the founders of value-theory seem to have preferred. They took it to correspond to such features of human experience as attraction, emotion, and feeling. They wanted to secure an experiential basis for value in order to give the realm of value an empirical status just as valid as that of the (scientific) realm of fact. The reality of emotive value (as it may be called) lies in the feelings of the subject, not primarily in a characteristic of the object. Value-differences amount, then, to differences of attitude or of emotional response in specific subjects. If one takes "value" in this sense, value-decision becomes a matter of clarifying emotional responses. To speak of value-judgment here (as indeed is often done) is on the whole misleading, since "judgment" could suggest a cognitive act, a weighing-up. When the value of something is determined by one's attitude to it, the declaration of this value is a matter of value-clarification rather than of judgment, strictly speaking. It was primarily from this sense of value that the popular positivist distinction between differences of belief and differences of attitude took its origin, though C. L. Stevenson (who, when specifying his own notion of attitude, recalls R. B. Perry's definition of "interest" as a psychological disposition to be for or against something) allows that value-differences may have components both of attitude and belief (1949, p. 591). It seems plausible to hold that emotive values are alien to the work of natural science. There is no reason to think that human emotionality is a trustworthy guide to the structures of the natural world.
Indeed, there is every reason, historically speaking, to view emotive values, as Francis Bacon did, as potentially distortive "Idols," projecting in anthropomorphic fashion the pattern of human wants, desires, and emotions on a world where they have no place. When "ideology" is understood as a systematization of such values, it automatically becomes a threat to the integrity of science. The notion of value which is implicit in much recent social history of science, as well as in many analyses of the science-ideology relationship, is clearly that of emotive value. A second kind of "value" is more important for our quest. A property or set of properties may count as a value in an entity of a particular kind because it is desirable for an entity of that kind. (The same property in a different entity might not count as a value.) The property can be a desirable one for various sorts of reasons. Speed is a desirable trait in wild antelope because it aids survival. Sound heart action is desirable in an organism with a circulatory system because of the functional needs of the organism. A retentive memory is desirable for a lawyer because of the nature of the lawyer's task. Sharpness is desirable in a knife because of the way in which it functions as a utensil. Efficiency is desirable in a business firm if the firm is to accomplish the ordinary ends of business. . . . Let us focus on what these examples have in common. (In another context, we might be more concerned about their differences.) In each case, the desirable property is an objective characteristic of the entity. We can thus call it a characteristic value. In some cases, it is relative to a pattern of human ends; in others, it is not. In some cases, a characteristic value is a means to an end served by the entity possessing it; in others, it is not. In all cases, it serves to make its possessor function better as an entity of that kind. Assessment of characteristic values can take on two quite different forms.
One can judge the extent to which a particular entity realizes the value. We may be said to evaluate when we judge the speediness of a particular antelope or the heartbeat of a particular patient. On the other hand, we may be asked to judge whether or not (or to what extent) this characteristic really is a value for this kind of entity. How much do we value the characteristic? Here we are dealing, not with particulars, but with the more abstract relation of characteristic and entity under a particular description. Why ought one value speed in an antelope, rather than strength, say? How important is a retentive memory to a lawyer? The logical positivists stressed the distinction between these two types of value-judgment, what I have called evaluation and valuing.1 Valuing they took to be subjective and thus foreign to science. Evaluation, however, may be permissible because it expresses "an estimate of the degree to which some commonly recognized (and more or less clearly defined) type of action, object, or institution is embodied in a given instance" (Nagel 1961, p. 492).2 Notice the presupposition here: clear definition of the characteristic is required in order that there be a standard against which an estimate may be made. It was already a large concession to allow a role for mere estimation (as against measurement proper) in science; no further concession would be allowed. Value-judgment, in the sense of evaluation, could thus fall on the side of the factual, and the old dichotomy between fact and value could still be maintained. Value-judgment in the sense either of valuing or of evaluating, where the characteristic value is not sharply defined, was still to be rigorously excluded from science. Such value-judgment (so the argument went) is necessarily subjective; it involves a decision which is not rule-guided, and therefore has an element of the arbitrary.
It intrudes individual human norms into what should ideally (if it were to be properly scientific) be an impersonal mapping of propositions onto the world. What was offensive about value-judgment, then, was not its concern with characteristic values. Indeed, when such values are measured (when, for example, human blood-pressure is measured as a means to determining any departure from "normality"), the results are obviously "scientific" in the most conservative sense. Not every judgment in regard to characteristic value counts therefore as a "value-judgment," as this term has come to be used. Such a judgment must not only be concerned with value, but must function, not as measurement does, but in a nonmechanical, individual way. Since it is a matter of experience and skill, individual differences in judgment can thus in the normal course be expected. It is clear, therefore, where the tension arises between value-judgment and not only the positivist view of science but the entire classical theory of science back to Aristotle. Max Weber spoke for that long tradition when, in his effort to eliminate value-judgment from social science, he opposed any form of assessment which could not immediately be enforced on all. The objectivity of science (he insisted) requires public norms accessible to all, and interpreted by all in the same way (Weber 1917). What I want to argue here is that value-judgment, in just the sense that Weber deplored, does play a central role in science. Both evaluation and valuing are involved. The attempt to construe all forms of scientific reasoning as forms of deductive or inductive inference fails. The sense of my claim that science is value-laden is that there are certain characteristic epistemic values which are integral to the entire process of assessment in science. Since my topic is "values in science," there are, however, some other construals of this title that ought to be briefly addressed first, in order to be laid aside.
II. Other Construals

One value, namely truth itself, has always been recognized as permeating science. In the classic account, it was in fact the goal of the entire enterprise. Unlike other values, it was deemed to have nothing of the personal about it. On the contrary, it connoted an objective relation of proposition and world and thus was constitutive of the very category of fact itself. But this was not thought to weaken the maxim that science should be value-free, because the values that were thus being enjoined from intrusion into the work of science were the particular ones that would tend to compromise the objectivity of the effort and not the transcendental one which defined the tradition of science itself. There has been much debate in recent philosophy of science about the sense in which truth can still be taken to be constitutive of science. The correspondence view of truth as a matching of language and mind-independent reality has been assailed by Ludwig Wittgenstein and many other more recent critics like Hilary Putnam and Richard Rorty. More to the point here, it seems clear that when a scientist "accepts" a theory, even a long-held theory, he is not claiming that it is true. The predicate in terms of which theory is valued is not truth, as the earlier account held it to be. We speak of a theory as being "well-supported," "rationally acceptable," or the like. To speak of it as true would suggest that a later anomaly that would force a revision or even abandonment of the theory can in principle be excluded. The recent history of science would make both scientists and philosophers wary of any such presumption, except perhaps in cases of very limited theories or ones which are vaguely stated. It can, however, be argued that truth is still a sort of horizon-concept or ideal of the scientific enterprise, even though we may not be able to assert truth in a definitive manner of any component of science along the way.
There are many variations of this view, one which was clearly articulated a century ago by C. S. Peirce. I do not intend to discuss this issue further here (though I will return to it obliquely in my conclusion), because to argue that truth is at least in some sense a characteristic value admissible in science is hardly novel, and does not constitute the point of division with classic logicist theories of science that I am seeking to identify. Nor am I concerned here with ethical values. Weber and the positivists of the last century and this one recognized that the work of science makes ethical demands on its practitioners, demands of honesty, openness, and integrity. Science is a communal work. It cannot succeed unless results are honestly reported, unless every reasonable precaution be taken to avoid experimental error, unless evidence running counter to one's own view is fairly handled, and so on. These are severe demands, and scientists do not always live up to them. Outright fraud, as we have been made uncomfortably aware in recent years, does occur. But so far as we can tell, it is rare and does not threaten the integrity of the research enterprise generally. In any event, there never has been any disagreement about the value-ladenness of science where moral values of this kind are concerned. If I am to make a claim about a change in regard to the recognition of the proper presence in science of value-judgment, it cannot be in regard to those moral values which have always been seen as essential to the success of communal inquiry.3 In support of his claim that "value-judgments are essentially involved in the procedures of science," Rudner argued that the acceptance of a scientific hypothesis:

is going to be a function of the importance, in the typically ethical sense, of making a mistake in accepting or rejecting the hypothesis.
Thus, to take a crude but easily manageable example, if the hypothesis under consideration were to the effect that a toxic ingredient of a drug was not present in lethal quantity, we would require a relatively high degree of confirmation or confidence before accepting the hypothesis, for the consequences of making a mistake here are exceedingly grave by our moral standards. (1953, p. 2) This notion of hypothesis "acceptance" is dangerously ambiguous. Rudner takes it to mean: "approve as a basis for a specific kind of action." But acceptance in this sense is not part of theoretical science, strictly speaking. When a physicist "accepts" a particular theory, this can mean that he believes it to be the best-supported of the alternatives available or that he sees it as offering the most fruitful research-program for the immediate future. These are epistemic assessments; they attach no values to the theoretical alternatives other than those of likelihood or probable fertility. On the other hand, if theory is being applied to practical ends, and the theoretical alternatives carry with them outcomes of different value to the agents concerned, we have the typical decision-theoretic grid involving not only likelihood estimates but also "utilities" of one sort or another. Such utilities are irrelevant to theoretical science proper and the scientist is not called upon to make value-judgments in their regard as part of his scientific work. The values of life and death involved in a decision to use or not to use a possibly toxic drug in a case where it alone seems to offer a chance of recovery are not relevant to the much more limited question as to whether or not the drug would be toxic for this patient. The utilities typically associated with the application of science to human ends in medicine, engineering and the like, cannot, therefore, be cited as a reason for holding natural science itself to be value-laden. 
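The decision-theoretic grid mentioned above can be given a minimal sketch. The function and numbers below are illustrative assumptions, not drawn from Rudner or McMullin; they simply formalize the point that the confidence one should demand before "accepting" a hypothesis as a basis for action rises with the gravity of the consequences of error:

```python
# A minimal decision-theoretic sketch (illustrative, not from the text):
# the confidence required before accepting a hypothesis H as a basis for
# action depends on the utilities (costs) attached to the possible errors.

def min_confidence_to_accept(cost_false_accept: float,
                             cost_false_reject: float) -> float:
    """Probability of H above which accepting H minimizes expected cost.

    Wrongly accepting H costs cost_false_accept; wrongly rejecting H
    costs cost_false_reject; correct decisions cost nothing.
      expected cost of accepting = (1 - p) * cost_false_accept
      expected cost of rejecting = p * cost_false_reject
    Accepting is the better bet only when
      p > cost_false_accept / (cost_false_accept + cost_false_reject).
    """
    return cost_false_accept / (cost_false_accept + cost_false_reject)

# Routine case: both errors equally costly, so accept once p > 0.5.
print(min_confidence_to_accept(1.0, 1.0))

# Rudner's toxic-drug case: wrongly accepting "no lethal dose present"
# is far graver (here, 100x) than the opposite error, so a much higher
# degree of confirmation is demanded before acting on the hypothesis.
print(min_confidence_to_accept(100.0, 1.0))
```

On McMullin's analysis, such utilities belong to the application of science, not to theoretical assessment itself, which is precisely why this grid cannot show that theoretical science is value-laden.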
The conclusion that Rudner draws from his analysis of hypothesis-"acceptance" is that "a science of ethics is a necessary requirement if science's progress toward objectivity is to be continuous." But scientists are (happily!) not called on to "accept" hypotheses in the sense he is presupposing,4 and so his conclusion does not go through.5 If we are to hold that the work of science is value-laden, it ought to be for another reason. My argument for the effective presence of "values in science" does not, then, refer to the constitutive role in science of the value of truth; nor does it refer to the ethical values required for the success of science as a communal activity, or to the values implicit in decision-making in applied science. Rather, it is directed to showing that the appraisal of theory is in important respects closer in structure to value-judgment than it is to the rule-governed inference that the classic tradition in philosophy of science took for granted. Not surprisingly, the recognition of this crucial epistemological shift has been slow and painful. Already there are intimations of it among the more perceptive nineteenth-century philosophers of science. William Whewell, for example, describes a process very like value-judgment in his influential account of the "consilience of inductions," though he draws back from the threatening subjectivism of this line of thought, asserting that consilience will amount to "demonstration" in the long run (Laudan 1981). The logical positivists, as already noted, resolutely turned the theory of science back into the older logicist channels once more. Yet as they (and their critics) tried to characterize the strategies of science in closer detail, doubts began to grow. To these earlier anticipations of our theme, I now briefly turn.

III.
Anticipations

The prevailing inductivism of the nineteenth century made it seem as though science ultimately consisted of laws, that is, statements of empirical regularities. These laws were arrived at by generalization from the facts of observation; the facts themselves were regarded as an unproblematic starting-point for the process of induction. It was, of course, realized that the laws were open to revision as measuring apparatus was improved, as the ranges of the variables were extended, as new relevant factors were discovered. There was no logic, strictly speaking, which would lead from a finite set of observation-statements to a universally valid law of nature. Human decision had to enter in, therefore, by way of curve-fitting, extrapolation, and estimates of relevance. Such decision was not arbitrary; there were skills and techniques to be learnt which would aid the scientist in drawing the best generalizations from the data available. Was this not a matter of value-judgment rather than of a common logic of formal rules? We would say so today. But the point was not so evident then, or perhaps it would be more accurate to say that it seemed of little importance. The reason was that the laws were taken to be true to a degree of approximation that could be improved indefinitely. Thus the influence of these decisional aspects, where the individual skills of curve-fitting, extrapolation, and estimation of relevance entered into the process of formulating a law, would be progressively lessened, as the law came closer and closer to being an exact description of the real, that is, as the law gradually attained the status of fact. Thus, even though value-judgment did enter, in a number of ways, into the process of inductive generalization, its presence could in practice be ignored.
It was, after all, no more than an accessory activity, of little significance to the ultimate deliverances of science, namely, the exact statements of the laws of nature. The logical positivists still adhered to this nomothetic ideal. But from the beginning, they encountered difficulties as soon as they tried to spell out how an inductive method might work. The story is a familiar one. I am going to focus on only two episodes in it, one involving Karl Popper and the other Rudolf Carnap, in order to show how "value-uneasiness" was already in evidence among philosophers of science fifty years ago, though in neither of these episodes was it altogether satisfactorily characterized. As we all know, Popper rejected the nomothetic ideal of science that the logical positivists took over from the nineteenth century. For him, science is a set of conjectures rather than a set of laws. The testing of conjectures is thus the central element in scientific method and it can work only by falsification, when a basic statement conflicts with a conjectured explanation, leading to the rejection of the conjecture. The entire logical weight of this operation is carried by the "basic statements," that is, reports of observable events at particular locations in space and time. But now a difficulty arises. Could not the basic statements themselves be falsified? They could not consistently be held immune to the test-challenge that Popper saw as the criterion of demarcation between science and non-science. But if the basic statements themselves are open to challenge, how is the whole procedure of falsification of conjecture to work? It sounds as if a destructive regress cannot be avoided. 
Popper's answer is to say that:

Every test of a theory, whether resulting in its corroboration or falsification, must stop at some basic statement or other which we decide to accept. . . . Considered from a logical point of view, the situation is never such that it compels us to stop at this particular basic statement rather than at that. . . . For any basic statement can again in its turn be subjected to tests, using as a touchstone any of the basic statements which can be deduced from it, with the help of some theory, either the one under test or another. This process has no natural end. Thus if the test is to lead anywhere, nothing remains but to stop at some point or other and say we are satisfied for the time being. (1934/1959, p. 104)

Thus the designation of a statement as a "basic" one is not definitive, and hence falsification is not quite the decisive logical step Popper would have liked it to be. He continues: "Basic statements are accepted as the result of a decision or convention; and to that extent they are conventions" (p. 106). His choice of the term "convention" here is a surprising one since it carries the overtone of arbitrariness, of arbitrary choice and not just choice. But Popper is explicit in excluding this suggestion. He criticizes Otto Neurath, in fact, who (he says) made a "notable advance" by recognizing that protocol statements are not irrevocable, but then failed to specify a method by which they might be evaluated. Such a move, he goes on, leads nowhere if it is not followed by another step; we need

a set of rules to limit the arbitrariness of "deleting" (or else "accepting") a protocol sentence. Neurath fails to give any such rules and thus unwittingly throws empiricism overboard. For without such rules, empirical statements are no longer distinguished from any other sort of statements. (p. 97)

For Popper, the need for such a line of demarcation takes precedence over any other demand.
So if there are to be decisions regarding the basic statements, these must (he says) be "reached in accordance with a procedure governed by rules" (p. 106). If there are rules, however, to guide the decision, it sounds as though a definite answer might be obtained by the application of these rules in any given case. And so the properly decisional element would be minimal, and value-judgment (as we have defined it) would not enter in. But, in fact, we discover that the word, "rule," here (like the word, "convention") is not to be taken literally. When Popper specifies how these "rules" would operate, all he has to say is that we can arrive at

a procedure according to which we stop only at a kind of statement that is especially easy to test. For it means that we are stopping at statements about whose acceptance or rejection the various investigators are likely to reach agreement. (p. 104)

So ease in testing is to guide the investigator in deciding which statements to designate as basic. But this clearly operates here as a value rather than as a rule. There could be differences in judgment as to the extent to which the value was realized in a given case. Popper himself says of his "rules" that though they are "based on certain fundamental principles" which aim at the discovery of objective truth, "they sometimes leave room not only for subjective convictions but even for subjective bias" (p. 110). Thus what we have here is value-judgment, not the application of rule, strictly speaking. There is no rule as to where to stop the testing process. If some investigators prolong it further than others do, we would not be inclined to describe this as either "following" or "breaking" a rule. But we would call it the pursuing of a particular goal or value.
Popper's use of the term "convention" to describe the element of value-judgment in the designation of basic statements has proved misleading to later commentators, even though he explicitly rejected classical conventionalism, mainly because it was unable, in his view, to generate a proper criterion of demarcation between science and nonscience (McMullin 1978a, section 7). Imre Lakatos, for example, described Popper's view as a form of "revolutionary conventionalism" because of its explicit admission of the role of decisional elements in the scientific process. This led him to characterize his own MSRP as a way of "rationalizing classical conventionalism," rational because the criteria for distinguishing between "hard core" and "protective belt" can be partially specified, as can the criteria of theory-choice, but "conventional" because the process is not one of a mechanical application of rule, involving, as it does, individual judgment (1970, p. 134). Joseph Agassi likewise proposed that the most accurate label for Popper's theory of science is "modified conventionalism" (Agassi 1974, p. 693), to which suggestion Popper rather testily responded "I am not a conventionalist, whether modified or not" (Popper 1974, p. 1117). Much of the confusion prompted by Popper's use of the term "convention" might have been avoided if he had used the notion of value-judgment instead. It has precisely the flexibility that he needed in order to distance himself, as he wished to do, from both positivism and conventionalism: from positivism because of his insistence upon the decisional elements in the selection of basic statements, and from conventionalism because he believed that the values guiding judgment in this case are grounded in the "autonomous aim" of science, which is the pursuit of objective knowledge (Popper 1974, p. 1117).
Though the admission of value-judgment into science had moved Popper away from his rationalist moorings, it is significant that he never extended the range of value-judgment to theory-choice, which today would seem to us the much more likely locus. Even though he allows that "the choice of any theory is an act, a practical matter" (1934/1959, p. 109), his opposition to verification made him wary of allowing that theories might ever be "accepted." To the extent that they are, it is a provisional affair, he reminds us. But this sort of provisional acceptance is still, in his view, decisively influenced by the success of the theory in avoiding falsification (McMullin 1978a, p. 224). Rationalism is thus preserved at this level by the assumption of a more or less decisive method of choosing between theories at any given stage of development. This is the assumption that Carnap helped, somewhat unwittingly perhaps, to undermine. In 1950, he drew his famous distinction between "internal" questions, which can be answered within a given linguistic framework, and "external" questions, which bear on the acceptability of the framework itself (1950, p. 214). The point of the distinction was to clarify the debate about the existence of such abstract entities as classes or numbers, to which Carnap assimilated the question of theoretical entities like electrons. To ask about the existence of such entities within a given linguistic framework is perfectly legitimate, he said, and an answer can be given along logical or empirical lines. But to ask about the reality of such a system of entities taken as a whole is to pose a metaphysical question to which only a pseudo-answer can be given. The question can, however, be framed in a different way and then it becomes perfectly legitimate. We can ask whether the linguistic framework itself is an appropriate one for our purposes, whatever they may be.
This is the form in which external questions should be put in order to avoid idle philosopher's questions about the existence of numbers or electrons. Once the question is put in this way, he goes on, it is seen to be a practical, not a theoretical matter. The decision to accept a particular framework:

although itself not of a cognitive nature, will nevertheless usually be influenced by theoretical knowledge, just like any other deliberate decision concerning the acceptance of linguistic or other rules. The purposes for which the language is intended to be used, for instance, the purpose of communicating factual knowledge, will determine which factors are relevant for the decision. (1950, p. 208)

And he goes on to enumerate some of the factors that might influence a pragmatic decision of this sort: he mentions the "efficiency, fruitfulness and simplicity" of the language, relative to the purposes for which it is intended. These are values, of course, and so what he is talking about here (though he does not explicitly say so) is value-judgment. In this essay, Carnap is worrying mainly about the challenge of the nominalists to such entities as classes, properties, and numbers. He wants to answer this challenge, not by asserting the existence of these entities directly (this would violate his deepest empiricist convictions) but by appealing to the practical utility of everyday language, where terms corresponding to these entities play an indispensable role. And so he counters Occam's razor with a plea for "tolerance in permitting linguistic forms" (1950, p. 220). As long as the language is efficient as an instrument, he says, it would be foolish, indeed harmful, to impoverish it on abstract nominalist grounds. But Carnap conceded much more than he may have realized by this maneuver.
By equating the general semantical problem of abstract entities with the problem of theoretical entities in science, he implied that pragmatic "external" criteria are the appropriate ones for deciding on the acceptability of the linguistic frameworks of science, that is, of scientific theories. For the first time, he is implicitly admitting that the tight "internal" logicist criteria which he had labored so long to impose on the problems of confirmation and explanation are inappropriate when it is the very language of science itself, that is, the theory, that is in question. It is the theory that leads us to speak of electrons; to assess this usage, we have to evaluate as a single unit the theory in which this concept occurs and by means of which it is defined. If more than one "linguistic framework" for theory is being defended in some domain, the decision as to which is the better one has to be resolved, not by inductive logic, but by these so-called external criteria. The term, "external," was obviously an unhappy choice, as things would turn out. The questions Carnap dubs "external" would be external to science only if theory-decision is external to science. They were external to his logicist conception of how science ought to be carried on, of course. Only if science can be regarded as a "given" formal system can the enterprise of the logician get under way. The question of whether a particular theory should be "given," or whether another might not accomplish the theoretical ends of science better, cannot be properly (i.e., "internally") posed in the original positivist scheme of things. Once Carnap allowed it to be posed, however "externally," it would not be long until theory-evaluation would be clearly recognized as the most "internal" of all scientific issues, defining as it does scientific rationality and scientific progress.
After we have discarded his term "external," we still retain his insight that the structure of decision in regard to the acceptability of a theoretical language is not one of logical rule but of value-judgment.

IV. Theory-choice as Value-judgment

This gets us up only to 1950, which seems like very long ago in philosophy of science. Yet the shape of things to come is already clear to us, even though it was by no means clear then. The watershed between classic theory of science and our as yet unnamed postlogicist age has been variously defined since then. But for our purposes here, it can best be laid out in four propositions, three of them familiar, the other (P3) a little less so.

P1: The goal of science is theoretical knowledge.
P2: The theories of science are underdetermined by the empirical evidence.
P3: The assessment of theories involves value-judgment in an essential way.
P4: Observation in science is theory-dependent.

P1 tells us that the basic explanatory form in science is theory, not law, and thus that retroduction, not induction, is the main form of scientific validation. Theories by their very nature are hypothetical and tentative; they remain open to revision or even to rejection. P2 reminds us that there is no direct logical link, of the sort that classical theories of science expected, between evidence and theory. Since one is not compelled, as one would be in a logical or mathematical demonstration, one has to rely on oblique modes of assessment. And P3 tells us that these take the form of value-judgments. P4 serves to emphasize that a thesis in regard to theory-appraisal has broader scope. To the extent that scientific observation is theory-dependent, it is also indirectly value-impregnated.
This last point is not stressed any further in what follows, but it is well that it should be kept in mind lest it be thought that only one element in science, theory-choice, is affected by the shift described here, and that the traditional logicist/empiricist picture might be sustained at all other points. So much for the schema. It can be found, more or less in the form in which I have sketched it here, in the work of Kuhn, specifically in his 1973 essay "Objectivity, Value-judgment, and Theory Choice" (1977). He asks there what are the characteristic values of a good scientific theory and lists, as a start, five that would be pretty well agreed upon. I will rework his list just a little, and add some comments. Predictive accuracy is the desideratum that scientists would usually list first. But one has to be wary about the emphasis given it. As Lakatos and Feyerabend in particular have emphasized, scientists must often tolerate a certain degree of inaccuracy, especially in the early stages of theory-development. Nearly every theory is "born refuted"; there will inevitably be anomalies it cannot handle. There will be idealizations that have to be worked out in order to test the theory in complex concrete contexts. Were this demand to be enforced in a mechanical manner, the results for science could be disastrous. Nevertheless, a high degree of predictive accuracy is in the long run something a theory must have if it is to be acceptable. A second criterion is internal coherence. The theory should hang together properly; there should be no logical inconsistencies, no unexplained coincidences. One recalls the primary motivating factor for many astronomers in abandoning Ptolemy in favor of Copernicus. There were too many features of the Ptolemaic orbits, particularly the incorporation in each of a one-year cycle and the handling of retrograde motions, that seemed to leave coincidence unexplained and thus, though predictively accurate, to appear as ad hoc.
A third is external consistency: consistency with other theories and with the general background of expectation. When steady-state cosmology was proposed as an alternative to the Big Bang hypothesis in the late 1940s, the criticism it first had to face was that it flatly violated the principle of conservation of energy, which long ago attained the status almost of an a priori in mechanics. Even if Fred Hoyle had managed to make his model satisfy the other demands laid on it, such as the demand that it yield testable predictions in advance and not just after the fact, it would always have had a negative rating on the score of external consistency. A fourth feature that scientists value is unifying power, the ability to bring together hitherto disparate areas of inquiry. The standard illustration is James Maxwell's electromagnetic theory. A more limited, but quite striking, example would be the plate-tectonic model in geology. Over the past twenty years, it has successfully explained virtually all major features of the earth's surface. What has impressed geologists sufficiently to persuade most (not all) of them to overcome the scruples that derive, for example, from the lack of a mechanism to account for the plate-movements themselves, is not just its predictive accuracy but the way in which it has brought together previously unrelated domains of geology under a single explanatory roof. A further, and quite crucial, criterion is fertility. This is rather a complex affair (McMullin 1976). The theory proves able to make novel predictions that were not part of the set of original explananda. More important, the theory proves to have the imaginative resources, functioning here rather as a metaphor might in literature, to enable anomalies to be overcome and new and powerful extensions to be made.
Here it is the long-term proven ability of the theory or research program to generate fruitful additions and modifications that has to be taken into account. One other, and more problematic, candidate as a theory-criterion is simplicity. It was a favorite among the logical positivists because it could be construed pragmatically as a matter of convenience or of aesthetic taste, and seemed like an optional extra which the scientist could decide to set aside, without affecting the properly epistemic character of the theory under evaluation (Hempel 1966, pp. 40-45). Efforts to express a criterion of "simplicity" in purely formal terms continue to be made, but have not been especially successful. One could easily find other desiderata. And it would be important to supply some detailed case-histories in order to illustrate the operation of the ones I have just listed. But my concern here is rather to underline that these criteria clearly operate as values do, so that theory choice is basically a matter of value-judgment. Kuhn puts it this way:

The criteria of [theory] choice function not as rules, which determine choice, but as values which influence it. Two men deeply committed to the same values may nevertheless, in particular situations, make different choices, as in fact they do. (1977, p. 331)

They correspond to the two types of value-judgment discussed above in section 1. First, different scientists may evaluate the fertility, say, of a particular theory differently. Since there is no algorithm for an assessment of this sort, it will depend on the individual scientist's training and experience. Though there is likely to be a very large measure of agreement, nonetheless the skills of evaluation here are in part personal ones, relating to the community consensus in complex ways.
Second, scientists may not attach the same relative weights to different characteristic values of theory, that is, they may not value the characteristics in the same way, when, for example, consistency is to be weighed over against predictive accuracy. It is above all because theory has more than one criterion to satisfy, and because the "valuings" given these criteria by different scientists may greatly differ, that disagreement in regard to the merits of rival theories can on occasion be so intractable. It would be easy to illustrate this by calling on the recent history of science. A single example will have to suffice. The notorious disagreement between Niels Bohr and Albert Einstein in regard to the acceptability of the quantum theory of matter did not bear on matters of predictive accuracy. Einstein regarded the new theory as lacking both in coherence and in consistency with the rest of physics. He also thought it wanting in simplicity, the value that he tended to put first. Bohr admitted the lack of consistency with classical physics, but played down its importance. The predictive successes of the new theory obviously counted much more heavily with him than they did with Einstein. The differences between their assessments were not solely due to differences in the values they employed in theory-appraisal. Disagreement in substantive metaphysical belief about the nature of the world also played a part. But there can be no doubt from the abundant testimony of the two physicists themselves that they had very different views as to what constituted a "good" theory. The fact that theory-appraisal is a sophisticated form of value-judgment explains one of the most obvious features of science, a feature that could only appear as a mystery in the positivist scheme of things. Controversy, far from being rare and wrong-headed, is a persistent and pervasive presence in science at all levels.
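The point that shared criteria, differently weighted, can yield divergent yet equally defensible theory choices can be made concrete with a toy numerical sketch. Every score and weight below is invented purely for illustration (the essay assigns no numbers), and the weighted sum is a deliberate oversimplification of what the text insists is a non-algorithmic judgment; it serves only to show how a change of weighting alone can reverse a ranking.

```python
# Toy sketch only: all scores and weights are invented for illustration.
# A weighted sum crudely stands in for the non-algorithmic appraisal the
# essay describes; the point is just that identical criteria, differently
# weighted, can reverse a theory ranking.

CRITERIA = ["predictive_accuracy", "coherence", "external_consistency",
            "unifying_power", "fertility"]

# Hypothetical 0-1 scores for two rival theories on each criterion.
theories = {
    "A": {"predictive_accuracy": 0.9, "coherence": 0.5,
          "external_consistency": 0.4, "unifying_power": 0.8, "fertility": 0.7},
    "B": {"predictive_accuracy": 0.7, "coherence": 0.9,
          "external_consistency": 0.9, "unifying_power": 0.6, "fertility": 0.5},
}

def appraise(scores, weights):
    """Overall judgment modeled (crudely) as a weighted sum of criterion scores."""
    return sum(weights[c] * scores[c] for c in CRITERIA)

# Two scientists who share the criteria but weight them differently
# (cf. Bohr stressing predictive success, Einstein coherence and consistency).
weightings = {
    "values_prediction":  {"predictive_accuracy": 0.5, "coherence": 0.1,
                           "external_consistency": 0.1, "unifying_power": 0.2,
                           "fertility": 0.1},
    "values_consistency": {"predictive_accuracy": 0.1, "coherence": 0.4,
                           "external_consistency": 0.4, "unifying_power": 0.05,
                           "fertility": 0.05},
}

choices = {name: max(theories, key=lambda t: appraise(theories[t], w))
           for name, w in weightings.items()}
print(choices)  # {'values_prediction': 'A', 'values_consistency': 'B'}
```

Each appraiser ranks the same two theories by the same five criteria, yet one prefers A and the other B; on McMullin's account the real disagreement is of this structural kind, even though no actual scientist computes such sums.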
Yet if the classical logicist view of science had been right, controversy would be easily resolvable. One would simply employ an algorithm, a "method," to decide which of the contending theories is best confirmed by the evidence available. At any given moment, there would then be a "best" theory, to which scientists properly versed in their craft ought to adhere. But, of course, not only is this not the case, but it would be a disaster if it were to be the case (McMullin 1983). The clash of theories, Popper has convinced us, is needed in order that weak spots may be probed and potentialities fully developed. Popper's own theory of science made it difficult to see how such a pluralism of theories could be maintained. But once theory-appraisal is recognized to be a complex form of value-judgment, the persistence of competing theories immediately follows as a consequence. Thomas Kuhn characteristically sees the importance of value-difference not so much in the clash of theories—such controversy is presumably not typical of his "normal science"—as in the period of incipient revolution when a new paradigm is struggling to be born:

Before the group accepts it, a new theory has been tested over time by the research of a number of men, some working within it, others within its traditional rival. Such a mode of development, however, requires a decision process which permits rational men to disagree, and such disagreement would be barred by the shared algorithm which philosophers generally have sought. If it were at hand, all conforming scientists would make the same decision at the same time. ... I doubt that science would survive the change. What from one point of view may seem the looseness and imperfection of choice criteria conceived as rules may, when the same criteria be seen as values, appear an indispensable means of spreading the risk which the introduction or support of novelty always entails. (1961, p.
220)

It almost seems as though the value-ladenness of theory-decision is specially designed to ensure the continuance of controversy and to protect endangered but potentially important new theoretical departures. A Hegelian might see in this, perhaps, the cunning of Reason in bringing about a desirable result in a humanly unpremeditated way. But, of course, these are just the fortunate consequences of the nature of theory-decision itself. It is not as though theories could be appraised in a different, more rule-guided way. One is forced to recognize that the value-ladenness described above derives from the problematic and epistemologically complex way in which theory relates to the world. It is only through theory that the world is scientifically understood. There is no alternative mode of access which would allow the degree of "fit" between theory and world to be independently assessed, and the values appropriate to a good theory to be definitively established. And so there is no way to exchange the frustrating demands of value-judgment for the satisfying simplicities of logical rule.

V. Epistemic Values

Even though we cannot definitively establish the values appropriate to the assessment of theory, we saw just a moment ago that we can provide a tentative list of criteria that have gradually been shaped over the experience of many centuries, the values that are implicit in contemporary scientific practice. Such characteristic values I will call epistemic, because they are presumed to promote the truthlike character of science, its character as the most secure knowledge available to us of the world we seek to understand. An epistemic value is one we have reason to believe will, if pursued, help toward the attainment of such knowledge. I have concentrated here on the values that one expects a good theory to embody. But there are, of course, many other epistemic values, like that of reproducibility in an experiment or accuracy in a measurement.
When I say that science is value-laden, I would not want it to be thought that these values derive from theory-appraisal only. Value-judgment permeates the work of science as a whole, from the decision to allow a particular experimental result to count as "basic" or "accepted" (the decisional element that Popper stressed), to the decision not to seek an alternative to a theory which so far has proved satisfactory. Such values as these may be pragmatic rather than epistemic; they may derive from the finiteness of the time or resources available to the experimenter, for example. And sometimes the borderline between the epistemic and the pragmatic may be hard to draw, since (as Pierre Duhem and Karl Popper among others have made clear) it is essential to the process of science that pragmatic decisions be made, on the temporary suspension of further testing, for example. Of course, it is not pragmatic values that pose the main challenge to the epistemic integrity of the appraisal process. If values are needed in order to close the gap between underdetermined theory and the evidence brought in its support, presumably all sorts of values can slip in: political, moral, social, and religious. The list is as long as the list of possible human goals. I shall lump these values together under the single blanket term, "nonepistemic." The decision as to whether a value is epistemic or nonepistemic in a particular context can sometimes be a difficult one. But the grounds on which it should be made are easy to specify in the abstract. When no sufficient case can be made for saying that the imposition of a particular value on the process of theory choice is likely to improve the epistemic status of the theory, that is, the conformity between theory and world, this value is held to be nonepistemic in the context in question. This decision is itself, of course, a value-judgment, and there is an obvious danger of a vicious regress at this point.
I hope it can be headed off, and will return to this task in a moment. But first, one sort of factor that plays a role in theory-assessment can be hard to situate. Externalist historians of science have been accustomed to grouping under the elastic term "value" not only social and personal goals but also various elements of world-view, metaphysical, theological, and the like. Thus, for example, when Newton's theology or Bohr's metaphysics affected the choice each made of "best" theory in mechanics, such historians have commonly described this as an influence of "values" upon science (see, for example, Graham 1981). Since I have been arguing so strongly here for the value-ladenness of science, it might seem that I should welcome this practice. But it is rooted, I think, in a sort of residual positivism that is often quite alien to the deepest convictions of the historians themselves who indulge in it (McMullin 1982). They would be the first to object to the label "externalist," but here they are assuming that a philosophical world-view is of its nature so "external" to science that it must be flagged as a "value," and consequently dealt with quite differently from the point of view of explanation. Let me try to clarify the source of my opposition to this practice. A philosophical system can in certain contexts serve as a value, as a touchstone of decision. So for that matter can a scientific theory. But this does not convert it into a "value" in the sense in which social historians sometimes interpret this term, namely as something for which socio-psychological explanation is all-sufficient. The effect of calling metaphysics a "value" can be to shift it from the category of belief, to be explained in terms of reasons adduced, in the way that science is ordinarily taken, to the category of goal, to be explained in terms of character, upbringing, community pressures, and the rest.
What I am arguing for is the potentially epistemic status that philosophical or theological world-view can have in science. From the standpoint of today, it would be inadmissible to use theological argumentation in mechanics. Yet Isaac Newton in effect did so on occasion. In describing this, it is important to note that theology functioned for him as an epistemic factor, as a set of reasons that Newton thought were truth-bearing (McMullin 1978b, p. 55). It did not primarily operate as a value, if by "value" one were to mean a socio-psychological causal factor, superimposed upon scientific argument from the outside, to be understood basically as a reflection of underlying social or psychological structures. Now, of course, the historian may find that someone's use of theological or philosophical considerations did, on a given occasion, reflect such structures. But this has to be historically proven. The question must not be begged by using the term "value" as externalist historians have too often done. Incidentally, the pervasive presence of nonstandard epistemic factors in the history of science is the main reason, to my mind, why the one-time popular internal-external dichotomy fails. Sociologists of science in the "strong program" tradition are more consistent in this respect. They do take metaphysics and theology to be a reflection of socio-psychological structure, but, of course, they regard science itself in the same epistemically unsympathetic light. My point here has simply been that it is objectionable to single out nonstandard forms of argument in science by an epistemically pejorative use of the term "value" (McMullin 1983).

VI. The Place of Fact in a World of Values

That being said, let me return to the question that must by now be uppermost in the reader's mind. What is left of the vaunted objectivity of science, the element of the factual, in all this welter of value-judgment? Once the camel's nose is inside, the tent rapidly becomes uncomfortable.
Is there any reasoned way to stop short of a relativism that would see in science no more than the product of a contingent social consensus, bearing testimony to the historical particularity of culture and personality much more than to an objective truth about the world? I think there is, but I can at this stage only provide an outline of the argument needed. It requires two separate steps. Step one is to examine the epistemic values employed in theory-appraisal, the values that lie at the heart of the claim that theory assessment in science is essentially value-laden, and to ask how they in turn are to be validated, and how, in particular, circularity is to be avoided in doing so. First, let me recall how the skills of epistemic value-judgment are learnt. Apprentice scientists learn them not from a method book but from watching others exercise them. They learn what to expect in a "good" theory. They note what kinds of considerations carry weight, and why they do so. They get a feel for the relative weight given the different kinds of considerations, and may quickly come to realize that there are divergences here in practice. Their own value-judgments will gradually become more assured, and will be tested against the practice of their colleagues as well as against historical precedent (Polanyi 1958; Kuhn 1962).6 What is the epistemic worth of the consensus from which these skills derive? Kuhn is worried about the validity of invoking history as warrant in this case:

Though the experience of scientists provides no philosophical justification for the values they deploy (such justification would solve the problem of induction), those values are in part learned from that experience and they evolve with it. (1977, p. 335)

This is to take the Hume-Popper challenge to induction far too seriously (unless, of course, "justification" were to be taken to mean definitive proof).
The characteristic values guiding theory-choice are firmly rooted in the complex learning experience which is the history of science; this is their primary justification, and it is an adequate one. We have gradually learnt from this experience that human beings have the ability to create those constructs we call "theories" which can provide a high degree of accuracy in predicting what will happen, as well as accounting for what has happened, in the world around us. It has been discovered, further, that these theories can embody other values too, such values as coherence and fertility, and that an insistence on these other values is likely to enhance the chances over the long run of the attainment of the first goal, that of empirical accuracy. It was not always clear that these basic values could be pursued simultaneously.7 In medieval astronomy, it seemed as though one had to choose between predictive accuracy and explanatory coherence, the Ptolemaic epicycles exemplifying one and Aristotelian cosmology the other. Since the two systems were clearly incompatible, philosophers like Aquinas reluctantly concluded that there were two sorts of astronomical science, one (the "mathematical") which simply "saved the appearances," and the other (the "physical") whose goal it was to explain the truth of things (Duhem 1908/1969, chapter 3). Galileo's greatest accomplishment, perhaps, was to demonstrate the possibility of a single science in which the values of both the physical and the mathematico-predictive traditions could be simultaneously realized (Machamer 1978). There was nothing necessary about this historical outcome. The world might well have turned out to be one in which our mental constructions would not have been able to combine these two ideals. What became clear in the course of the seventeenth century was that they can be very successfully combined, and that other plausible values can be worked in as well.
When I say "plausible" here, I am suggesting that there is a second, convergent mode of validation for these values of theory-appraisal (for "valuings" in the sense defined in section 1). We can endeavor to account for their desirability in terms of a higher-order epistemological account of scientific knowing. This is to carry retroduction to the next level upwards. It is asking the philosopher to provide a theory in terms of which such values as fertility would be shown to be appropriate demands to lay on scientific theory. The philosopher's ability to provide just such a theory (and it is not difficult to do this) in turn then testifies to the reliability of taking these criteria to be proper values for theory-appraisal in the first place. This is only the outline of an argument, and much more remains to be filled in. But perhaps I have said enough to indicate how one could go about showing that the characteristic values scientists have come to expect a theory to embody are a testimony to the objectivity of the theory, as well as to the involvement of the subjectivity of the scientist in the effort to attain that objectivity. There is a further argument I would use in support of this conclusion, but it is based on a premise that is not shared by all. That is the thesis of scientific realism. I think that there are good reasons to accept a cautious and carefully restricted form of scientific realism, prior to posing the further question of the objective basis of the values we use in theory-appraisal (McMullin 1983). The version of realism I have in mind would suggest that in many parts of science, like geology and cell-biology, we have good reasons to believe that the models postulated by our current theories give us a reliable, though still incomplete, insight into the structures of the physical world.
Thus, for example, we would suppose that the success of certain sorts of theoretical model would give us strong reason to believe that the core of the earth is composed of iron, or that stars are glowing masses of gas. We have no direct testimony regarding either of these beliefs, of course. To claim that the world does resemble our theoretical models in these cases is to claim that the method of retroduction on which they are based, and which rests finally on the values of theory-appraisal I have already discussed, is in fact (at least in certain sorts of cases) reliable in what it claims.8 Obviously, the realist thesis will not hold, or will hold only in attenuated form, where theory is still extremely underdetermined (as in current elementary-particle theory) or where the ontological implications of the theory are themselves by no means clear (as in classical mechanics). And so, to conclude step one, there is reason to trust in the values used commonly in current science for theory-appraisal as something much more than the contingent consensus of a peculiar social subgroup. But a further step is needed, because these values do not of themselves determine theory-choice, a point I have stressed from the beginning. And so other values can and do enter in, the sorts of value that sociologists of science have so successfully been drawing to our attention of late, as they scrutinize particular episodes in the history of science. I am thinking of such values as the personal ambition of the scientist, the welfare of the social class to which he or she belongs, and so on. Has the camel not, then, poked its wet nose in beside us once again? It has, of course, but perhaps we can find a way to push it out—or almost out—one final time. The process of science is one long series of tests and tentative imaginative extensions.
When a particular theory seems to have triumphed, when Louis Pasteur has overcome Felix Pouchet, to cite one nineteenth-century illustration that has recently come in for a lot of attention from social historians of science (Farley and Geison 1976), it is not as though the view that has prevailed is allowed to reign in peace. Other scientists attempt to duplicate experimental claims; theoreticians try to extend the theories involved in new and untried ways; various tests are devised for the more vulnerable theoretical moves involved, and so on. This is not just part of the mythology of science. It really does happen, and is easy to document. To the extent that nonepistemic values and other nonepistemic factors have been instrumental in the original theory-decision (and sociologists of science have rendered a great service by revealing how much more pervasive these factors are than one might have expected), they are gradually sifted by the continued application of the sort of value-judgment we have been describing here. The nonepistemic, by its very definition, will not in the long run survive this process. The process is designed to limit the effects not only of fraud and carelessness, but also of ideology, understood in its pejorative sense as distortive intrusion into the slow process of shaping our thought to the world.

Notes

1. The terminology of "evaluation" and "valuing" is used by Kovesi (1967) in a somewhat different way. He supposes value-judgment to apply to things via their descriptions. Thus, we "evaluate" particulars insofar as they "fall under a certain description" (p. 151), whereas we "value" things "insofar as they are such and such."
We would "evaluate" a particular lawyer as a lawyer (being given a description of the qualities that make up a lawyer), whereas we would "value" lawyers for what they are, as indispensable to the conduct of complex communities or however we might wish to describe their "value" in some broader context. (His aim is to contrast "evaluation" with moral judgment.) My focus is on specific characteristic values, on the Y-ness of X's, where his is on entity-descriptions, on X-ness itself as a subject for evaluation or valuing. The advantage of the former is that it makes the basis of the value-judgment specific. It focuses evaluation on the characteristic which can be present to a greater or lesser degree. And it provides a context for valuing which Kovesi's notion appears to lack, thus risking confusion with emotive value. Finally, Kovesi's emphasis on description could mislead, since the characteristic value need not be described, strictly speaking. Indeed, as we shall see below, the frequent inability to give explicit descriptions of characteristic values is an essential feature of evaluation as it occurs in science. The emphasis on X-ness (which does need describing) rather than on the Y-ness of X's (where the Y may be only summarily indicated) is the root of the difference. I am indebted to Carl Hempel and David Solomon for discussions of the topics of this section.
2. Nagel used the terms "characterize" and "appraise" instead of our "evaluate" and "value." The example he gives of "characterizing" is the evaluation of the degree of anemia a particular animal suffers from against a standard of "normality" in the red blood-corpuscle count. (See "The Value-Oriented Bias of Social Inquiry," Nagel 1961, pp. 485-502.)
3. Some philosophers assimilate epistemic values to moral values, so that for them the values implicit in theory-appraisal are broadly moral ones.
Putnam, for example, takes adherence to these values on the part of scientists to be "part of our idea of human cognitive flourishing, and hence part of our idea of total human flourishing, of Eudaimonia" (1981, p. 134). The analysis of characteristic value given in section 1, and even more the discussion of the warrant for epistemic value in section 6 below, would lead me to question this assimilation of the epistemic to the moral under the very vague notion of "flourishing." To pursue this further would, however, require further analysis of the nature of moral knowledge.
4. A point already made by Richard Jeffrey (1956) in a response to the Rudner article.
5. In fairness, it should be added that Rudner drew attention in the same paper to the value-implications of the new directions that Carnap and Quine were just beginning to chart. But these consequences were obscured by his emphasis on the ethical aspects of theory-acceptance. He evidently supposed that all of these considerations would converge, but in fact they did not, and could not.
6. Polanyi and Kuhn relate such skills as that of theory-assessment (and pattern-recognition, which is ultimately theory-dependent) in rather different ways to the learning experience of the apprentice scientist. I would lean more to Kuhn's analysis in this case, but in the context of my argument here, it is sufficient to note the affinity between these two authors rather than to press their differences.
7. Kuhn attaches a higher degree of fixity to the epistemic values of theory-choice than I would. He takes the five he describes to be "permanent attributes of science," provided the specification be left vague (1977, p. 335).
8. This is where I would diverge from Putnam (1981), who otherwise defends a view of the role of value-judgment in science similar to the one outlined here.
In the spirit of Kant, he wants to find a middle way between objectivism and subjectivism, between what he regards as the extremes of "metaphysical realism" and "cultural relativism." The former he defines as being based on "the notion of a transcendental match between our representation and the world," which he briskly characterizes as "nonsense" (1981, p. 134). Blocked from taking the epistemic values to be the means of gradually achieving such a correspondence, he is thus forced to make them in some sense ultimates. "Truth is not the bottom line; truth itself gets its life from our criteria of rational acceptability" (p. 130). What he wants to stress, he says, is "the dependence of the empirical world on our criteria of rational acceptability" (p. 134). Instead of merely holding that "our knowledge of the world presupposes values" (the thesis that I am arguing for in this essay), he is led then to "the more radical claim that what counts as the real world depends upon our values" (p. 137). But such a position leaves him (in my view) with no vantage-point from which it would be possible to correct, or gradually adjust, the epistemic values themselves. They constitute for him "part of our conception of human flourishing" (p. xi). But there can be many such conceptions; against Aristotle (whom he takes to defend a single ideal of human flourishing), he argues for a "diversity" of ways in which such flourishing might properly be construed (p. 148). But how then can he also reject some such ways as "wrong, as infantile, as sick, as one-sided" (p. 198)? What grounds are available in his system for such a rejection? He says that "we revise our very criteria of rational acceptability in the light of our theoretical picture of the empirical world" (p. 134), but gives no hint as to how this is to be done in practice. He cites "coherence" as a sort of supercriterion which appears to be necessary to any ideal of human flourishing (p. 132).
But what if someone were to reject such a criterion? Putnam says such a person is "sick." But are there arguments he could use to warrant this diagnosis? I do not think that in the end this "middle way" works. The tilt to idealism is obvious. But it would take a more elaborate analysis to show this. (This footnote and footnote 3 were added in proof. Had I seen Putnam's book before I wrote this text, I would have attempted a fuller discussion of it.)

References

Agassi, Joseph. (1974). "Modified Conventionalism Is More Comprehensive than Modified Essentialism." In Schilpp (1974), pp. 693-696.
Carnap, Rudolf. (1932). "Überwindung der Metaphysik durch logische Analyse der Sprache." Erkenntnis 2: 219-241. (Reprinted as "The Elimination of Metaphysics Through Logical Analysis of Language," trans. Arthur Pap, in Logical Positivism, edited by A. J. Ayer, pp. 60-81. Glencoe, Ill.: Free Press, 1959.)
———. (1950). "Empiricism, Semantics, and Ontology." Revue internationale de philosophie 4: 20-40. (As reprinted in Meaning and Necessity, 2nd ed. Chicago: University of Chicago Press, 1956, pp. 205-221.)
Duhem, Pierre. (1908). Σώζειν τὰ φαινόμενα