4 Forecasting Expected Policy Outcomes

Forecasting in Policy Analysis
Types of Futures
Approaches to Forecasting
Extrapolative Forecasting
Theoretical Forecasting
Judgmental Forecasting
Chapter Summary
Learning Objectives
Key Terms and Concepts
Review Questions
Demonstration Exercise
References
Case 4. Political Consequences of Forecasting: Environmental Justice and Urban Mass Rapid Transit

The capacity to forecast expected policy outcomes is critical to the success of policy analysis and the improvement of policy making. Through forecasting, we can obtain a prospective vision, or foresight, thereby enlarging capacities for understanding, control, and societal guidance. As we shall see, however, forecasts of all kinds—whether based on expert judgment, on the simple extrapolation of historical trends, or on technically sophisticated econometric models—are prone to errors based on faulty or implausible assumptions, on the error-amplifying effects of institutional incentive systems, and on the accelerating complexity of policy issue areas ranging from health, welfare, and education to science, technology, and the environment (see Figure 4.1).

[Figure 4.1 Contexts affecting forecast accuracy. Panel (a), "Institutional context affects error": nuclear energy forecasts for 1975, plotted as percentage above actual energy demand against the year the forecast was made, by institution (business, government, nonprofit). Panel (b), "Historical context affects error": forecasts of GNP growth rates, plotted as percentage above or below the actual growth rate against the year the forecast was made, for years estimated 1965, 1970, and 1975. Note: Percentages are normalized errors. Source: Adapted from William Ascher, Forecasting: An Appraisal for Policy Makers and Planners (Baltimore, MD: Johns Hopkins University Press, 1978).]
We begin this chapter with an overview of the forms, functions, and performance of forecasting in policy analysis, stressing a range of criteria for assessing the strengths and limitations of different forecasting methods. We then compare and contrast three major approaches to creating information about expected policy outcomes: extrapolative forecasting, theoretical forecasting, and judgmental forecasting. We conclude with a presentation of methods and techniques of forecasting employed in conjunction with these three approaches.

FORECASTING IN POLICY ANALYSIS

Forecasting is a procedure for producing factual information about future states of society on the basis of prior information about policy problems. Forecasts take three principal forms: projections, predictions, and conjectures.

1. A projection is a forecast that is based on the extrapolation of current and historical trends into the future. Projections put forth designative claims based on arguments from method and parallel case, where assumptions about the validity of particular methods (e.g., time-series analysis) or similarities between cases (e.g., past and future policies) are used to establish the cogency of claims. Projections may be supplemented by arguments from authority (e.g., the opinions of experts) and cause (e.g., economic or political theory).

2. A prediction is a forecast based on explicit theoretical assumptions. These assumptions may take the form of theoretical laws (e.g., the law of diminishing utility of money), theoretical propositions (e.g., the proposition that civil disorders are caused by the gap between expectations and capabilities), or analogies (e.g., the analogy between the growth of government and the growth of biological organisms).
The essential feature of a prediction is that it specifies the generative powers ("causes") and consequences ("effects"), or the parallel processes or relations ("analogs"), believed to underlie a relationship. Predictions may be supplemented by arguments from authority (e.g., informed judgment) and method (e.g., econometric modeling).

3. A conjecture is a forecast based on informed or expert judgments about future states of society. These judgments may take the form of intuitive arguments, where assumptions about the insight, creative intellectual power, or tacit knowledge of stakeholders (e.g., "policy insiders") are used to support designative claims about the future. Judgments may also be expressed in the form of motivational arguments, where present or future goals, values, and intentions are used to establish the plausibility of claims, as when conjectures about future societal values (e.g., leisure) are used to claim that the average work week will be reduced to thirty hours in the next twenty years. Conjectures may be supplemented by arguments from authority, method, and cause.

Aims of Forecasting

Policy forecasts, whether based on extrapolation, theory, or informed judgment, have several important aims. First, and most important, forecasts provide information about future changes in policies and their consequences. The aims of forecasting are similar to those of much scientific and social scientific research, insofar as the latter seek both to understand and control the human and material environment. Nevertheless, efforts to forecast future societal states are "especially related to control—that is, to the attempt to plan and to set policy so that the best possible course of action might be chosen among the possibilities which the future offers."1 Forecasting permits greater control through understanding past policies and their consequences, an aim that implies that the future is determined by the past.
Yet forecasts also enable us to shape the future in an active manner, irrespective of what has occurred in the past. In this respect, the future-oriented policy analyst must ask what values can and should guide future action. But this leads to a second and equally difficult question: How can the analyst evaluate the future desirability of a given state of affairs? Even if the values underlying current actions could be neatly specified, would these values still be operative in the future? As Ikle has noted, "'guiding predictions' are incomplete unless they evaluate the desirability of the predicted aspects of alternative futures. If we assume that this desirability is to be determined by our future rather than our present preferences . . . then we have to predict our values before we can meaningfully predict our future."2 This concern with future values may complement traditional social science disciplines that emphasize predictions based on past and present values. While past and present values may determine the future, this will hold true only if intellectual reflection by policy stakeholders does not lead them to change their values and behavior, or if unpredictable factors do not intervene to create profound social changes, including irreversible processes of chaos and emergent order.3

Limitations of Forecasting

In the years since 1985, there have been a number of unexpected, surprising, and counterintuitive political, social, and economic changes—for example, the formal abandonment of socialism in the Soviet Union, the dissolution of communist parties in Eastern Europe, the fall of the Berlin Wall, and the growing uncertainty surrounding policies to mitigate global warming. These changes at once call attention to the importance and the difficulties of forecasting policy futures under conditions of

1 Irene Taviss, "Futurology and the Problem of Values," International Social Science Journal 21, no. 4 (1969): 574.
2 Ibid.; and Fred Charles Ikle, "Can Social Predictions Be Evaluated?" Daedalus 96 (Summer 1967): 747.
3 See Alasdair MacIntyre, "Ideology, Social Science, and Revolution," Comparative Politics 5, no. 3 (1973); and Ilya Prigogine and Isabelle Stengers, Order Out of Chaos (New York: Bantam Books, 1984).

increasingly complex, rapid, and even chaotic changes. The growing difficulty of forecasting, however, should be seen in light of limitations and strengths of various types of forecasts over the past three decades and more.4

1. Forecast accuracy. The accuracy of relatively simple forecasts based on the extrapolation of trends in a single variable, as well as relatively complex forecasts based on models incorporating hundreds of variables, has been limited. In the five-year period ending in 1983, for example, the Office of Management and Budget underestimated the federal budget deficit by an annual average of $58 billion. A similar record of performance tends to characterize forecasts of the largest econometric forecasting firms, which include Chase Econometrics, Wharton Econometric Forecasting Associates, and Data Resources, Inc. For example, average forecasting error as a proportion of actual changes in GNP was approximately 50 percent in the period 1971-83.5

2. Comparative yield. The accuracy of predictions based on complex theoretical models of the economy and of the energy resource system has been no greater than the accuracy of projections and conjectures made, respectively, on the basis of simple extrapolative models and informed (expert) judgment. If one of the important advantages of such models is sensitivity to surprising or counterintuitive future events, simple models have a comparative advantage over their technically complex counterparts, because developers and users of complex models tend to employ them mechanistically. Ascher's question is to the point: "If the implications of one's assumptions and hypotheses are obvious when they are 'thought out' without the aid of a model, why use one? If the implications are surprising, the strength of the complex model is in drawing from a set of assumptions these implications. . . ."6 Yet precisely these assumptions and implications—for example, the implication that a predicted large increase in gasoline consumption may lead to changes in gasoline taxes, which are treated as a constant in forecasting models—are overlooked or discarded because they are inconsistent with the assumptions of models.

3. Context. The assumptions of models and their results are sensitive to three kinds of contexts: institutional, temporal, and historical. Variations in institutional incentive systems are a key aspect of differences in institutional contexts, as represented by government agencies, businesses, and nonprofit research institutes. Forecasting accuracy tends to be greater in nonprofit research institutes than in businesses or government agencies (Figure 4.1[a]). In turn, the temporal context of a forecast, as represented by the length of time over which a forecast is made (e.g., one quarter or year versus five years ahead), affects forecast accuracy. The longer the time frame, the less accurate the forecast. Finally, the historical context of forecasts affects accuracy.

4 A seminal work on forecasting is William Ascher, Forecasting: An Appraisal for Policy Makers and Planners (Baltimore and London: Johns Hopkins University Press, 1978). Also see Ascher, "The Forecasting Potential of Complex Models," Policy Sciences 13 (1981): 247-67; and Robert McNown, "On the Use of Econometric Models: A Guide for Policy Makers," Policy Sciences 19 (1986): 360-80.
5 McNown, "On the Use of Econometric Models," pp. 362-67.
6 Ascher, "Forecasting Potential of Complex Models," p. 255.
The relatively greater complexity of recent historical periods diminishes forecast accuracy, a pattern that is evident in the growth of forecasting errors since 1965 (Figure 4.1[b]). Forecasting accuracy and comparative yield are thus closely related to the institutional, temporal, and historical contexts in which forecasts are made. The accuracy and comparative yield of forecasts are also affected, as we might expect, by the assumptions that people bring to the process. As Ascher notes, one of the difficulties of assessing the performance of forecasts lies in identifying the assumptions of forecast developers and users. In many forecasts, there is a serious problem of "assumption drag,"7 that is, a tendency among developers and users of forecasting models to cling to questionable or plainly implausible assumptions built into a model—for example, the assumption that the pricing policies as well as the governments of petroleum-producing countries will remain stable. An important implication may be derived from the phenomenon of assumption drag—namely, that the task of structuring policy problems is central to the performance of forecasters. Indeed, the correction of errors by dissolving or unsolving problems is quite as important to forecasting as it is to other phases of policy analysis.

TYPES OF FUTURES

Policy forecasts, whether made in the form of projections, predictions, or conjectures, are used to estimate three types of future societal states: potential futures, plausible futures, and normative futures.8 Potential futures (sometimes called alternative futures) are future societal states that may occur, as distinguished from societal states that eventually do occur. A future state is never certain until it actually occurs, and there are many potential futures. Plausible futures are future states that, on the basis of assumptions about causation in nature and society, are believed to be likely if policy makers do not intervene to redirect the course of events.
By contrast, normative futures are potential and plausible futures that are consistent with an analyst's conception of future needs, values, and opportunities. The specification of normative futures narrows the range of potential and plausible futures, thus linking forecasts to specific goals and objectives (Figure 4.2).

7 See William Ascher, Forecasting: An Appraisal for Policy Makers and Planners (Baltimore, MD: Johns Hopkins University Press, 1978).
8 See David C. Miller, "Methods for Estimating Societal Futures," in Methodology of Social Impact Assessment, ed. Kurt Finsterbusch and C. P. Wolf (Stroudsburg, PA: Dowden, Hutchinson & Ross, 1977), pp. 202-10.

[Figure 4.2 Three types of societal futures: potential, plausible, and normative. The diagram spans past, present, and future. On the past side: plausible pasts (what happened), potential pasts (what might have happened), and normative pasts (what should have happened). On the future side: plausible futures (what will be), potential futures (what might be), and normative futures (what should be).]

Goals and Objectives of Normative Futures

An important aspect of normative futures is the specification of goals and objectives. But today's values are likely to change in the future, thus making it difficult to define normative futures on the basis of existing preferences. The analyst must, therefore, be concerned with future changes in the ends as well as means of policy. In thinking about the ends of policy, it is useful to contrast goals and objectives. Although goals and objectives are both future oriented, goals express broad purposes while objectives set forth specific aims. Goals are rarely expressed in the form of operational definitions—that is, definitions that specify the set of operations necessary to measure something—while objectives usually are. Goals are not quantifiable, but objectives may be and often are.
Statements of goals usually do not specify the time period in which policies are expected to achieve desired consequences, while statements of objectives do. Finally, goals define target populations in broad terms, while objectives define target populations specifically. Contrasts between goals and objectives are illustrated in Table 4.1.

Table 4.1 Contrasts between Goals and Objectives

1. Specification of purposes. Goals are broadly stated (". . . to upgrade the quality of health care . . ."); objectives are concrete (". . . to increase the number of physicians by 10 percent . . .").
2. Definition of terms. Goals are formal (". . . the quality of health care refers to accessibility of medical services . . ."); objectives are operational (". . . the quality of health care refers to the number of physicians per 100,000 persons . . .").
3. Time period. Goals are unspecified (". . . in the future . . ."); objectives are specified (". . . in the period 1990-2000 . . .").
4. Measurement procedure. Goals are nonquantitative (". . . adequate health insurance . . ."); objectives are frequently quantitative (". . . the number of persons covered per 1,000 persons . . .").
5. Treatment of target groups. Goals are broadly defined (". . . persons in need of care . . ."); objectives are specifically defined (". . . families with annual incomes below $19,000 . . .").

The definition of normative futures not only requires that we clarify goals and objectives; it also requires that we identify which sets of policy alternatives are relevant for their achievement. Although these questions may appear simple, they are in fact difficult. Whose goals and objectives should the analyst use as a focal point of forecasts? How does an analyst choose among a large number of alternatives to achieve given goals and objectives? If analysts use existing policies to specify goals, objectives, and alternatives, they run the risk of applying a conservative standard.
If, on the other hand, they propose new goals, objectives, and alternatives, they may be charged with imposing their own beliefs and values or making choices that are closer to the positions of one stakeholder than another. Sources of Goals, Objectives, and Alternatives One way to select goals, objectives, and alternatives is to consider their possible sources. Alternatives imply goals and objectives, just as goals and objectives imply policy alternatives. Sources of policy alternatives, goals, and objectives include the following: 1. Authority. In searching for alternatives to resolve a problem, analysts may appeal to experts. For example, the President's Commission on the Causes and Prevention of Violence may be used as a source of policy alternatives (registration of firearms, restrictive licensing, increased penalties for the use of guns to commit crime) to deal with the problem of gun controls.9 2. Insight. The analyst may appeal to the intuition, judgment, or tacit knowledge of persons believed to be particularly insightful about a problem. These "knowledgeables," who are not experts in the ordinary sense of the word, are an important source of policy alternatives. For example, various stakeholders from the Office of Child Development, a division of the Department of Education, have been used as a source of informed judgments about policy alternatives, goals, and objectives in the area of child welfare.10 3. Method. The search for alternatives may benefit from innovative methods of analysis. For example, new techniques of systems analysis may be helpful in identifying alternatives and rank-ordering multiple conflicting objectives.11 4. Scientific theories. Explanations produced by the natural and social sciences are also an important source of policy alternatives. For example, social psychological 9 See National Commission on the Causes and Prevention of Violence, Final Report, To Establish Justice, to Ensure Domestic Tranquility (Washington, DC: U.S. 
Government Printing Office, 1969).
10 See, for example, Ward Edwards, Marcia Guttentag, and Kurt Snapper, "A Decision-Theoretic Approach to Evaluation Research," in Handbook of Evaluation Research, vol. 1, ed. Elmer L. Struening and Marcia Guttentag (Beverly Hills, CA: Sage Publications, 1975), pp. 159-73.
11 An excellent example is Thomas L. Saaty and Paul C. Rogers, "Higher Education in the United States (1985-2000): Scenario Construction Using a Hierarchical Framework with Eigenvector Weighting," Socio-Economic Planning Sciences 10 (1976): 251-63. See also Thomas L. Saaty, The Analytic Hierarchy Process (New York: Wiley, 1980).

theories of learning have served as one source of early childhood education programs, such as Head Start and Follow Through.

5. Motivation. The beliefs, values, and needs of stakeholders may serve as a source of policy alternatives. Alternatives may be derived from the goals and objectives of particular occupational groups, for example, workers whose changing beliefs, values, and needs have created a new "work ethic" involving demands for leisure and flexible working hours.

6. Parallel case. Experiences with policy problems in other countries, states, and cities are an important source of policy alternatives. The experiences of New York and California with financial reforms have served as a source of financial policies in other states.

7. Analogy. Similarities between different kinds of problems are a source of policy alternatives. Legislation designed to increase equal employment opportunities for women has been based on analogies with policies adopted to protect the rights of minorities.

8. Ethical systems. Another important source of policy alternatives is ethical systems. Theories of social justice put forward by philosophers and other social thinkers serve as a source of policy alternatives in a variety of issue areas.12
APPROACHES TO FORECASTING

Once goals, objectives, and alternatives have been identified, it is possible to select an approach to forecasting. By selecting an approach we mean three things. The analyst must (1) decide what to forecast, that is, determine what the object of the forecast is to be; (2) decide how to make the forecast, that is, select one or more bases for the forecast; and (3) choose techniques that are most appropriate for the object and base selected.

Objects

The object of a forecast is the point of reference of a projection, prediction, or conjecture. Forecasts have four objects:13

1. Consequences of existing policies. Forecasts may be used to estimate changes that are likely to occur if no new government actions are taken. The status quo, that is, doing nothing, is an existing policy. Examples are population projections of the U.S. Bureau of the Census and projections of female labor force participation in 1985 made by the U.S. Bureau of Labor Statistics.14

2. Consequences of new policies. Forecasts may be used to estimate changes in society that are likely to occur if new policies are adopted. For example, energy demand in 1995 may be projected on the basis of assumptions about the adoption of new policies to regulate industrial pollution.15

3. Contents of new policies. Forecasts may be used to estimate changes in the content of new public policies.

12 John Rawls, A Theory of Justice (Cambridge, MA: Harvard University Press, 1971). On ethical systems as a source of policy alternatives, see Duncan MacRae Jr., The Social Function of Social Science (New Haven, CT: Yale University Press, 1976).
13 See William D. Coplin, Introduction to the Analysis of Public Policy Issues from a Problem-Solving Perspective (New York: Learning Resources in International Studies, 1975), p. 21.
The Congressional Research Service, for example, forecasts the possible adoption of a four-week annual paid vacation on the assumption that the government and labor unions will follow the lead of European countries, most of which have adopted four- or five-week annual paid vacations for workers.16

4. Behavior of policy stakeholders. Forecasts may be used to estimate the probable support (or opposition) to newly proposed policies. For example, techniques for assessing political feasibility may be used to estimate the probability that different stakeholders will support a policy at various stages of the policy process, from adoption to implementation.17

Bases

The basis of a forecast is the set of assumptions or data used to establish the plausibility of estimates of consequences of existing or new policies, the content of new policies, or the behavior of stakeholders. There are three major bases of forecasts: trend extrapolation, theoretical assumptions, and informed judgment. Each of these bases is associated with one of the three forms of forecasts previously discussed.

Trend extrapolation is the extension into the future of trends observed in the past. Trend extrapolation assumes that what has occurred in the past will also occur in the future, provided that no new policies or unforeseen events intervene to change the course of events. Trend extrapolation is based on inductive logic, that is, the process of reasoning from particular observations (e.g., time-series data) to general conclusions or claims. In trend extrapolation, we usually start with a set of time-series data, project past trends into the future, and then invoke assumptions about regularity and persistence that justify the projection. The logic of trend extrapolation is illustrated in Figure 4.3.

14 See, for example, Howard N. Fullerton Jr. and Paul O. Flaim, "New Labor Force Projections to 1990," Monthly Labor Review (December 1976).
15 See Barry Hughes, U.S.
Energy, Environment and Economic Problems: A Public Policy Simulation (Chicago: American Political Science Association, 1975).
16 See Everett M. Kassalow, "Some Labor Futures in the United States," Congressional Research Service, Congressional Clearing House on the Future, Library of Congress (January 31, 1978).
17 See Michael K. O'Leary and William D. Coplin, Everyman's "Prince" (North Scituate, MA: Duxbury Press, 1976).

[Figure 4.3 The logic of extrapolation: inductive reasoning. Information: Between 1865 and 1973 the number of federal agencies grew from fewer than 50 to more than 350. Claim: (Probably) By the year 2000 there will be more than 600 federal agencies in the United States. Since (warrant): The average annual increase in new agencies in the period 1974-2000 will be the same as in the 1865-1973 period. Because (backing): Patterns observed in the past will repeat themselves in the future.]

[Figure 4.4 The logic of theoretical prediction: deductive reasoning. Information: Since 1950 there has been a marked increase in the number of policy analysts in federal, state, and local governments. Claim: (Probably) Policy analysts will have more power than policy makers in coming years. Since (warrant): In "postindustrial" society professional knowledge is an increasingly scarce resource which enhances the power of experts. Because (backing): The complexity of social problems requires more and more professional knowledge.]

Theoretical assumptions are systematically structured and empirically testable sets of laws or propositions that make predictions about the occurrence of one event on the basis of another. Theoretical assumptions are causal in form, and their specific role is to explain and predict. The use of theoretical assumptions is based on deductive logic, that is, the process of reasoning from general statements, laws, or propositions to particular sets of information and claims.
For example, the proposition that in "postindustrial" society the knowledge of policy analysts is an increasingly scarce resource which enhances their power may be used to move from information about the growth of professional policy analysis in government to the predictive claim that policy analysts will have more power than policy makers in coming years (Figure 4.4).

Informed judgments refer to knowledge based on experience and insight, rather than inductive or deductive reasoning. These judgments are usually expressed by experts or knowledgeables and are used in cases where theory and/or empirical data are unavailable or inadequate. Informed judgments are often based on retroductive logic, that is, the process of reasoning that begins with claims about the future and then works backward to the information and assumptions necessary to support claims. A good example of informed judgment as a basis of forecasts is the use of scientists or other knowledgeables to make conjectures about future changes in technology. Through retroductive logic, experts may construct a scenario that claims that there will be automated highways with adaptive automobile autopilots in the year 2000. Experts then work backward to the information and assumptions needed to establish the plausibility of the claim (Figure 4.5).

[Figure 4.5 The logic of informed judgment: retroductive reasoning. Information: Research on highway automation is now under way in several "think tanks." Claim: (Probably) By the year 2000 there will be automated highways with adaptive automobile autopilots. Since (warrant): Scientific experts have the insight needed to make a "breakthrough." Because (backing): Scientific experts have tacit knowledge, experience, or special powers to produce such insights.]

In practice the boundaries between inductive, deductive, and retroductive reasoning are often blurred. Retroductive reasoning is often a creative way to explore ways in which potential futures may grow out of the present, while inductive and deductive reasoning yield new information and theories that lead to claims about future societal states. Nevertheless, inductive and deductive reasoning are potentially conservative, because the use of information about past events or the application of established scientific theories may restrict consideration of potential (as distinguished from plausible) futures. A good example of the restrictive influence of past events and established scientific theories comes from the well-known astronomer William H. Pickering (1858-1938):

The popular mind pictures gigantic flying machines speeding across the Atlantic and carrying innumerable passengers in a way analogous to our modern steamships. . . . It seems safe to say that such ideas must be wholly visionary, and even if a machine could get across with one or two passengers the expense would be prohibitive.18

18 Quoted in Brownlee Haydon, The Year 2000 (Santa Monica, CA: Rand Corporation, 1967).
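To make these contrasting logics concrete, the inductive extrapolation diagrammed in Figure 4.3 can be reduced to a few lines of arithmetic. The sketch below is illustrative only: the agency counts are the rounded figures from the diagram, and the constant-annual-increase rule is just one way of operationalizing the persistence assumption.

```python
# A minimal sketch of inductive trend extrapolation (cf. Figure 4.3).
# Agency counts are rounded, illustrative figures; the constant-annual-
# increase rule embodies the "persistence" assumption discussed in the text.

def extrapolate(start_year, start_value, end_year, end_value, target_year):
    """Project forward assuming the past average annual increase persists."""
    annual_increase = (end_value - start_value) / (end_year - start_year)
    return end_value + annual_increase * (target_year - end_year)

# Roughly 50 federal agencies in 1865 growing to 350 by 1973 (illustrative)
projected_2000 = extrapolate(1865, 50, 1973, 350, 2000)
```

Under this straight-line rule the projection for 2000 works out to about 425 agencies; a compound (percentage) growth rule applied to the same endpoints would project substantially more. The choice among such rules is itself an assumption on which the plausibility of the projection rests.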
Choosing Methods and Techniques

While the selection of an object and basis helps guide the analyst toward appropriate methods and techniques, there are literally hundreds of forecasting methods and techniques to choose from.19 A useful way to think about these methods and techniques is to group them according to the bases of forecasts discussed earlier. Table 4.2 outlines the three approaches to forecasting, their bases, appropriate methods, and products. This table serves as an overview of the rest of the chapter.

Table 4.2 Three Approaches to Forecasting with Their Bases, Appropriate Methods and Techniques, and Products

1. Extrapolative forecasting. Basis: trend extrapolation. Appropriate techniques: classical time-series analysis, linear trend estimation, exponential weighting, data transformation, catastrophe methodology. Product: projections.
2. Theoretical forecasting. Basis: theory. Appropriate techniques: theory mapping, causal modeling, regression analysis, point and interval estimation, correlational analysis. Product: predictions.
3. Judgmental forecasting. Basis: informed judgment. Appropriate techniques: conventional Delphi, policy Delphi, cross-impact analysis, feasibility assessment. Product: conjectures.

EXTRAPOLATIVE FORECASTING

Methods and techniques of extrapolative forecasting enable analysts to make projections of future societal states on the basis of current and historical data. Extrapolative forecasting is usually based on some form of time-series analysis, that is, on the analysis of numerical values collected at multiple points in time and presented sequentially. Time-series analysis provides summary measures (averages) of the amount and rate of change in past and future years. Extrapolative forecasting has been used to project economic growth, population decline, energy consumption, quality of life, and agency workloads.

19 See, for example, Daniel P.
Harrison, Social Forecasting Methodology (New York: Russell Sage Foundation, 1976); Denis Johnston, "Forecasting Methods in the Social Sciences," Technological Forecasting and Social Change 2 (1970): 120-43; Arnold Mitchell and others, Handbook of Forecasting Techniques (Fort Belvoir, VA: U.S. Army Engineer Institute for Water Resources, December 1975); and Miller, "Methods for Estimating Societal Futures."

When used to make projections, extrapolative forecasting rests on three basic assumptions:

1. Persistence. Patterns observed in the past will persist in the future. If energy consumption has grown in the past, it will do so in the future.
2. Regularity. Past variations in observed trends will regularly recur in the future. If wars have occurred every twenty or thirty years in the past, these cycles will repeat themselves in the future.
3. Reliability and validity of data. Measurements of trends are reliable (i.e., relatively precise or internally consistent) and valid (i.e., measure what they purport to be measuring). For example, crime statistics are relatively imprecise measures of criminal offenses.

When these three assumptions are met, extrapolative forecasting may yield insights into the dynamics of change and greater understanding of potential future states of society. When any one of these assumptions is violated, extrapolative forecasting techniques are likely to yield inaccurate or misleading results.20

Classical Time-Series Analysis

When making extrapolative forecasts, we may use classical time-series analysis, which views any time series as having four components: secular trend, seasonal variations, cyclical fluctuations, and irregular movements. Secular trend is a smooth long-term growth or decline in a time series. Figure 4.6 shows a secular trend in the growth of crimes per 1,000 persons in Chicago over a thirty-year period.
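A secular trend like the one in Figure 4.6 is conventionally summarized by fitting a straight line by ordinary least squares, the linear trend estimation technique listed in Table 4.2. The sketch below uses hypothetical arrest rates, not the Chicago data:

```python
# Least-squares fit of a straight-line (secular) trend.
# The data are hypothetical annual arrest rates per 1,000 persons,
# not the Chicago series plotted in Figure 4.6.

def linear_trend(years, values):
    """Return intercept a and slope b of the least-squares line
    value = a + b * year."""
    n = len(years)
    mean_x = sum(years) / n
    mean_y = sum(values) / n
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(years, values))
    sxx = sum((x - mean_x) ** 2 for x in years)
    b = sxy / sxx
    a = mean_y - b * mean_x
    return a, b

years = [1940, 1945, 1950, 1955, 1960, 1965, 1970]
values = [20, 23, 25, 28, 30, 33, 35]  # hypothetical arrests per 1,000

a, b = linear_trend(years, values)
trend_1975 = a + b * 1975  # extrapolated trend value for 1975
```

Here the fitted slope is 0.5 arrests per 1,000 per year, so extrapolating the trend line one period ahead projects roughly 37.7 arrests per 1,000 in 1975, a projection that is valid, of course, only under the persistence and regularity assumptions above.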
By convention, the time-series variable is plotted on the Y-axis (also called the ordinate) and years are plotted on the X-axis (also called the abscissa). A straight-line trend has been used to summarize the growth of total arrests per 1,000 persons between 1940 and 1970. In other cases (e.g., mortality), the straight-line trend shows a long-term decline, while still other cases (e.g., consumption of coal and oil) show a curvilinear trend, that is, a trend where the numerical values in a time series display a convex or concave pattern.

Seasonal variation, as the term suggests, is the variation in a time series that recurs periodically within a one-year period or less. The best examples of seasonal variations are the ups and downs of production and sales that follow changes in weather conditions and holidays. The workloads of social welfare, health, and public utilities agencies also frequently display seasonal variations as a result of weather conditions and holidays. For example, the consumption of home heating fuels increases in the winter months and begins to decline in March of each year.

20 Fidelity to these and other methodological assumptions is no guarantee of accuracy. In this author's experience, the less accurate of two or more forecasts is frequently based on strict adherence to technical assumptions. This is why judgment is so important to all forms of forecasting, including complex modeling.

Figure 4.6 Demonstration of secular trend: total arrests per 1,000 population in Chicago, 1940-70. Source: Adapted from Ted R. Gurr, "The Comparative Analysis of Public Order," in The Politics of Crime and Conflict, ed. Ted R. Gurr, Peter N. Grabosky, and Richard C. Hula (Beverly Hills, CA: Sage Publications, 1977), p. 647.

Cyclical fluctuations are also periodic but may extend unpredictably over a number of years.
Cycles are frequently difficult to explain, because each new cyclical fluctuation may be a consequence of unknown factors. Total arrests per 1,000 population in Chicago, displayed over a period of more than a hundred years, show at least three cyclical fluctuations (Figure 4.7). Each of these cycles is difficult to explain, although the third cyclical fluctuation coincides with the Prohibition Era (1919-33) and the rise of organized crime. This example points to the importance of carefully selecting an appropriate time frame, because what may appear to be a secular trend may in fact be part of some larger long-term pattern of cyclical fluctuations. Note also that total arrests per 1,000 persons were higher in the 1870s and 1890s than in most years from 1955 to 1970.

The interpretation of cyclical fluctuations is frequently made more difficult by the presence of irregular movements, that is, unpredictable variations in a time series that appear to follow no regular pattern. Irregular movements may be the result of many factors (changes in government, strikes, natural disasters). As long as these factors are unaccounted for, they are treated as random error, that is, unknown sources of variation that cannot be explained in terms of secular trend, seasonal variations, or cyclical fluctuations. For example, the irregular movement in total arrests that occurred after 1957 (Figure 4.7) might be regarded as an unpredictable variation in the time series that follows no regular pattern. On closer inspection, however, the sharp temporary upswing in arrests may be explained by

Figure 4.7 Demonstration of cyclical fluctuations: total arrests per 1,000 population in Chicago, 1868-1970 (annotated "Appointment of Chief Wilson"). Source: Ted R. Gurr, "The Comparative Analysis of Public Order," in The Politics of Crime and Conflict, ed. Ted R. Gurr, Peter N. Grabosky, and Richard C.
Hula (Beverly Hills, CA: Sage Publications, 1977), p. 647.

the changes in record keeping that came into effect when Orlando Wilson was appointed chief of police.21 This example points to the importance of understanding the sociohistorical and political events underlying changes in a time series.

Linear Trend Estimation

A standard technique for extrapolating trends is linear trend estimation, a procedure that uses regression analysis to obtain statistically reliable estimates of future societal states on the basis of observed values in a time series. Linear regression is based on assumptions of persistence, regularity, and data reliability. When linear regression is used to estimate trend, it is essential that observed values in a time series are not curvilinear, because any significant departure from linearity will produce forecasts with sizable errors. Nevertheless, linear regression may also be used to remove the linear trend component from a series that displays seasonal variations or cyclical fluctuations.

21 See Donald T. Campbell, "Reforms as Experiments," in Readings in Evaluation Research, ed. Francis G. Caro (New York: Russell Sage Foundation, 1971), pp. 240-41.

There are two important properties of regression analysis:

1. Deviations cancel. The sum of the differences between observed values in a time series and values lying along a computed straight-line trend (called a regression line) will always equal zero. Thus, if the trend value (Yt) is subtracted from its corresponding observed value (Y) for all years in a time series, the total of these differences (called deviations) will equal zero. When the observed value for a given year lies below the regression line, the deviation (Y - Yt) is always negative. By contrast, when the observed value is above the regression line, the deviation (Y - Yt) is always positive. These negative and positive values cancel each other out, such that Σ(Y - Yt) = 0.

2.
Squared deviations are a minimum. If we square each deviation (i.e., multiply each deviation value by itself) and add them all up, the sum of these squared deviations will always be a minimum or least value. This means that linear regression minimizes the distances between the regression line and all observed values of Y in the series. In other words, it is the most efficient way to draw a trend line through a series of observed data points.

These two properties of regression analysis are illustrated with hypothetical data in Figure 4.8.

Figure 4.8 Two properties of linear regression. Note: Σ(Y - Yt) = 0; Σ(Y - Yt)² = a minimum (least) value.

The computation of linear trend with regression analysis is illustrated in Table 4.3 with data on energy consumption. In column (3), note that years in the series may be coded by calculating the numerical distance (x) of each year from the middle of the entire period (i.e., 1973). The middle of this series is given a coded value of zero and treated as the origin. The values of x range from -3 to +3 and are deviations from the origin. Observe also the computational procedures used in columns (4) and (5) of Table 4.3. The value of xY is calculated by multiplying values of energy consumption (Y) by each coded time value (x), as shown in column (4). The value of x², shown in column (5), is calculated by multiplying each value of x by itself, that is, squaring all the x values in column (3). Finally, column (6) contains trend values of the time-series variable (Yt).
These trend values, which form a straight line, are calculated according to the following equation:

Yt = a + b(x)

where

Yt = the trend value for a given year
a = the value of Yt when x = 0
b = the slope of the trend line representing the change in Yt for each unit of time
x = the coded time value for any year, as determined by its distance from the origin

Once the values of a and b have been calculated, estimates of total energy consumption may be made for any year in the observed time series or for any future year. For example, Table 4.3 shows that the trend value for energy consumption in 1972 is 70.27 quadrillion BTUs. To project total energy consumption for 1980, we set the value of x at 7 (i.e., seven time periods away from 1973, the origin) and solve the equation Yt(1980) = a + b(x).

The formula for computing the value of a is

a = ΣY / n

where

ΣY = the sum of observed values in the series
n = the number of years in the observed time series

The formula for computing the value of b is

b = Σ(xY) / Σ(x²)

where

Σ(xY) = the sum of the products of the coded time values and the observed values in the series [see column (4) of Table 4.3]
Σ(x²) = the sum of the squared coded time values [see column (5) of Table 4.3]

Table 4.3  Time-Series Data on Total Energy Consumption Used in Linear Regression

Years   Energy         Coded Time   Columns (2)     Column (3)     Trend Value for
(X)     Consumption    Value        By (3)          Squared        Energy Consumption
        (Y)            (x)          (xY)            (x²)           (Yt)
(1)     (2)            (3)          (4)             (5)            (6)
1970    66.9           -3           -200.7           9             68.35
1971    68.3           -2           -136.6           4             69.31
1972    71.6           -1            -71.6           1             70.27
1973    74.6            0              0             0             71.24
1974    72.7            1             72.7           1             72.20
1975    70.6            2            141.2           4             73.17
1976    74.0            3            222.0           9             74.13
n = 7   ΣY = 498.7     Σx = 0       Σ(xY) = 27.0    Σ(x²) = 28

Yt = a + b(x)
a = ΣY / n = 498.7 / 7 = 71.243
b = Σ(xY) / Σ(x²) = 27.0 / 28.0 = 0.964
Yt = 71.24 + 0.964(x)
Yt(1980) = 71.24 + 0.964(7) = 77.99 quadrillion BTUs

The calculations in Table 4.3 not only permit us to compute trend values for energy consumption in the observed series, they also enable
us to project total energy consumption for any given year in the future. Thus, the estimate of total energy consumption in 1980 [Yt(1980)] is 77.99 quadrillion BTUs.

In illustrating the application of linear regression, we have used a time series with an odd number of years. We treated the middle of the series as the origin and coded it as zero. Obviously, there are many time series that contain an even number of years, and a different procedure may be used for determining each coded time value (x), because there is no middle year. The procedure used with even-numbered time series is to divide the series into two equal parts and code the time values in intervals of two (rather than in intervals of one, as in odd-numbered series). Each year in Table 4.4 is exactly two units away from its neighbor, and the highest and lowest coded time values are +7 and -7, rather than +3 and -3. Note that the size of the interval need not be 2; it could be 3, 4, 5, or any number, provided that each year is equidistant from the next one in the series. Increasing the size of the interval will not affect the computation of results.

Despite its precision in extrapolating secular trend, linear regression is limited by several conditions. First, the time series must be linear, that is, display a constant increase or decrease in values along the trend line.
Table 4.4  Linear Regression with an Even-Numbered Series

Years   Energy         Coded Time   Columns (2)     Column (3)      Trend Value for
(X)     Consumption    Value        By (3)          Squared         Energy Consumption
        (Y)            (x)          (xY)            (x²)            (Yt)
(1)     (2)            (3)          (4)             (5)             (6)
1969    64.4           -7           -450.8          49              66.14
1970    66.9           -5           -334.5          25              67.36
1971    68.3           -3           -204.9           9              68.57
1972    71.6           -1            -71.6           1              69.78
1973    74.6            1             74.6           1              71.00
1974    72.7            3            218.1           9              72.21
1975    70.6            5            353.0          25              73.43
1976    74.0            7            518.0          49              74.64
n = 8   ΣY = 563.1     Σx = 0       Σ(xY) = 101.9   Σ(x²) = 168

Yt = a + b(x)
a = ΣY / n = 563.1 / 8 = 70.39
b = Σ(xY) / Σ(x²) = 101.9 / 168 = 0.607
Yt = 70.39 + 0.607(x)
Yt(1980) = 70.39 + 0.607(15) = 79.495 quadrillion BTUs

If the pattern of observations is nonlinear (i.e., where the amounts of change increase or decrease from one time period to the next), other techniques must be used. Some of these techniques require fitting various types of curves to nonlinear time series. Second, plausible arguments must be offered to show that historical patterns will persist in the future, that is, continue in much the same form in subsequent years as they have in past ones. Third, patterns must be regular, that is, display no cyclical fluctuations or sharp discontinuities. Unless all of these conditions are present, linear regression should not be used to extrapolate trend.

The SPSS output for the regression analyses computed by hand in Tables 4.3 and 4.4 is displayed in Exhibit 4.1. As may be seen by comparing Table 4.3 in the text with the SPSS output (Table 4.3 in Exhibit 4.1), the results are identical. The SPSS output identifies the dependent variable (named "ENCONS" for energy consumption), the number of observations in the sample (n = 7), the correlation coefficient expressing the strength of the relationship between time and energy consumption (R = .729), and the squared correlation coefficient (R Square = .531), which expresses the proportion of variance in the dependent variable ("ENCONS") explained by the independent variable ("TIMECODE").
Exhibit 4.1  SPSS Output for Tables 4.3 and 4.4

Model Summary(b)
Model   R       R Square   Adjusted R Square   Std. Error of the Estimate   Durbin-Watson
1       .729a   .531       .437                2.14575994                   1.447
a. Predictors: (Constant), TIMECODE
b. Dependent Variable: ENCONS

ANOVA(b)
Model          Sum of Squares   df   Mean Square   F       Sig.
1 Regression   26.036            1   26.036        5.655   .063a
  Residual     23.021            5    4.604
  Total        49.057            6
a. Predictors: (Constant), TIMECODE
b. Dependent Variable: ENCONS

Coefficients(a)
               Unstandardized Coefficients   Standardized Coefficients
Model          B        Std. Error           Beta                        t        Sig.
1 (Constant)   71.243   .811                                             87.843   .000
  TIMECODE     .964     .406                 .729                        2.378    .063
a. Dependent Variable: ENCONS

Residuals Statistics(a)
                       Minimum     Maximum     Mean        Std. Deviation   N
Predicted Value        68.349998   74.135712   71.242857   2.08309522       7
Residual               -2.571429   3.3571429   8.120E-15   1.95880187       7
Std. Predicted Value   -1.389      1.389       .000        1.000            7
Std. Residual          -1.198      1.565       .000        .913             7
a. Dependent Variable: ENCONS

For present purposes, the most important part of the SPSS output gives the coefficient for the constant, a, which is 71.243: this is the value of Y, energy consumption, when X equals zero. The TIMECODE line gives the coefficient for the slope of the regression line, b, which is 0.964: this is the amount of change in Y, energy consumption, for each unit change in X, the years. This part of the output enables us to write the regression equation and, using the coefficients in this equation, forecast the value of energy consumption for any future year. The regression equation for Table 4.3 is

Yt = 71.243 + 0.964(x)

The regression equation for Table 4.4, which has eight rather than seven years in the time series, is

Yt = 70.388 + 0.607(x)

Note how much of a difference there is in the slopes after adding only one year to the series. Many time-series data of concern to policy analysts (e.g., data on crime, pollution, public expenditures, urbanization) are nonlinear.
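Before turning to nonlinear series, the hand computations in Tables 4.3 and 4.4 can be checked with a short script. This is a minimal sketch (not from the text) of the formulas a = ΣY/n and b = Σ(xY)/Σ(x²), which hold because the coded time values sum to zero; the function name `linear_trend` is our own.

```python
# Sketch of the least-squares formulas from Tables 4.3 and 4.4:
# a = sum(Y) / n and b = sum(xY) / sum(x^2), valid when sum(x) = 0.

def linear_trend(values, x_codes):
    """Return (a, b) for the trend line Yt = a + b(x)."""
    a = sum(values) / len(values)
    b = sum(x * y for x, y in zip(x_codes, values)) / sum(x * x for x in x_codes)
    return a, b

# Odd-numbered series (Table 4.3): 1970-1976, origin at 1973.
y_odd = [66.9, 68.3, 71.6, 74.6, 72.7, 70.6, 74.0]
a1, b1 = linear_trend(y_odd, [-3, -2, -1, 0, 1, 2, 3])
print(round(a1, 3), round(b1, 3), round(a1 + b1 * 7, 2))
# 71.243 0.964 77.99  (1980 is 7 coded units past the origin)

# Even-numbered series (Table 4.4): 1969-1976, coded in intervals of two.
y_even = [64.4, 66.9, 68.3, 71.6, 74.6, 72.7, 70.6, 74.0]
a2, b2 = linear_trend(y_even, [-7, -5, -3, -1, 1, 3, 5, 7])
print(round(a2, 2), round(b2, 3))   # 70.39 0.607
```

The results match the SPSS coefficients (71.243 and 0.964; 70.388 and 0.607) because ordinary least squares reduces to these two formulas whenever the time codes are symmetric around zero.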
A variety of techniques have been developed to fit nonlinear curves to patterns of change that do not display a constant increase or decrease in the values of an observed time series. Although these techniques are not presented in detail, we discuss some of their properties and underlying assumptions. Readers who wish to investigate these forecasting techniques in greater depth should consult advanced texts on time-series analysis.22

Nonlinear Time Series

Time series that do not meet conditions of linearity, persistence, and regularity fall into five main classes (Figure 4.9):

1. Oscillations. Here there are departures from linearity, but only within years, quarters, months, or days. Oscillations may be persistent and regular (e.g., most police arrests occur between 11 p.m. and 2 a.m. throughout the year) but not show a constant increase or decrease within the period under examination (Figure 4.9[a]). Oscillations within years may occur in conjunction with long-term secular trends between years. Examples include seasonal variations in unemployment, monthly variations in agency workloads, and daily variations in levels of pollutants.

2. Cycles. Cycles are nonlinear fluctuations that occur between years or longer periods of time. Cycles may be unpredictable or occur with persistence and regularity. While the overall pattern of a cycle is always nonlinear, segments of a given cycle may be linear or curvilinear (Figure 4.9[b]). Examples are business cycles and the "life cycles" of academic fields, scientific publications, and civilizations.

3. Growth curves. Departures from linearity occur between years, decades, or some other unit of time. Growth curves evidence cumulative increases in the rate of growth in a time series, cumulative decreases in this rate of growth, or some combination of the two (Figure 4.9[c]). In the latter case, growth curves are S-shaped and called sigmoid or logistic curves.
Growth curves, which developed out of studies of biological organisms, have been used to forecast the growth of industry, urban areas, population, technology, and science. Although growth curves are not linear, they are nevertheless persistent and regular.

4. Decline curves. Here departures from linearity again occur between years, decades, or longer periods. In effect, decline curves are the counterpart of growth curves. Decline curves evidence either cumulative increases or decreases in the rate of decline in a time series (Figure 4.9[d]). Increasing and decreasing rates of decline may be combined to form curves with different shapes. Patterns of decline are sometimes used as a basis for various dynamic or "life-cycle" perspectives of the decline of civilizations, societies, and urban areas. Decline curves are nonlinear but regular and persistent.

5. "Catastrophes." The main characteristic of time-series data that are "catastrophic" is that they display sudden and sharp discontinuities. The analysis of catastrophic change, a field of study founded by the French mathematician René Thom, not only involves nonlinear changes over time, it also involves patterns of change that are discontinuous (Figure 4.9[e]). Examples include sudden shifts in government policy during war (surrender or withdrawal), the collapse of stock exchanges in times of economic crisis, and the sudden change in the density of a liquid as it boils.23

Figure 4.9 Five classes of nonlinear time series.

22 See G. E. P. Box and G. M. Jenkins, Time Series Analysis: Forecasting and Control (San Francisco, CA: Holden-Day, 1969); and S. C. Wheelwright and S. Makridakis, Forecasting Methods for Management (New York: Wiley, 1973).

The growth and decline curves illustrated in Figures 4.9(c) and (d) cannot be described by a straight-line trend.
Patterns of growth and decline display little or no cyclical fluctuation, and the trend is best described as exponential growth or decline, that is, growth or decline where the values of some quantity increase or decrease at an increasing rate. The growth of federal government organizations between 1789 and 1973 (Figure 4.10) is an example of exponential growth. After 1789 the total number of organizations began to grow slowly, until about 1860, when the growth rate began to accelerate. Clearly, this growth trend is very different from the secular trends and cyclical variations examined so far.24

Techniques for fitting curves to processes of growth and decline are more complex than those used to estimate secular trend. While many of these techniques are based on linear regression, they require various transformations of the time-series variable (Y). Some of these transformations involve roots (√Y), while others require logarithms (log Y) or exponents (eˣ). In each case, the aim is to express mathematically changes in a time series that increase by increasing or decreasing amounts (or, conversely, decrease by increasing or decreasing amounts). While these techniques will not be presented here, their logic is sufficiently important for public policy analysis to warrant further illustration.

A simple illustration will serve best to clarify techniques for curve fitting. Recall the model of compound interest. This model states that

Sn = (1 + r)ⁿS₀

23 See C. A. Isnard and E. C. Zeeman, "Some Models from Catastrophe Theory in the Social Sciences," in The Use of Models in the Social Sciences, ed. Lyndhurst Collins (Boulder, CO: Westview Press, 1976), pp. 44-100.
24 The outstanding work on processes of growth in science and other areas is Derek de Solla Price, Little Science, Big Science (New York: Columbia University Press, 1963).
Figure 4.10 Growth of federal government organizations in the United States by presidential term, 1789-1973. Source: Herbert Kaufman, Are Government Organizations Immortal? (Washington, DC: Brookings Institution, 1976), p. 62.

where

Sn = the amount to which a given investment will accumulate in a given (n) number of years
S₀ = the initial amount of the investment
(1 + r)ⁿ = a constant return on investment (1.0) plus the rate of interest (r) in a given (n) number of years

Imagine that the manager of a small municipality of 10,000 persons is confronted by the following situation. In order to meet immediate out-of-pocket expenses for emergencies, it was decided in 1970 that a sum of $1,000 should be set aside in a special checking account. The checking account is very handy but earns no interest. In 1971 and subsequent years, because of rising inflation, the manager began to increase the sum in the special account by $100 per year. The increase in funds, illustrated in Figure 4.11(a), is a good example of the kind of linear trend we have been examining. The time series increases by constant amounts ($100 per year), and by 1980 there was $2,000 in the special account. In this case, the time-series values (Y) are identical to the trend values (Yt = Y), because all values of Y are on the trend line.

Figure 4.11 Linear versus growth trends: (a) linear trend; (b) growth trend, with fitted line Yt = 1,753.30 + 82.31(x).

Now consider what would have occurred if the manager had placed the funds in a special interest-bearing account (we will assume that there are no legal restrictions and that withdrawals can be made without penalties). Assume that the annual rate of interest is 10 percent compounded annually.
Assume also that only the original $1,000 was left in the account for the ten-year period, that is, no further deposits were made. The growth of city funds, illustrated in Figure 4.11(b), is a good example of the kind of growth trend we have been discussing. City funds increased by increasing amounts over the ten-year period (but at a constant interest rate compounded annually), and the total funds available at the end of 1980 were $2,594, compared with $2,000 in the non-interest-bearing account (Figure 4.11[a]). Note that in the first case (no interest) it takes ten years to double the original amount. No accumulation above the original $1,000 comes from interest. In the second case, it takes a little more than seven years to double the original amount, and all of the additional accumulation ($1,594) comes from interest. The values used to calculate accumulation at the end of 1980 are

S₁₀ = (1 + r)¹⁰S₀ = (1.1)¹⁰($1,000) = (2.5937)($1,000) = $2,594

Note that accumulation for any given year (e.g., 1975 as the fifth year) may be calculated simply by substituting the appropriate values in the formula: S₅ = (1 + r)⁵($1,000) = (1.6105)($1,000) = $1,610.51.

Consider now the limitations of linear regression in estimating growth trends such as that illustrated in Figure 4.11(b). The linear regression equation of Figure 4.11(b) [Yt = 1,753.30 + 82.31(x)] will produce an inaccurate forecast. For example, a 1990 forecast using the linear regression equation yields a trend estimate of $4,140.29 [Yt(1990) = 1,753.30 + 82.31(29) = $4,140.29]. By contrast, the compound interest formula, which exactly represents the nonlinear growth in accumulated funds, produces a 1990 estimate of $6,727.47 [S₂₀ = (1.1)²⁰($1,000) = $6,727.47].25 The linear regression estimate is therefore highly inaccurate. Fortunately, the linear regression technique may be adapted to estimate growth trends.
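The comparison above can be sketched in a few lines of code. This is a minimal sketch under the text's assumptions ($1,000 deposited in 1970 at 10 percent compounded annually, time coded in intervals of two); the helper `linear_trend` is our own, and the small differences from the text's dollar figures come from the text's intermediate rounding of the fitted coefficients.

```python
# Compound growth versus a straight-line fit, as in the municipal
# account example: S_n = (1 + r)^n * S_0.

def linear_trend(values, x_codes):
    """Least-squares line Yt = a + b(x) for time codes that sum to zero."""
    a = sum(values) / len(values)
    b = sum(x * y for x, y in zip(x_codes, values)) / sum(x * x for x in x_codes)
    return a, b

s0, r = 1000.0, 0.10
balances = [s0 * (1 + r) ** n for n in range(1, 11)]   # year-end, 1971-1980
print(round(balances[-1], 2))                          # 2593.74, i.e. ~$2,594

# Fit the ten balances with time coded -9, -7, ..., +9 (intervals of two).
x = list(range(-9, 10, 2))
a, b = linear_trend(balances, x)

linear_1990 = a + b * 29               # 1990 is 29 coded units past 1976
compound_1990 = s0 * (1 + r) ** 20     # 20 years of interest by 1990
print(round(linear_1990, 2), round(compound_1990, 2))
# about 4139.66 versus 6727.5 (the text's $4,140.29 and $6,727.47
# reflect rounding of the fitted coefficients and powers)
```

The linear fit undershoots the compound trajectory by more than $2,500 after only ten out-of-sample years, which is the point of the example: extrapolating a straight line through exponential data fails quickly.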
For regression to do this, however, it is necessary to change our original linear equation [Yt = a + b(x)] to a nonlinear one. Although there are many ways to do this, two of the most common procedures are exponential weighting and data transformation.

Exponential Weighting

In exponential weighting the analyst may raise one of the terms in the regression equation to a power, for example, square the value of (x), or add another term, also raised to a power, to the equation [e.g., c(x²)]. The more pronounced the increase (or decrease) in the observed growth pattern, the higher the power necessary to represent it. For example, if we have slowly increasing amounts of change, such as those illustrated by the compound interest formula (Figure 4.11[b]), we might add to the original linear regression equation a third term [c(x²)] that has been raised to a power by squaring the coded time value (x). The equation (called a second-degree parabola) then reads

Yt = a + b(x) + c(x²) = 1,753.30 + 82.31(x) + 1.8(x²)

Note that the only part of this equation that is not linear is the squared time value (x²). This means that any value of x computed for given years will increase by a power of 2 (e.g., coded values of -5 and +9 become 25 and 81, respectively). The higher the original x value, the greater the amount of change, because the squares of larger numbers produce disproportionately larger products than do the squares of smaller numbers. Whereas a coded time value (x) of 9 is three times the value of 3 in the linear regression equation, in the second-degree parabola the same year is nine times the value of the corresponding year:

25 Note that the coded time values for this even-numbered series are two units apart (see Table 4.6 below). The year 1990 is 29 units away from 1976. Note that there are other ways to code time (e.g., 1, 2, . . . , n) that are cumbersome for the hand calculations required here.
9/3 = 3, but 9²/3² = 81/9 = 9

It is now easier to visualize what it means to say that growth processes exhibit increasing increases in a time series.

Data Transformation

A second procedure used to adapt linear regression techniques to processes of growth and decline is data transformation. Whereas exponential weighting involves the writing of a new and explicitly nonlinear regression equation [e.g., Yt = a + b(x) + c(x²)], the transformation of data permits analysts to work with the simple linear equation [Yt = a + b(x)], but only after values of the time-series variable (Y) have been appropriately transformed. One way to transform the time-series variable is to take its square root (√Y). Another way to transform data is to take the logarithm of values of the time-series variable. The common (base 10) logarithm of a number is the power to which 10 must be raised to produce that number. For example, 2 is the logarithm of 100, because 10 must be raised to the power of 2 to produce 100 (10 × 10 = 10² = 100). The natural (base e) logarithm of a number is the power to which the base e, which equals 2.71828, must be raised to produce that number. For example, the natural logarithm of 100 is 4.6052, because 2.71828 raised to the power 4.6052 equals 100. The abbreviation for base 10 logarithms is log, and the abbreviation for base e logarithms is ln.

For readers with no experience with logarithms and roots, it is easier to grasp the nature of these transformations if we study a hypothetical time series that exhibits extremely rapid growth (Table 4.5). We can then observe the consequences of taking roots and logarithms of the time-series variable before solving the simple linear regression equation. Note that all computations used in Table 4.5 are identical to those already used in applying linear regression to estimate trend.
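The effect of the logarithmic transformation can be sketched with a short script. This is a hypothetical five-period series of the same kind as Table 4.5 (growth by a factor of 10 each period), not the table's own data; the computations are the same two least-squares formulas used earlier, applied to log10(Y) instead of Y.

```python
# "Straightening out" an explosive series: fit the simple linear
# equation to log10(Y), then transform the forecast back.
from math import log10

y = [10.0, 100.0, 1000.0, 10000.0, 100000.0]   # grows tenfold per period
x = [-2, -1, 0, 1, 2]                          # coded time, origin at center

log_y = [log10(v) for v in y]                  # 1, 2, 3, 4, 5: now linear
a = sum(log_y) / len(log_y)
b = sum(xi * yi for xi, yi in zip(x, log_y)) / sum(xi * xi for xi in x)

forecast = 10.0 ** (a + b * 3)                 # one period past the series
print(round(forecast))                         # 1000000: the log fit is exact
```

Because the transformed series is perfectly linear, the fitted line reproduces the tenfold growth exactly, which is why the logarithmic equation in the text hits the 1981 value of 1,000,000 while the untransformed and square-root equations fall far short.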
The only difference is that values of Y have been transformed, either by taking their square roots or by taking their common (base 10) logarithms. Observe that the linear regression procedure is wholly inappropriate for describing the sharp growth trend that progresses by multiples of 10 in each period [column (2), Table 4.5]. The linear equation [Yt = a + b(x)] produces a 1981 forecast of 85,227, when the actual value of the series will be 1,000,000! (We are assuming, of course, that the series will grow by a factor of 10 in 1981 and subsequent years.) The linear equation based on the square root transformation [√Yt = a + b(x)] helps very little with this explosive time series. The transformation results in a 1981 estimate of 94,249, hardly much of an improvement. Only the logarithmically transformed linear equation [log Yt = a + b(x)] gives us the exact value for 1981. In effect, what we have done is "straighten out" the growth trend by taking the common logarithm of Y, thus permitting the use of linear regression to extrapolate trend. This does not mean that the trend is actually linear (obviously, it is not). It only means that we can use linear regression to make a precise forecast of the value of the nonlinear trend in 1981 or any other year.

Figure 4.12 Four types of causal arguments: (a) convergent, (b) divergent, (c) serial, and (d) cyclic, illustrated with linkages among increasing centralization of public management, increasing alienation of public employees, declining output per employee in public organizations, and increasing costs of public services.
suggests that major problems of American government (inefficiency, inequity, unresponsiveness) are partly a consequence of the teachings of public administration, a discipline that for fifty and more years has emphasized centralization, hierarchy, and the consolidation of administrative powers. In an effort to explain the inefficiency of governmental institutions, one public choice theorist, Vincent Ostrom, offers the following argument, which has been mapped by underlining key words and supplying logical indicators in brackets:

Gordon Tullock in The Politics of Bureaucracy (1965) analyzes the consequences which follow when [IT IS ASSUMED THAT] rational, self-interested individuals pursue maximizing strategies in very large public bureaucracies. Tullock's "economic man" is [THEREFORE] an ambitious public employee who seeks to advance his career opportunities for promotions within a bureaucracy. Since career advancement depends upon favorable recommendations by his superiors, a career-oriented public servant will act so as to please his superiors. [THEREFORE] Favorable information will be forwarded; unfavorable information will be repressed. [SINCE] Distortion of information will diminish control and create expectations which diverge from events generated by actions. Large-scale bureaucracies will thus become error-prone and cumbersome in adapting to rapidly changing conditions. [SINCE] Efforts to correct the malfunctioning of bureaucracy by tightening control will simply magnify errors. A decline in return to scale can [THEREFORE] be expected to result. [BECAUSE] The larger the organization becomes, the smaller the percent of its activities will relate to output and the larger the proportion of its efforts will be expended on management.35

In the argument just quoted, we have already used procedures (2) and (3) of theory mapping.
We have underlined words that indicate claims or assumptions underlying claims and supplied missing logical operators in brackets. Observe that the argument begins with an assumption about human nature ("economic man"), an assumption so fundamental that it is called an axiom. Axioms are regarded as true and self-evident; that is, they are believed to require no proof. Note also that this is a complex argument whose overall structure is difficult to grasp simply by reading the passage. The structure of the argument can be more fully described when we complete steps (1) and (4). Using procedure (1), we separate and number each claim and its warranting assumption, changing some of the words to improve the clarity of the original argument.

[SINCE] 1. Public employees working in very large public bureaucracies are rational, self-interested individuals who pursue maximizing strategies ("economic man").
[AND] 2. The desire for career advancement is a consequence of rational self-interest.
[THEREFORE] 3. Public employees strive to advance their career opportunities.
[SINCE] 4. The advancement of career opportunities is a consequence of favorable recommendations from superiors.
[AND] 5. Favorable recommendations from superiors are a consequence of receiving favorable information from subordinates.
[THEREFORE] 6. Subordinates striving to advance their careers will forward favorable information and suppress unfavorable information.
[SINCE] 7. The repression of unfavorable information creates errors by management, reducing their flexibility in adapting to rapidly changing conditions.
[AND] 8. The repression of unfavorable information diminishes managerial control.
[THEREFORE] 9. Managers compensate for their loss of control by attempting to tighten control.
[AND] 10. Compensatory attempts to tighten control further encourage the repression of unfavorable information (6) and the magnification of management errors (7). Further loss of control (8) produces attempts to tighten control (9).
[SINCE] 11. Compensatory tightening of control is a consequence of the size of public bureaucracies and requires the expenditure of a larger proportion of effort on management activities and a smaller proportion on output activities.
[THEREFORE] 12. The larger the public bureaucracy, the smaller the proportion of effort expended on output activities in relation to size, that is, the less the return to scale.

35 Ibid., p. 60. Underlined words and bracketed insertions have been added for purposes of illustrating theory-mapping procedures.

By separating and numbering each claim and warranting assumption, we have begun to expose the logical structure of the argument. Note that there are different types of causal arguments in this theory of governmental inefficiency. The first part of the quotation contains a serial argument that emphasizes the influence of employee motivation on information error and managerial control. The second part of the quotation contains another serial argument, but one that stresses the importance of the size of public bureaucracies. In addition, there is one cyclic argument (error magnification) and several divergent and convergent arguments. The structure of the overall argument cannot be satisfactorily exposed until we complete procedure (4) and draw an arrow diagram that depicts the causal structure of the argument (Figure 4.13). Observe, first, that there are two serial arguments: (1, 2, 3) and (5, 4, 3). The second of these (5, 4, 3) can stand by itself and does not require the first (1, 2, 3), which rests exclusively on deductions from the axiom of "economic man."
Second, there is one divergent argument (5, 4, 6), which indicates that the same factor (5) has multiple consequences. There are also three convergent arguments (4, 2, 3), (3, 5, 6), and (8, 10, 9), which suggest in each case that there are two factors that explain the occurrence of the same event. The arrow diagram also exposes two central features of this theoretical argument. It makes explicit an important potential relationship, denoted by the broken line between (10) and (6), which suggests that the size of public bureaucracies may independently affect the tendency to forward or suppress information. At the same time, the arrow diagram helps identify a cyclic argument (6, 7, 8, 9) that is crucial to this theory. This part of the theory argues, in effect, that we may expect a cumulative increase in the amount of management error as a result of the self-reinforcing relationship among information suppression (6), management error (7), loss of management control (8), compensatory tightening of control (9), consequent further encouragement of information suppression (6), and so on. This cyclic argument, also known as a positive feedback loop (positive because the values of variables in the cycle continue to grow), suggests that any forecast based on public choice theory will predict a curvilinear pattern of growth in management error and government inefficiency.

[Figure 4.13 Arrow diagram illustrating the causal structure of the argument. Variables: 1. Rational Self-Interest (RSI); 2. Desire for Career Advancement (DCA); 3. Striving to Advance Career Opportunities (SAC); 4. Favorable Recommendations from Superiors (FRS); 5. Favorable Information from Subordinates (FIS); 6. Suppression of Unfavorable Information (SUI); 7. Management Error (ME); 8. Loss of Managerial Control (LMC); 9. Compensatory Tightening of Control (CTC); 10. Size of Public Bureaucracies (SPB); 11. Expenditures on Management (EM); 12. Return to Scale (RS).]
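The qualitative behavior of such a positive feedback loop can be illustrated with a few lines of code. In the sketch below, the loop gain and starting value are arbitrary assumptions chosen only to display the self-reinforcing pattern; they are not estimates drawn from public choice theory:

```python
def simulate_error_growth(periods: int, gain: float = 1.2, start: float = 1.0):
    """Trace management error through repeated passes of the cycle:
    suppression (6) -> error (7) -> loss of control (8) -> tightening (9)."""
    errors = [start]
    for _ in range(periods):
        # A loop gain above 1.0 means each pass amplifies the error.
        errors.append(errors[-1] * gain)
    return errors

path = simulate_error_growth(10)

# The increments themselves grow each period: a curvilinear, not linear, trend,
# which is the growth pattern the cyclic argument predicts.
increments = [later - earlier for earlier, later in zip(path, path[1:])]
print(all(j > i for i, j in zip(increments, increments[1:])))  # True
```

With a gain at or below 1.0 the loop would dampen instead of amplify; the theory's claim that tightened control magnifies error is, in these terms, a claim that the loop gain exceeds one.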
If we had not used theory-mapping procedures, it is doubtful that we would have uncovered the mixture of convergent, divergent, serial, and cyclic arguments contained in the structure of the theory. Particular causal assumptions—for example, the assumption that the size of public bureaucracies may combine with compensatory tightening of control to repress unfavorable information—are not clearly stated in the original passage. Because theory mapping makes the structure of claims and assumptions explicit, it provides us with opportunities to use techniques of theoretical modeling.

Theoretical Modeling

Theoretical modeling refers to a broad range of techniques and assumptions for constructing simplified representations (models) of theories. Modeling is an essential part of theoretical forecasting, because analysts seldom make theoretical forecasts directly from theory. While analysts may begin with theories, they must develop models of these theories before they can actually forecast future events. Theoretical modeling is essential because theories are frequently so complex that they must be simplified before they may be applied to policy problems, and because the process of analyzing data to assess the plausibility of a theory involves constructing and testing models of theories, not the theories themselves. In the last chapter, we compared and contrasted models in terms of their aims (descriptive, normative), forms of expression (verbal, symbolic, procedural), and methodological functions (surrogates, perspectives). The majority of theoretical forecasting models are primarily descriptive, because they seek to predict rather than optimize some valued outcome. For the most part, these models are also expressed symbolically, that is, in the form of mathematical symbols and equations. Note that we have already used several symbolic models in connection with extrapolative forecasting, for example, the regression equation or model [Yt = a + b(x)].
Although it is not a causal model (because no explicit causal arguments are offered), it is nevertheless expressed symbolically. In public policy analysis, there are a number of standard forms of symbolic models which assist in making theoretical forecasts: causal models, linear programming models, input-output models, econometric models, microeconomic models, and system dynamics models.36 As it is beyond the scope of this book to detail each of these model forms, we confine our attention to causal models. In our review, we outline the major assumptions, strengths and limitations, and applications of causal modeling.

Causal Modeling

Causal models are simplified representations of theories that attempt to explain and predict the causes and consequences of public policies. The basic assumption of causal models is that covariations between two or more variables—for example, covariations that show that increases in per capita income occur in conjunction with increases in welfare expenditures in American states—are a reflection of underlying generative powers (causes) and their consequences (effects). The relation between cause and effect is expressed by laws and propositions contained within a theory and modeled by the analyst. Returning to our illustration from public choice theory, observe that the statement "The proportion of total effort invested in management activities is determined by the size of public organizations" is a theoretical proposition. A model of that proposition might be Y = a + b(X), where Y is the ratio of management to nonmanagement personnel, a and b are constants, and X is the total number of employees in public organizations of different sizes.

36 For a discussion of these and other standard model forms, see Martin Greenberger and others, Models in the Policy Process (New York: Russell Sage Foundation, 1976), Chapter 4; and Saul I. Gass and Roger L. Sisson, eds., A Guide to Models in Governmental Planning and Operations (Washington, DC: U.S.
Environmental Protection Agency, 1974). While these models are here treated as descriptive and symbolic (using the definitions of these terms provided in Chapter 3), some (e.g., linear programming) are often treated as normative. Similarly, others (e.g., system dynamics) are typically treated as procedural (or simulation) models. This underscores the point that the distinctions among types of models are relative and not absolute.

The strength of causal models is that they force analysts to make causal assumptions explicit. The limitation of causal models lies in the tendency of analysts to confuse covariations uncovered through statistical analysis with causal arguments. Causal inferences always come from outside a model, that is, from laws, propositions, or assumptions within some theory. In the words of Sewall Wright, one of the early pioneers in causal modeling, causal modeling procedures are "not intended to accomplish the impossible task of deducing causal relations from the values of the correlation coefficients."37 Causal modeling has been used to identify the economic, social, and political determinants of public policies in issue areas ranging from transportation to health, education, and welfare.38 One of the major claims of research based on causal modeling is that differences in political structures (e.g., single versus multiparty polities) do not directly affect such policy outputs as educational and welfare expenditures. On the contrary, differences in levels of socioeconomic development (income, industrialization, urbanization) determine differences in political structures, which in turn affect expenditures for education and welfare. This conclusion is controversial because it appears to contradict the commonly shared assumption that the content of public policy is determined by structures and processes of politics, including elections, representative mechanisms, and party competition.
One of the main statistical procedures used in causal modeling is path analysis, a specialized approach to linear regression that uses multiple (rather than single) independent variables. In using path analysis, we hope to identify those independent variables (e.g., income) that singly and in combination with other variables (e.g., political participation) determine changes in a dependent variable (e.g., welfare expenditures). An independent variable is presumed to be the cause of a dependent variable, which is presumed to be its effect. Estimates of cause and effect are called path coefficients, which express one-directional (recursive) causal relationships among independent and dependent variables.

A standard way of depicting causal relationships is the path diagram. A path diagram looks very much like the arrow diagram used to map public choice theory, except that a path diagram contains estimates of the strength of the effects of independent on dependent variables. A path diagram has been used to model part of public choice theory in Figure 4.14. The advantage of path analysis and causal modeling is that they permit forecasts that are based on explicit theoretical assumptions about causes (the number of employees in individual public organizations) and their consequences (the ratio of managerial to nonmanagerial staff and costs in tax dollars per unit of public service). The limitation of these procedures, as already noted, is that they are not designed for the impossible task of inferring causation from estimates of relationships among variables. Although the absence of a relationship may be sufficient to infer that causation is not present, only theory permits us to make causal inferences and, hence, predictions.

37 Sewall Wright, "The Method of Path Coefficients," Annals of Mathematical Statistics 5 (1934): 193; quoted in Fred N. Kerlinger and Elazar J. Pedhazur, Multiple Regression in Behavioral Research (New York: Holt, Rinehart and Winston, 1973), p. 305.
38 For a review, see Thomas R. Dye and Virginia H. Gray, "Symposium on Determinants of Public Policy: Cities, States, and Nations," Policy Studies Journal 7, no. 4 (summer 1979): 279-301.

[Figure 4.14 Path diagram illustrating a model of public choice theory. Variables: 1. Employees per Public Organization (EPO); 2. Ratio of Managers to Non-Managers (RMN); 3. Costs in Tax Dollars per Unit of Service (CUS). Note: The symbol p designates a causal path, and the subscripts specify the direction of causation; p31 means that variable 3 is caused by variable 1. Variables that have no antecedent cause are called exogenous variables (i.e., their cause is external to the system), while all others are called endogenous (i.e., their cause is internal to the system). The symbol e (sometimes designated as u) is an error term, defined as the unexplained (residual) variance in an endogenous variable that is left over after taking into account the effects of the endogenous variable that precedes it in the path diagram. Error terms should be uncorrelated with each other and with other variables.]

Regression Analysis

A useful technique for estimating relationships among variables in theoretical forecasting models is regression analysis. Regression analysis, which has already been considered in slightly modified form in our discussion of trend estimation, is a general statistical procedure that yields estimates of the pattern and magnitude of a relationship between a dependent variable and one or more independent variables. When regression analysis is performed with one independent variable, it is called simple regression; if there are two or more independent variables, it is called multiple regression. While many theoretical forecasting problems require multiple regression, we limit ourselves to simple regression in the remainder of this section.39 Regression analysis is particularly useful in theoretical modeling.
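As a bridge between path analysis and the regression procedures discussed next, the path coefficients of a simple recursive chain like that in Figure 4.14 (EPO to RMN to CUS) can be estimated as standardized regression slopes. The observations below are invented for illustration only; with a single predictor, the standardized slope equals the correlation coefficient:

```python
import statistics as st

def path_coefficient(x, y):
    """Standardized slope of y regressed on x; with one predictor this
    equals the Pearson correlation between x and y."""
    zx = [(v - st.mean(x)) / st.pstdev(x) for v in x]
    zy = [(v - st.mean(y)) / st.pstdev(y) for v in y]
    return sum(a * b for a, b in zip(zx, zy)) / len(x)

# Invented observations for five public organizations (illustrative only).
epo = [50, 120, 200, 350, 500]         # 1. employees per organization
rmn = [0.10, 0.14, 0.18, 0.25, 0.30]   # 2. ratio of managers to non-managers
cus = [1.2, 1.5, 1.9, 2.6, 3.1]        # 3. cost per unit of service ($)

p21 = path_coefficient(epo, rmn)  # path p21: variable 2 caused by variable 1
p32 = path_coefficient(rmn, cus)  # path p32: variable 3 caused by variable 2
```

The estimates say nothing, by themselves, about causal direction; the decision to regress RMN on EPO rather than the reverse comes from the theory being modeled, exactly as the text cautions.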
It provides summary measures of the pattern of a relationship between an independent and dependent variable. These summary measures include a regression line, which permits us to estimate values of the dependent variable simply by knowing values of the independent variable, and an overall measure of the vertical distances of observed values from the regression line. A summary measure of these distances, as we shall see, permits us to calculate the amount of error contained in a forecast. Because regression analysis is based on the principle of least squares, it has a special advantage already noted in connection with linear trend estimation. It provides the one best "fit" between data and the regression line by ensuring that the squared distances between observed and estimated values represent a minimum or least value.40

A second advantage of regression analysis is that it forces the analyst to decide which of two (or more) variables is the cause of the other, that is, to specify the independent (cause) and dependent (effect) variable. To make decisions about cause and effect, however, analysts must have some theory as to why one variable should be regarded as the cause of another. Although regression analysis is particularly well suited to problems of predicting effects from causes, the best that regression analysis can do (and it does this well) is to provide estimates of relationships predicted by a theory. Yet it is the theory and its simplified representation (model), and not regression analysis, that do the predicting. Regression analysis can only provide estimates of relationships between variables that, because of some theory, have been stated in the form of predictions.41 For this reason, analysts should employ theory-mapping procedures before using regression analysis.

39 For a thorough and readable treatment of multiple regression and related techniques, see Kerlinger and Pedhazur, Multiple Regression in Behavioral Research.
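The least-squares procedure itself is compact. The sketch below fits the regression line Y = a + b(X) to hypothetical mileage and maintenance-cost data (the figures are invented for illustration; the chapter's own worksheet data are not reproduced here), using the standard mean-deviation formulas, in which b equals the sum of the cross-products of deviations divided by the sum of squared deviations of X:

```python
# Hypothetical data: annual miles per vehicle (in thousands) and annual
# maintenance costs per vehicle (in thousands of dollars).
miles = [20, 30, 40, 55, 65, 80]
costs = [0.7, 1.1, 1.3, 1.8, 2.1, 2.6]

n = len(miles)
x_bar = sum(miles) / n
y_bar = sum(costs) / n

# Mean deviations: differences between each value and the average of X or Y.
x_dev = [x - x_bar for x in miles]
y_dev = [y - y_bar for y in costs]

# Least-squares slope and intercept: b = sum(xy) / sum(x^2), a = Ybar - b(Xbar).
b = sum(xd * yd for xd, yd in zip(x_dev, y_dev)) / sum(xd ** 2 for xd in x_dev)
a = y_bar - b * x_bar

# Point estimate for a vehicle driven 70 (thousand) miles per year.
y_70 = a + b * 70
```

Because the fitted line always passes through the point of means, a + b(X̄) returns Ȳ exactly; that identity is a quick sanity check on any hand or machine calculation.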
To illustrate the application of regression analysis to problems of theoretical forecasting, let us suppose that municipal policy makers wish to determine the future maintenance costs of police patrol vehicles under two alternative policies. One policy involves regular police patrols for purposes of traffic and crime control. In 1980 the total maintenance costs for the ten patrol cars were $18,250, or $1,825 per vehicle. The total mileage for the ten vehicles was 535,000 miles, that is, 53,500 miles per vehicle. A new policy now being considered is one that would involve "high-impact" police patrols as a way to create greater police visibility, respond more rapidly to citizens' calls for assistance, and, ultimately, deter crime by increasing the probability that would-be offenders are apprehended.42 Local policy makers are interested in any means that will reduce crime. Yet there are increasing gaps between revenues and expenditures, and several citizens' groups have pressed for a cutback in municipal employees. Policy makers need some way to forecast, on the basis of their own mileage and maintenance records, how much it will cost if several of the ten vehicles are driven an additional 15,000 miles per year. This forecasting problem may be effectively dealt with by using regression analysis. The relationship between cause and effect is reasonably clear, and the primary determinant of maintenance costs is vehicle usage as measured by miles driven.43

40 If this point is not clear, you should return to the section on extrapolative forecasting and review Figure 4.8.
41 Recall the distinction between surrogate and perspective models in Chapter 3 and the example of the annual rainfall rate and reservoir depth. This and other examples show vividly that regression analysis, apart from its superiority in making estimates, cannot answer questions about which variable predicts another.
A municipal policy analyst might therefore plot the values of the independent (X) and dependent (Y) variables on a scatter diagram (scatterplot), which will show the pattern of the relationship (linear-nonlinear), the direction of the relationship (positive-negative), and the strength of the relationship (strong-moderate-weak) between annual mileage per vehicle and annual maintenance costs per vehicle (Figure 4.15). We assume that the pattern, direction, and strength of the relationship is linear, positive, and strong, as shown in Figure 4.15(a). Regression analysis assumes that variables are related in a linear pattern. While linear regression analysis may also be used with negative linear relationships (Figure 4.15[b]), it will produce serious errors if applied to curvilinear patterns such as those illustrated in Figures 4.15(c) and (d). In such cases, as we saw earlier in the discussion of curve fitting, we must either use nonlinear regression or transform values of variables (e.g., by taking their logarithms) prior to applying conventional linear regression techniques. Regression analysis will yield less reliable estimates when data are widely scattered, as in Figure 4.15(e). If data indicate no pattern or relationship (Figure 4.15[f]), the best estimate of changes in Y due to X is the average (mean) of values of Y. Recall from our discussion of trend estimation that the straight line describing the relationship between X and Y variables is called a regression line.

42 Under the sponsorship of the Law Enforcement Assistance Administration, high-impact patrolling has been used, with mixed success, in a number of large municipalities. See Eleanor Chelimsky, "The Need for Better Data to Support Crime Control Policy," Evaluation Quarterly 1, no. 3 (1977): 439-74.
The equation used to fit the regression line to observed data in the scatter diagram is identical to that used to estimate linear trend, with several minor differences. The symbol Yt (the subscript t refers to a trend value) is replaced with Yc (the subscript c refers to a computed, or estimated, value). These different subscripts remind us that here, regression is applied to two substantive variables, whereas in linear trend estimation one of the variables is time. The formula for the regression equation is

Yc = a + b(X)

where

a = the value of Yc when X = 0, called the Y intercept because it shows where the computed regression line intercepts the Y-axis
b = the value of changes in Yc due to a change of one unit in X, called the slope of the regression line because it indicates the steepness of the straight line
X = a given value of the independent variable

A second minor difference between the regression and trend equations is the computation of values of a and b. In regression analysis, we do not work with original values of Y and coded time values of X but with mean deviations. A mean deviation is simply the difference between a given value of X or Y and the average (mean) of all values of X or Y.

43 The question of causation is never certain, as illustrated by this example. Under certain conditions, maintenance costs may affect miles driven, for example, if cost-conscious managers or policemen limit vehicle use in response to knowledge of high expenses. Similarly, annual mileage for certain vehicles may mean larger patrol areas, which in turn may be situated where road conditions are better (or worse) for cars.

[Figure 4.15 Scatter diagrams showing the pattern, direction, and strength of relationships: (a) linear/positive/strong; (b) linear/negative/strong; (c) curvilinear/positive/strong; (d) curvilinear/negative/strong; (e) linear/positive/weak; (f) no pattern or relationship.]

Because the sum of the deviations of observed values from the regression line is always zero [Σ(Y − Yc) = 0], we must square them before we divide by the number of cases to find the average.
Column (5) of Table 4.8 shows squared errors summing to 1.304; dividing this sum by the number of cases gives the average squared error (also called the variance of the estimate), 0.1304. This expression of error, while useful for certain calculations to be described in a moment, is difficult to interpret. Fortunately, the square root of this value is a good approximation of the error that will occur about two-thirds of the time in a regression estimate.44 The square root of the average squared error, called the standard error of estimate, is calculated with the following formula:45

Syx = √[Σ(Y − Yc)² ÷ n]

44 For readers who recall the normal curve, the standard error is the standard deviation of the estimate errors. One standard error is one standard deviation unit to the left or right of the mean of a normal distribution and therefore includes the middle 68.3 percent of all values in the distribution. In the example above, we have assumed for purposes of illustration that data are normally distributed.
45 Note that we divide by n, because the 10 vehicles constitute the entire population of vehicles under analysis. If we were working with a sample, we would divide by n − 1. This is a way of providing an unbiased estimate of the variance in the population sampled. Most packaged computer programs use n − 1, even though sampling may not have occurred.
Table 4.8 Calculation of Standard Error of Estimated Maintenance Costs

Vehicle   Observed          Estimated         Observed Minus    Observed Minus Estimated
Number    Maintenance       Maintenance       Estimated Costs   Costs Squared
(1)       Costs (Y) (2)     Costs (Yc) (3)    (Y − Yc) (4)      (Y − Yc)² (5)
1         0.45              0.838             −0.388            0.151
2         1.25              0.806             0.394             0.155
3         0.85              1.030             −0.230            0.053
4         1.15              1.400             −0.300            0.090
5         1.95              1.606             0.294             0.086
6         1.85              2.150             −0.350            0.123
7         2.35              2.150             0.150             0.023
8         2.65              1.990             0.610             0.372
9         2.55              2.950             −0.450            0.203
10        3.25              2.982             0.218             0.048
n = 10    ΣY = 17.85        ΣYc = 17.85       Σ(Y − Yc) = 0     Σ(Y − Yc)² = 1.304

where

(Y − Yc)² = the squared difference between observed and estimated values of the dependent variable
n = the number of cases

In the example above, the standard error of estimate is

Syx = √[Σ(Y − Yc)² ÷ n] = √(1.304 ÷ 10) = √0.1304 = 0.3611 thousand = $361.11 in annual maintenance costs

What this means is that any actual value of maintenance costs is likely to be $361.11 above or below an estimated value about two-thirds of the time. The figure of $361.11 is one standard error (1 × $361.11); two standard errors are $722.22 (2 × $361.11); and three standard errors are $1,083.33 (3 × $361.11). The standard error gives us a probability interpretation of these values, because one standard error unit will occur about two-thirds (actually 68.3 percent) of the time; two standard error units will occur about 95 percent (actually 95.4 percent) of the time; and three standard error units will occur about 99 percent (actually 99.7 percent) of the time. The standard error of estimate allows us to make estimates that take error into account systematically. Rather than make simple point estimates—that is, estimates that produce a single value of Yc—we can make interval estimates that yield values of Yc expressed in terms of one or more standard units of error.
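Both calculations can be sketched in a few lines. The figures below are the chapter's own totals (a sum of squared errors of 1.304 across the 10 vehicles, and the point estimate Yc = 0.07 + 0.032(150) = 4.87 thousand dollars); the code simply mechanizes the formulas:

```python
import math

def standard_error_of_estimate(sum_sq_errors: float, n: int) -> float:
    """Syx = sqrt( sum((Y - Yc)^2) / n ). Dividing by n treats the ten
    vehicles as the entire population; a sample would divide by n - 1."""
    return math.sqrt(sum_sq_errors / n)

def interval_estimate(y_c: float, s_yx: float, z: float = 2.0):
    """Interval estimate Yc +/- z(Syx); z = 2 spans roughly 95% of cases."""
    return y_c - z * s_yx, y_c + z * s_yx

# Totals from Table 4.8: squared errors sum to 1.304 across 10 vehicles.
s_yx = standard_error_of_estimate(1.304, 10)   # about 0.3611, i.e., $361.11

# Point estimate for a vehicle driven 150 (thousand) miles per year.
y_c = 0.07 + 0.032 * 150                        # 4.87 thousand dollars
low, high = interval_estimate(y_c, s_yx)
# low and high are roughly 4.148 and 5.592, i.e., about $4,148 to $5,592.
```

Choosing z = 1 or z = 3 instead of 2 widens or narrows the confidence claim in the way the text describes: roughly two-thirds, 95 percent, and 99 percent of actual values fall within one, two, and three standard errors of the estimate.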
If we want to be 95 percent confident that our estimate is accurate (this is usually regarded as a minimum standard of accuracy), we will want to express our estimate in terms of two intervals above and below the original point estimate. We can use the following formula to find out how much error we might expect in our point estimate of $4,870 [Yc = 0.07 + 0.032(150) = 4.87 thousand dollars] 95 percent of the time:

YI = Yc ± z(Syx)
   = 4.87 ± 2(0.36111)
   = 4.87 ± 0.72222
   = 4.14778 to 5.59222
   = $4,147.78 to $5,592.22

where

Yc = the point estimate of Y
z = a standard error unit, which takes the value of 2 (95 percent confidence) in this case
Syx = the value (0.36111) of one standard error unit
YI = an interval estimate of Y (±0.72222)

Correlational Analysis

Regression analysis has an additional feature of importance to theoretical forecasting: It permits us to use correlational analysis to interpret relationships. Recall that different scatter diagrams not only exhibit the pattern of a relationship but also its direction and strength (see Figure 4.15). Yet it is desirable that we have measures of the direction and strength of these relationships, not just the visual images contained in scatter diagrams. Two measures that yield such information can be calculated from the worksheet already prepared for estimating future maintenance costs (Table 4.7). The first of these measures is the coefficient of determination (r²), which is a summary measure or index of the amount of variation in the dependent variable explained by the independent variable. The second is the coefficient of correlation (r), which is the square root of the coefficient of determination. The coefficient of correlation, which varies between −1.0 and +1.0, tells us whether the direction of the relationship is positive or negative and how strong it is.
If r is 0, there is no relationship, whereas a value of ±1.0 (i.e., positive or negative) indicates a maximal relationship. Unlike the coefficient of correlation (r), the coefficient of determination (r²) takes a positive sign only and varies between 0.0 and 1.0. The formulas for both coefficients are applied below to our data on maintenance costs and mileage (see Table 4.7).

r² = b(Σxy) ÷ Σy² = 0.032(179.54) ÷ 7.02 = 0.818 or 0.82
r = √r² = √0.818 = 0.90

The analysis carried out by hand in Tables 4.7 and 4.8 has been done with SPSS (Exhibit 4.2). Identical results have been obtained, but with greater accuracy and efficiency. When we inspect the SPSS output, we see that the correlation coefficient (r), designated in the output as R, is 0.905. We also see that the coefficient of determination (r²), designated as R SQUARE, is 0.818. All coefficients are in agreement with our hand calculations. The output also provides the value of the intercept (CONSTANT = 0.07), which conforms to the output in Table 4.7, as does the slope of the regression line (B = 0.032) for the variable annual miles per vehicle. Finally, the ANOVA (Analysis of Variance) table provides a breakdown of that part of the total sum of squares due to the effects of regressing the dependent variable on the independent variable (regression = 5.746). The output also provides that part of the sum of squares due to error, that is, residual = 1.275. If we add the regression and residual sums of squares and divide this total into the regression sum of squares, we obtain the coefficient of determination (r²). Thus,

r² = 5.746 ÷ (5.746 + 1.275) = 0.818 or 0.82

These coefficients tell us that 82 percent of the variance in annual maintenance costs is explained by annual mileage (r² = 0.82) and that the direction and strength of the relationship between these two variables is positive and strong (r = 0.90).
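Both coefficients can be checked from the worksheet totals reported in the text (b = 0.032, Σxy = 179.54, Σy² = 7.02) and, equivalently, from the ANOVA sums of squares in the SPSS output:

```python
import math

# Coefficient of determination from the worksheet totals:
# r^2 = b * sum(xy) / sum(y^2), using mean-deviation sums.
r_squared = 0.032 * 179.54 / 7.02
r = math.sqrt(r_squared)  # coefficient of correlation

# The same quantity from the ANOVA table: regression SS / total SS,
# where total SS = regression SS + residual SS.
r_squared_anova = 5.746 / (5.746 + 1.275)

print(round(r_squared, 3), round(r, 2))  # 0.818 0.9
```

That the two routes agree is no accident: the regression sum of squares is exactly the portion of total variation in Y accounted for by the fitted line, which is what r² measures.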
Regression analysis, when supplemented by interval estimation and coefficients of determination and correlation, provides more information of direct relevance to policy makers than other forms of estimation. For example, simple estimates of average costs per mile, apart from their relative inaccuracy, neither provide summary measures of the direction and strength of relationships nor anticipate forecasting errors in a systematic way. In this and many other cases, regression analysis can assist policy makers in dealing with the uncertainties that accompany efforts to predict the future.

Exhibit 4.2 SPSS Output for Table 4.7

Variables Entered/Removed(b)
Model 1: Variables Entered: Annual Miles Per Vehicle (000s)(a); Method: Enter
a. All requested variables entered
b. Dependent Variable: Annual Maintenance Costs Per Vehicle ($000)

Model Summary
Model 1: R = .905(a); R Square = .818; Adjusted R Square = .796; Std. Error of the Estimate = .39917721
a. Predictors: (Constant), Annual Miles Per Vehicle (000s)

ANOVA(b)
Model 1: Regression: Sum of Squares = 5.746, df = 1, Mean Square = 5.746, F = 36.058, Sig. = .000(a)
         Residual:   Sum of Squares = 1.275, df = 8, Mean Square = .159
         Total:      Sum of Squares = 7.020, df = 9
a. Predictors: (Constant), Annual Miles Per Vehicle (000s)
b. Dependent Variable: Annual Maintenance Costs Per Vehicle ($000)

Coefficients(a)
(Constant):                      B = 6.973E-02, Std. Error = .312, t = .223, Sig. = .829
Annual Miles Per Vehicle (000s): B = 3.200E-02, Std. Error = .005, Beta = .905, t = 6.005, Sig. = .000
a. Dependent Variable: Annual Maintenance Costs Per Vehicle ($000)

JUDGMENTAL FORECASTING

In contrast to extrapolative and theoretical forecasting techniques, where empirical data and/or theories play a central role, judgmental forecasting techniques attempt to elicit and synthesize informed judgments. Judgmental forecasts are often based on arguments from insight, because assumptions about the creative powers of persons making the forecast (and not their social positions per se) are used to warrant claims about the future.
The logic of intuitive forecasting is essentially retroductive, because analysts begin with a conjectured state of affairs (e.g., a normative future such as world peace) and then work their way back to the data or assumptions necessary to support the conjecture. Nevertheless, inductive, deductive, and retroductive reasoning are never completely separable in practice. Judgmental forecasting is therefore often supplemented by various extrapolative and theoretical forecasting procedures.* In this section, we review three intuitive forecasting techniques: Delphi technique, cross-impact analysis, and the feasibility assessment technique. These and other techniques, most of which have been widely used in government and industry, are particularly well suited to the kinds of problems we described (Chapter 3) as messy, ill structured, or squishy. Because one of the characteristics of ill-structured problems is that policy alternatives and their consequences are unknown, it follows that in such circumstances there are no relevant theories and/or empirical data to make a forecast. Under these conditions, judgmental forecasting techniques are particularly useful and even necessary.

The Delphi Technique

Delphi technique is a judgmental forecasting procedure for obtaining, exchanging, and developing informed opinion about future events. Delphi technique (named after Apollo's shrine at Delphi, where Greek oracles sought to foresee the future) was developed in 1948 by researchers at the Rand Corporation and has since been used in many hundreds of forecasting efforts in the public and private sectors.
Originally, the technique was applied to problems of military strategy, but its application gradually shifted to forecasts in other contexts: education, technology, marketing, transportation, mass media, medicine, information processing, research and development, space exploration, housing, budgeting, and the quality of life.47 While the technique originally emphasized the use of experts to verify forecasts based on empirical data, Delphi began to be applied to problems of values forecasting in the 1960s.48 The Delphi technique has been used by analysts in countries ranging from the United States, Canada, and the United Kingdom to Japan and the Soviet Union.

Early applications of Delphi were motivated by a concern with the apparent ineffectiveness of committees, expert panels, and other group processes. The technique was designed to avoid several sources of distorted communication found in groups: domination of the group by one or several persons; pressures to conform to peer group opinion; personality differences and interpersonal conflict; and the difficulty of publicly opposing persons in positions of authority. To avoid these

* In fact, even large-scale econometric models are dependent on judgment. The best sources on this point are Ascher, Forecasting; Ascher, "The Forecasting Potential of Complex Models"; and McNown, "On the Use of Econometric Models."
47 Thorough accounts may be found in Harold Sackman, Delphi Critique (Lexington, MA: D. C. Heath and Company, 1975); and Juri Pill, "The Delphi Method: Substance, Contexts, a Critique and an Annotated Bibliography," Socio-Economic Planning Sciences 5 (1971): 57-71.
48 See, for example, Nicholas Rescher, Delphi and Values (Santa Monica, CA: Rand Corporation, 1969).
problems, early applications of the Delphi technique emphasized five basic principles: (1) anonymity—all experts or knowledgeables respond as physically separated individuals whose anonymity is strictly preserved; (2) iteration—the judgments of individuals are aggregated and communicated back to all participating experts in a series of two or more rounds, thus permitting social learning and the modification of prior judgments; (3) controlled feedback—the communication of aggregated judgments occurs in the form of summary measures of responses to questionnaires; (4) statistical group response—summaries of individual responses are presented in the form of measures of central tendency (usually the median), dispersion (the interquartile range), and frequency distributions (histograms and frequency polygons); and (5) expert consensus—the central aim, with few exceptions, is to create conditions under which a consensus among experts is likely to emerge as the final and most important product.

These principles represent a characterization of conventional Delphi. Conventional Delphi, which dominated the field well into the late 1960s, should be contrasted with policy Delphi. Policy Delphi is a constructive response to the limitations of conventional Delphi and an attempt to create new procedures that match the complexities of policy problems. In the words of one of its chief architects,

Delphi as it originally was introduced and practiced tended to deal with technical topics and seek a consensus among homogeneous groups of experts. The Policy Delphi, on the other hand, seeks to generate the strongest possible opposing views on the potential resolution of a major policy issue...a policy issue is one for which there are no experts, only informed advocates and referees.49

While policy Delphi is based on two of the same principles as conventional Delphi (iteration and controlled feedback), it also introduces several new ones:

1. Selective anonymity.
Participants in a policy Delphi remain anonymous only during the initial rounds of a forecasting exercise. After contending arguments about policy alternatives have surfaced, participants are asked to debate their views publicly.

2. Informed multiple advocacy. The process for selecting participants is based on criteria of interest and knowledgeableness, rather than "expertise" per se. In forming a Delphi group, investigators therefore attempt to select as representative a group of informed advocates as may be possible in specific circumstances.

3. Polarized statistical response. In summarizing individual judgments, measures that purposefully accentuate disagreement and conflict are used. While conventional measures may also be used (median, range, standard deviation), policy Delphi supplements these with various measures of polarization among individuals and groups.

4. Structured conflict. Starting from the assumption that conflict is a normal feature of policy issues, every attempt is made to use disagreement and dissension for creatively exploring alternatives and their consequences. In addition, efforts are made to surface and make explicit the assumptions and arguments that underlie contending positions. The outcomes of a policy Delphi are nevertheless completely open, which means that consensus as well as a continuation of conflict might be results of the process.

5. Computer conferencing. Where possible, computer consoles are used to structure a continuous process of anonymous interaction among physically separated individuals. Computer conferencing eliminates the need for a series of separate Delphi rounds.

49 Murray Turoff, "The Design of a Policy Delphi," Technological Forecasting and Social Change 2, no. 2 (1970): 149-71. See also Harold A. Linstone and Murray Turoff, eds., The Delphi Method: Techniques and Applications (New York: Addison-Wesley, 1975).
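The statistical group response of conventional Delphi (principle 4 above) can be sketched in a few lines. The nine target-year estimates below are invented for illustration; the median and interquartile range are the two summary numbers that would be fed back to panelists before the next round.

```python
import statistics

# Hypothetical first-round estimates from nine panelists of the year in
# which a forecasted event will occur (values invented for illustration)
estimates = [1990, 1992, 1992, 1994, 1995, 1996, 1998, 2000, 2005]

median = statistics.median(estimates)              # central tendency
q1, q2, q3 = statistics.quantiles(estimates, n=4)  # quartiles
iqr = q3 - q1                                      # interquartile range (dispersion)

print(median)  # 1995
print(iqr)     # 7.0
```

Reporting the median and interquartile range, rather than every individual answer, is what gives the technique its controlled feedback: panelists see where the group stands without knowing who said what.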
A policy Delphi may be conducted in a number of different ways, depending on the context and the skill and ingenuity of the persons using the technique. Because policy Delphi is a major research undertaking, it involves a large number of technical questions, including sampling, questionnaire design, reliability and validity, and data analysis and interpretation. Although these questions are beyond the scope of this chapter,50 it is important to obtain an overall understanding of the process of conducting a policy Delphi. A policy Delphi can best be visualized as a series of interrelated steps.51

Step 1: Issue specification. Here the analyst must decide what specific issues should be addressed by informed advocates. For example, if the area of concern is national drug abuse policy, one of the issues might be "The personal use of marijuana should or should not be legalized." One of the central problems of this step is deciding what proportion of issues should be generated by participants, and what proportion should be generated by the analyst. If the analyst is thoroughly familiar with the issue area, it is possible to develop a list of issues prior to the first round of the Delphi. These issues may be included in the first questionnaire, although respondents should be free to add or delete issues.

Step 2: Selection of advocates. Here the key stakeholders in an issue area should be selected. To select a group of advocates who represent conflicting positions, however, it is necessary to use explicit sampling procedures. One

50 Unfortunately, there are few shortcuts to developing methodologically sound questionnaires for use in policy Delphis. On questionnaire construction, the best short introduction is Earl R. Babbie, Survey Research Methods (Belmont, CA: Wadsworth, 1973). On reliability and validity, see Fred N. Kerlinger, Foundations of Behavioral Research, 3d ed. (New York: Holt, Rinehart and Winston, 1985), pp. 442-78.
A useful general-purpose handbook on these questions is Delbert C. Miller, Handbook of Research Design and Social Measurement, 5th ed. (Newbury Park, CA: Sage Publications, 1991).
51 See Turoff, "The Design of a Policy Delphi," pp. 88-94.

way to do this is to use "snowball" sampling. Here the analyst begins by identifying one advocate, usually someone who is known to be influential in the issue area, and asking that person to name two others who agree and disagree most with his or her own position. These two persons are asked to do the same thing, which results in two more persons who agree and disagree maximally, and so on (hence the term "snowball" sample). Advocates should be as different as possible, not only in terms of positions attributed to them but also in terms of their relative influence, formal authority, and group affiliation. The size of the sample might range from ten to thirty persons, although this depends on the nature of the issue. The more complex the issue, and hence the more heterogeneous the participants, the larger the sample must be to be representative of the range of advocates.

Step 3: Questionnaire design. Because a policy Delphi takes place in a series of rounds, analysts must decide which specific items will go in questionnaires to be used in the first and subsequent rounds. Yet the second-round questionnaire can only be developed after the results of the first round are analyzed; the third-round questionnaire must await results of the second round; and so on. For this reason, only the first-round questionnaire can be drafted in advance. Although the first-round questionnaires may be relatively unstructured (with many open-ended items), they may also be relatively structured, provided the analyst has a good idea of the major issues.
First-round questionnaires may include several types of questions: (1) forecasting items requesting that respondents provide subjective estimates of the probability of occurrence of particular events, (2) issue items requesting respondents to rank issues in terms of their importance, (3) goal items that solicit judgments about the desirability and/or feasibility of pursuing certain goals, and (4) options items requesting that respondents identify alternative courses of action that may contribute to the attainment of goals and objectives. Several types of scales are available to measure responses to each of these four types of items. One procedure is to use different scales with different types of items. For example, a certainty scale may be used primarily with forecast items; an importance scale with issue items; desirability and feasibility scales with goal items; and some combination of these scales with options items. The best way to show what is involved is to illustrate the way in which items and scales are presented in a policy Delphi questionnaire. This has been done in Table 4.9. Observe that the scales in Table 4.9 do not permit neutral answers, although "No Judgment" responses are permitted for all items. This restriction on neutral responses is designed to bring out conflict and disagreement, an important aim of the policy Delphi. An important part of the construction of questionnaires is pretesting among a sample of advocates and determining the reliability of responses.

[Table 4.9, a sample policy Delphi questionnaire showing items and scales, is not legible in this reproduction.]

Step 4: Analysis of first-round results.
When the questionnaires are returned after the first round, analysts attempt to determine the initial positions on forecasts, issues, goals, and options. Typically, some items believed to be desirable or important are also believed to be unfeasible, and vice versa. Because there will be conflicting assessments among the various advocates, it is important to use summary measures that not only express the central tendency in the set of responses but also describe the extent of dispersion or polarization. These summary measures not only eliminate items that are uniformly important, undesirable, unfeasible, and/or uncertain but also serve in the second-round questionnaire as a means to communicate to participants the results of the first round. The calculation and subsequent presentation of these summary measures of central tendency, dispersion, and polarization are best illustrated graphically. Assume for purposes of illustration that ten advocates in the first round of a hypothetical policy Delphi provided different assessments of the desirability and feasibility of two drug-control goals: to reduce the supply of illicit drugs and to increase public awareness of the difference between responsible and irresponsible drug use. Let us imagine that the responses were those presented in Table 4.10.

Table 4.10 Hypothetical Responses in First-Round Policy Delphi: Desirability and Feasibility of Drug-Control Objectives

              Goal 1 (Reduce Supply)         Goal 2 (Public Awareness)
Advocate   Desirability    Feasibility    Desirability    Feasibility
1               1               4               1               1
2               4               1               2               2
3               3               3               2               1
4               4               2               1               2
5               1               4               2               1
6               2               3               2               1
7               1               4               1               1
8               4               2               1               2
9               4               1               2               2
10              1               4               1               2
Σ              25              28              15              15
Md            2.5             3.0             1.5             1.5
Mn            2.5             2.8             1.5             1.5
Range         3.0             3.0             1.0             1.0

Note: The median (Md) in a set of scores is the value of the score that falls in the middle when the scores are arranged in order of magnitude.
If the number of scores is even (as above), the median is the value of the score which is halfway between the two middle scores. The median is normally used in place of the mean (Mn) when we do not know if the intervals between measures (e.g., the intervals between 1 and 2 and 3 and 4) are equidistant.

Observe that some respondents (advocates 2, 8, and 9) believe that the goal of reducing the supply of illicit drugs is very undesirable but possibly or definitely feasible, while others (advocates 1, 5, 7, and 10) believe that this goal is very desirable but definitely unfeasible. When we compare these inconsistencies between desirability and feasibility with responses under goal 2 (public awareness), we find much less inconsistency in the latter set of scores. All of this suggests that the responses to goal 1, while much lower in average desirability and feasibility, also reflect the kinds of important conflicts that policy Delphi is specially designed to address. In this case, analysts would not want to eliminate this item. Rather they would want to report these conflicts as part of the second-round instrument, requesting that respondents provide the reasons, assumptions, or arguments that led them to positions that were so different. Another way to bring out such disagreements is to construct and report an average polarization measure, which may be defined as the absolute difference among scores for all combinations of respondents answering a particular question.52

Step 5: Development of subsequent questionnaires. Questionnaires must be developed for second, third, fourth, or fifth rounds (most policy Delphis involve three to five rounds). As indicated previously, the results of prior rounds are used as a basis for subsequent ones.
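The Σ, Md, Mn, and Range rows of Table 4.10 can be reproduced in a few lines. The advocate scores below are transcribed from the table, so this is simply a sketch of the first-round tabulation an analyst would perform.

```python
import statistics

# Advocate scores from Table 4.10 (ten advocates per column)
columns = {
    "goal1_desirability": [1, 4, 3, 4, 1, 2, 1, 4, 4, 1],
    "goal1_feasibility":  [4, 1, 3, 2, 4, 3, 4, 2, 1, 4],
    "goal2_desirability": [1, 2, 2, 1, 2, 2, 1, 1, 2, 1],
    "goal2_feasibility":  [1, 2, 1, 2, 1, 1, 1, 2, 2, 2],
}

for name, scores in columns.items():
    print(name,
          sum(scores),                  # the Sigma row
          statistics.median(scores),    # the Md row
          statistics.mean(scores),      # the Mn row
          max(scores) - min(scores))    # the Range row
```

Running this reproduces the table's summary rows exactly (e.g., 25, 2.5, 2.5, and 3 for goal 1 desirability), which is the kind of controlled feedback reported back to advocates in round two.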
One of the most important aspects of policy Delphi occurs in these rounds, because it is here that advocates have an opportunity to observe the results of immediately preceding rounds and offer explicit reasons, assumptions, and arguments for their respective judgments. Note that later rounds do not simply include information about central tendency, dispersion, and polarization; they also include a summary of arguments offered for the most important conflicting judgments. In this way, the policy Delphi promotes a reasoned debate and maximizes the probability that deviant and sometimes insightful judgments are not lost in the process. By the time the last round of questionnaires is completed, all advocates have had an opportunity to state their initial positions on forecasts, issues, goals, and options; to examine and evaluate the reasons why their positions differ from those of others; and to reevaluate and change their positions.

Step 6: Organization of group meetings. One of the last tasks is to bring advocates together for a face-to-face discussion of the reasons, assumptions, and arguments that underlie their various positions. This face-to-face meeting,

52 Ibid., p. 92; and Jerry B. Schneider, "The Policy Delphi: A Regional Planning Application," Technological Forecasting and Social Change 3, no. 4 (1972). The total number of combinations (C) is calculated by the formula C = k(k - 1)/2, where k is the number of responses to a particular item. The average difference is obtained by computing the numerical distance between all combinations, adding them up, and dividing by the number of respondents. This requires that we ignore the signs (plus or minus). Another procedure is to retain the signs, square each difference (eliminating minus signs), sum, and average.
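The average polarization measure can be sketched as follows. Per the footnoted description, the absolute differences between all C = k(k - 1)/2 pairs of responses are summed and divided by the number of respondents; the scores are the desirability ratings for the two goals in Table 4.10.

```python
from itertools import combinations

def average_polarization(scores):
    """Sum of absolute differences over all C = k(k-1)/2 pairs of
    responses, divided by the number of respondents (per the footnote)."""
    total = sum(abs(a - b) for a, b in combinations(scores, 2))
    return total / len(scores)

goal1_desirability = [1, 4, 3, 4, 1, 2, 1, 4, 4, 1]  # Table 4.10 scores
goal2_desirability = [1, 2, 2, 1, 2, 2, 1, 1, 2, 1]

print(average_polarization(goal1_desirability))  # 7.3 -- sharply polarized
print(average_polarization(goal2_desirability))  # 2.5 -- relative agreement
```

The much larger value for goal 1 quantifies what inspection of the table suggested: advocates cluster at the extremes on reducing supply but broadly agree on public awareness, so the goal 1 item is the one to carry forward into the second round.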
because it occurs after all advocates have had a chance to reflect on their positions and those of others, may create an atmosphere of informed confidence that would not be possible in a typical committee setting. Face-to-face discussions also create conditions where advocates may argue their positions intensely and receive immediate feedback.

Step 7: Preparation of final report. There is no guarantee that respondents will have reached consensus, but there is much reason to hope that creative ideas about issues, goals, options, and their consequences will be the most important product of a policy Delphi. The report of final results will therefore include a review of the various issues and options available, taking care that all conflicting positions and underlying arguments are presented fully. This report may then be passed on to policy makers, who may use the results of the policy Delphi as one source of information in arriving at decisions.

Cross-Impact Analysis

Delphi technique is closely related to another widely used judgmental forecasting technique called cross-impact analysis. Cross-impact analysis, developed by the same Rand Corporation researchers responsible for early applications of conventional Delphi,53 is a technique that elicits informed judgments about the probability of occurrence of future events on the basis of the occurrence or nonoccurrence of related events. The aim of cross-impact analysis is to identify events that will facilitate or inhibit the occurrence of other related events. Cross-impact analysis was expressly designed as a supplement to conventional Delphi. In the words of two of its early developers,

A shortcoming of [Delphi] and many other forecasting methods... is that potential relationships between the forecasted events may be ignored and the forecasts might well contain mutually reinforcing or mutually exclusive items.
[Cross-impact analysis] is an attempt to develop a method by which the probabilities of an item in a forecasted set can be adjusted in view of judgments relating to the potential interactions of the forecasted items.54

The basic analytical tool used in cross-impact analysis is the cross-impact matrix, a symmetrical table that lists potentially related events along row and column headings (Table 4.11). The type of forecasting problem for which cross-impact analysis is particularly appropriate is one involving a series of interdependent events. Table 4.11, for example, expresses the interdependencies among events that follow the mass production of automobiles.

Table 4.11 Cross-Impact Matrix Illustrating Consequences of Mass Automobile Use

Events   E1   E2   E3   E4   E5   E6   E7
E1        -    +    0    0    0    0    0
E2        ⊕    -    +    0    0    0    0
E3        ⊕    0    -    +    0    0    0
E4        0    0    0    -    +    0    0
E5        0    0    0    0    -    +    0
E6        0    0    0    0    0    -    +
E7        0    0    0    ⊕    ⊕    0    -

E1 = mass production of automobiles
E2 = ease of travel
E3 = patronization of large suburban stores
E4 = alienation from neighbors
E5 = high social-psychological dependence on immediate family members
E6 = inability of family members to meet mutual social-psychological demands
E7 = social deviance in form of divorce, alcoholism, juvenile delinquency

Note: A plus (+) indicates direct one-way effects; a zero (0) indicates no effect; a circled plus sign (⊕) indicates positive feedback effects.
Source: Adapted from Joseph Coates, "Technology Assessment: The Benefits, the Costs, the Consequences," Futurist 5, no. 6 (December 1971).

Observe the string of direct positive effects (represented by "+" signs) directly above the blank cells in the main diagonal. These effects are first-, second-, third-, fourth-, fifth-, and sixth-order impacts of mass automobile production.

53 These researchers include Olaf Helmer, T. J. Gordon, and H. Hayward. Helmer is credited with coining the term cross-impact. See T. J. Gordon and H. Hayward, "Initial Experiments with the Cross-Impact Matrix Method of Forecasting," Futures 1, no. 2 (1968): 101.
54 Ibid., p. 100.
Note also the positive feedback effects of (E2 - E1), (E3 - E1), (E7 - E4), and (E7 - E5). These positive feedback effects suggest that ease of travel and patronization of large suburban stores may themselves affect the mass production of automobiles, for example, by increasing demand. Similarly, various forms of social deviance may intensify existing levels of alienation from neighbors and create even greater social-psychological dependence on family members.

This illustration purposefully oversimplifies linkages among events. In many other situations, the linkage of one event with another is not unambiguously positive; nor do events follow one another so neatly in time. Moreover, many events may be negatively linked. For this reason, cross-impact analysis takes into account three aspects of any linkage:

1. Mode (direction) of linkage. This indicates whether one event affects the occurrence of another event and, if so, whether the direction of this effect is positive or negative. Positive effects occur in what is called the enhancing mode, while negative ones fall into a category called the inhibiting mode. A good example of linkages in the enhancing mode is increased gasoline prices provoking research and development on synthetic fuels. The arms race and its effects on the availability of funds for urban redevelopment are an illustration of linkages in the inhibiting mode. The unconnected mode refers to unconnected events.

2. Strength of linkage. This indicates how strongly events are linked, whether in the enhancing or inhibiting mode. Some events are strongly linked, meaning that the occurrence of one event substantially changes the likelihood of another's occurring, while other events are weakly linked. In general, the weaker the linkage, the closer it comes to the unconnected mode.

3. Elapsed time of linkage. This indicates the amount of time (weeks, years, decades) between the occurrence of linked events.
Even though events may be strongly linked, either in the enhancing or inhibiting modes, the impact of one event on the other may require a considerable period of time. For example, the linkage between the mass production of automobiles and social deviance required an elapsed time of several decades.

Cross-impact analysis works on the principle of conditional probability. Conditional probability states that the probability of occurrence of one event is dependent on the occurrence of some other event, that is, the two events are not independent. Conditional probabilities may be denoted by P(E1/E2), which is read "the probability of the first event (E1), given the second event (E2)." For example, the probability (P) of being elected president (E1) after a candidate has received the party nomination (E2) may be 0.5, that is, there is a fifty-fifty chance of winning the election [P(E1/E2) = 0.50]. However, the probability (P) of being elected (E1) without the party's nomination (E2) is low, because party nomination is almost a prerequisite for the presidency [P(E1/not E2) = 0.20].

This same logic is extended to cross-impact analysis. The construction of a cross-impact matrix begins with the question: "What is the probability that a certain event (E) will occur prior to some specified point in time?" For example, an extrapolative forecast using time-series analysis may provide an interval estimate that claims that there is a 90 percent (0.9) probability that total energy consumption will exceed 100.0 quadrillion BTUs by 1995. The next question is "What is the probability that this event (E2) will occur, given that another event (E1) is certain to precede it?" For example, if presently unpredictable factors (new agreements, political turmoil, accidents) result in a doubling of oil prices (E1) by 1995, the probability of energy consumption at the level of 100.0 quadrillion BTUs may be reduced to 0.5.
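The definitional relationship behind this notation, P(E1/E2) = P(E1 and E2)/P(E2), can be shown with a toy calculation. The joint and marginal probabilities below are invented, chosen only so that the result matches the fifty-fifty election example.

```python
def conditional_probability(p_joint, p_condition):
    """P(E1/E2) = P(E1 and E2) / P(E2)."""
    return p_joint / p_condition

# Hypothetical numbers: suppose a candidate has a 0.6 chance of winning
# the nomination (E2) and a 0.3 chance of winning both the nomination
# and the presidency (E1 and E2).
p_e2 = 0.6
p_e1_and_e2 = 0.3

print(conditional_probability(p_e1_and_e2, p_e2))  # 0.5, i.e., P(E1/E2) = 0.50
```

Cross-impact analysis asks experts to supply such conditional probabilities directly as judgments rather than deriving them from joint frequencies, but the underlying arithmetic is the same.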
In this case, note that "objective" data used to make the original extrapolative forecast are combined with a "subjective" judgment about the conditional probability of the originally projected event, given the prior occurrence of the first.

The construction of a cross-impact matrix for any reasonably complex problem involves many thousands of calculations and requires a computer. Many applications of cross-impact analysis in areas of science and technology policy, environmental policy, transportation policy, and energy policy have involved more than 1,000 separate iterations (called "games" or "plays") to determine the consistency of the cross-impact matrix, that is, to make sure that every sequence of conditional probabilities has been taken into account before a final probability is calculated for each event.

Despite the technical complexity of cross-impact analysis, the basic logic of the technique may be readily grasped by considering a simple illustration. Suppose that a panel of experts assembled for a conventional Delphi provides estimates of the probability of occurrence of four events (E1 ... E4) for future years. Suppose further that these four events are an increase in the price of gasoline to $3 per gallon (E1), the "gentrification" of central city neighborhoods by former suburbanites (E2), a doubling of reported crimes per capita (E3), and the mass production of short-distance battery-powered autos (E4). The probabilities attached to these four events are, respectively, P1 = 0.5, P2 = 0.5, P3 = 0.6, and P4 = 0.2. Given these subjective estimates, the forecasting problem is this: Given that one of these events occurs (i.e., P = 1.0, or 100%), how will the probabilities of the other events change?55 Table 4.12 illustrates the first round (play) in the construction of a cross-impact matrix.
Observe that the assumption that gasoline per gallon will go to $3 produces revised subjective probabilities for "gentrification" (an increase from 0.5 to 0.7), reported crimes per capita (an increase from 0.6 to 0.8), and the mass manufacture of electric autos (an increase from 0.2 to 0.5).

Table 4.12 Hypothetical Illustration of the First Round (Play) in a Cross-Impact Matrix

                                            Then the Changed Probability of
If This Event Occurs (P = 1.0)              Occurrence of These Events Is:
                         Original
Events                   Probabilities (P)    E1     E2     E3     E4
E1 Gas to $3 per gallon  P1 = 0.5             -      0.7    0.8    0.5
E2 "Gentrification"      P2 = 0.5             0.4    -      0.7    0.4
E3 Crime doubles         P3 = 0.6             0.5    0.5    -      0.1
E4 Electric autos        P4 = 0.2             0.5    0.5    0.4    -

These changes reflect the enhancing linkages discussed earlier. By contrast, other linkages are inhibiting. For example, the gentrification of central cities makes it less likely that gasoline will increase to $3 per gallon (note the decrease from 0.5 to 0.4), on the assumption that oil companies will become more price competitive when former suburbanites drive less. Finally, there are also unconnected linkages: increased crime and mass-produced electric autos exert no influence on the probability of gas prices increasing to $3 per gallon or on the process of gentrification (note that original probabilities remain constant at 0.5). The advantage of the cross-impact matrix is that it enables the analyst to discern interdependencies that otherwise may have gone unnoticed. Cross-impact analysis also permits the continuous revision of prior probabilities on the basis of new assumptions or evidence.

55 Alternatively, we can ask the same question about the nonoccurrence of an event. Therefore, it is usually necessary to construct two matrices: one for occurrences and one for nonoccurrences. See James F. Dalby, "Practical Refinements to the Cross-Impact Matrix Technique of Technological Forecasting," in Industrial Applications of Technological Forecasting (New York: Wiley, 1971), pp. 259-73.
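A single play of this kind can be sketched in a few lines. The probabilities are those given in the text for the case in which E1 (gas to $3 per gallon) is assumed to occur; the dictionary-based play function is a deliberate simplification of the iterative procedures used in full cross-impact software.

```python
# Initial subjective probabilities for the four events (from the text)
initial_p = {"E1_gas_to_3_dollars": 0.5,
             "E2_gentrification": 0.5,
             "E3_crime_doubles": 0.6,
             "E4_electric_autos": 0.2}

# Revised conditional probabilities P(Ek/E1), assuming E1 occurs
given_e1 = {"E2_gentrification": 0.7,
            "E3_crime_doubles": 0.8,
            "E4_electric_autos": 0.5}

def play(initial, conditionals, occurred):
    """One play of a cross-impact matrix: set the occurring event to
    certainty (P = 1.0) and replace the remaining events' probabilities
    with their conditional probabilities given that event."""
    revised = dict(initial)
    revised[occurred] = 1.0
    revised.update(conditionals)
    return revised

revised = play(initial_p, given_e1, "E1_gas_to_3_dollars")
print(revised["E2_gentrification"])  # 0.7 (up from 0.5, enhancing linkage)
print(revised["E3_crime_doubles"])   # 0.8 (up from 0.6, enhancing linkage)
print(revised["E4_electric_autos"])  # 0.5 (up from 0.2, enhancing linkage)
```

A full analysis would repeat such plays for every event (and its nonoccurrence) until the set of conditional probabilities is mutually consistent, which is why real applications require a computer.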
If new empirical data become available for some of the events (e.g., crime rates), the matrix may be recalculated. Alternatively, different assumptions may be introduced—perhaps as a consequence of a policy Delphi that yields conflicting estimates and arguments—to determine how sensitive certain events are to changes in other events. Finally, information in a cross-impact matrix may be readily summarized at any point in the process.

Cross-impact matrices may be used to uncover and analyze those complex interdependencies that we have described as ill-structured problems. The technique is also consistent with a variety of related approaches to intuitive forecasting, including technology assessment, social impact assessment, and technological forecasting.56 As already noted, cross-impact analysis is not only consistent with conventional Delphi but actually represents its adaptation and natural extension. For example, while cross-impact analysis may be done by single analysts, the accuracy of subjective judgments may be increased by using Delphi panels.

Cross-impact analysis, like the other forecasting techniques discussed in this chapter, has its limitations. First, the analyst can never be sure that all potentially interdependent events have been included in the analysis, which again calls attention to the importance of problem structuring (Chapter 3). There are other techniques to assist in identifying these events, including variations of theory mapping (see Table 4.9) and the construction and graphic presentation of networks of causally related events called relevance trees (see Figure 3.14). Second, the construction and "playing" of a cross-impact matrix is a reasonably costly and time-consuming process, even with the advent of packaged computer programs and high-performance computer technology.
Third, there are technical difficulties associated with matrix calculations (e.g., nonoccurrences are not always analyzed), although many of these problems have been resolved.57 Finally, and most important, existing applications of cross-impact analysis suffer from one of the same weaknesses as conventional Delphi, namely, an unrealistic emphasis on consensus among experts. Most of the forecasting problems for which cross-impact analysis is especially well suited are precisely the kinds of problems where conflict, and not consensus, is widespread. Problem-structuring methods are needed to surface and debate the conflicting assumptions and arguments that underlie subjective conditional probabilities.

56 On these related approaches, see Francois Hetman, Society and the Assessment of Technology (Paris: Organization for Economic Cooperation and Development, 1973); and Kurt Finsterbusch and C. P. Wolf, eds., Methodology of Social Impact Assessment (Stroudsburg, PA: Dowden, Hutchinson & Ross, 1977).
57 See Dalby, "Practical Refinements to Cross-Impact Matrix Technique," pp. 265-73.

Feasibility Assessment

The final judgmental forecasting procedure we consider in this chapter is one that is expressly designed to produce conjectures about the future behavior of policy stakeholders. This procedure, most simply described as the feasibility assessment technique, assists analysts in producing forecasts about the probable impact of stakeholders in supporting or opposing the adoption and/or implementation of different policy alternatives.58 The feasibility assessment technique is particularly well suited for problems requiring estimates of the probable consequences of attempting to legitimize policy alternatives under conditions of political conflict and the unequal distribution of power and other resources.
The feasibility assessment technique may be used to forecast the behavior of stakeholders in any phase of the policy-making process, including policy adoption and implementation. What makes this technique particularly useful is that it responds to a key problem that we have already encountered in reviewing other intuitive forecasting techniques: typically, there is no relevant theory or available empirical data that permit us to make predictions or projections about the behavior of policy stakeholders. While social scientists have proposed various theories of policy-making behavior that are potentially available as a source of predictions, most of these theories are insufficiently concrete to apply in specific contexts.59

The feasibility assessment technique is one way to respond to a number of concerns about the lack of attention to questions of political feasibility and policy implementation in policy analysis. While problems of policy implementation play a large part in most policy problems, much of contemporary policy analysis pays little attention to this question. What is needed is a systematic way to forecast "the capabilities, interests, and incentives of organizations to implement each alternative."60 In practical terms, this means that the behavior of relevant stakeholders must be forecasted along with the consequences of policies themselves. Only in this way can the analyst ensure that organizational and political factors which may be critical to the adoption and implementation of a policy are adequately accounted for.

The feasibility assessment technique, like other intuitive forecasting procedures, is based on subjective estimates. It may be used by single analysts, or by a group of knowledgeables, much as in a Delphi exercise. Feasibility assessment focuses on several aspects of political and organizational behavior:

1. Issue position. Here the analyst estimates the probability that various stakeholders will support, oppose, or be indifferent to each of two or more policy alternatives. Positions are coded as supporting (+1), opposing (-1), or indifferent (0). A subjective estimate is then made of the probability that each stakeholder will adopt the coded position. This estimate (which ranges from 0 to 1.0) indicates the saliency or importance of the issue to each stakeholder.

2. Available resources. Here the analyst provides a subjective estimate of the resources available to each of the stakeholders in pursuing their respective positions. Available resources include prestige, legitimacy, budget, staff, and access to information and communications networks. Because stakeholders nearly always have positions on other issues for which part of their resources are necessary, available resources should be stated as a fraction of total resources held by the stakeholder. The resource availability scale, expressed as a fraction, varies from 0 to 1.0.

58 The following discussion draws from but modifies Michael K. O'Leary and William D. Coplin, "Teaching Political Strategy Skills with 'The Prince,'" Policy Analysis 2, no. 1 (winter 1976): 145-60. See also O'Leary and Coplin, Everyman's "Prince."
59 These theories deal with elites, groups, coalitions, and aggregates of individuals and leaders. See, for example, Raymond A. Bauer and Kenneth J. Gergen, eds., The Study of Policy Formation (New York: Free Press, 1968); and Daniel A. Mazmanian and Paul A. Sabatier, Implementation and Public Policy (Lanham, MD: University Press of America, 1989).
60 Graham T. Allison, "Implementation Analysis: 'The Missing Chapter' in Conventional Analysis: A Teaching Exercise," in Benefit-Cost and Policy Analysis: 1974, ed. Richard Zeckhauser (Chicago: Aldine Publishing Company, 1975), p. 379.
Note that there may be a high probability that a given stakeholder will support a policy (e.g., 0.9), yet that same stakeholder may have little capability to affect the policy's adoption or implementation. This is typically the result of an overcommitment of resources (prestige, budget, staff) to other issue areas. 3. Relative resource rank. Here the analyst determines the relative rank of each stakeholder with respect to its resources. Relative resource rank, one measure of the "power" or "influence" of stakeholders, provides information about the magnitude of political and organizational resources available to each stakeholder. A stakeholder who commits a high fraction (say, 0.8) of available resources to support a policy may nevertheless be unable to affect significantly a policy's adoption or implementation. This is because some stakeholders have few resources to begin with. Because the purpose of feasibility assessment is to forecast behavior under conditions of political conflict, it is essential to identify as representative and powerful a group of stakeholders as possible. The analyst may identify representative stakeholders from various organizations and organizational levels with different constituencies and with varying levels of resources and roles in the policy-making process. An illustration of the feasibility assessment technique has been provided in Table 4.13. In this example, a policy analyst in a large municipality has completed a study that shows that local property taxes must be raised by an average of 1 percent in order to cover expenditures for the next year. 
Alternatively, expenditures for municipal services must be cut by a comparable amount, an action that will result in the firing of fifteen hundred employees. Because most public employees are unionized, and there has been a threat of a strike for more than two years, the mayor is extremely hesitant to press for a cutback in services. At the same time, local taxpayers' groups are known to be strongly opposed to yet another tax increase, even if it means the loss of some services. Under these conditions, the mayor has requested that the analyst provide a feasibility assessment of the policy alternatives.

Table 4.13 Feasibility Assessment of Two Fiscal Policy Alternatives in a Hypothetical Municipality

(a) Alternative 1 (Tax Increase)

Stakeholder              Coded       Probability   Fraction of     Resource   Feasibility
(1)                      Position    (3)           Resources       Rank       Score
                         (2)                       Available (4)   (5)        (6)
Mayor                    +1          0.2           0.2             0.4         0.016
Council                  -1          0.6           0.7             0.8        -0.336
Taxpayers' association   -1          0.9           0.8             1.0        -0.720
Employees' union         +1          0.9           0.6             0.6         0.324
Mass media               +1          0.1           0.5             0.2         0.010
                                                                   ΣF =       -0.706

Index of total feasibility (TF) = ΣF/n = -0.706/5 = -0.14
Adjusted total feasibility (TFadj) = TF(5/2) = -0.14(2.5) = -0.35

(b) Alternative 2 (Budget Cut)

Stakeholder              Coded       Probability   Fraction of     Resource   Feasibility
(1)                      Position    (3)           Resources       Rank       Score
                         (2)                       Available (4)   (5)        (6)
Mayor                    +1          0.8           0.6             0.4         0.192
Council                  +1          0.4           0.5             0.8         0.160
Taxpayers' association   +1          0.9           0.7             1.0         0.630
Employees' union         -1          0.9           0.8             0.6        -0.432
Mass media               -1          0.1           0.5             0.2        -0.010
                                                                   ΣF =        0.54

Index of total feasibility (TF) = ΣF/n = 0.54/5 = 0.11
Adjusted total feasibility (TFadj) = TF(5/3) = 0.11(1.67) = 0.18

Table 4.13 shows that a tax increase is unfeasible. In fact, the negative sign of the indices (simple and adjusted) at the bottom of the table indicates that there is more opposition than support for a tax increase.
By contrast, the index of total feasibility (adjusted) for the budget cut is positive and in the weak range. The index of total feasibility ranges from -1.0 to +1.0, which means that the feasibility of different policy alternatives is directly comparable. Observe, however, that there is an adjusted total feasibility index. The adjustment is necessary because the maximum value that the index can take depends on its sign (positive or negative) and the number of positive (or negative) positions taken. In Table 4.13(a), for example, the maximum negative value of the index depends on the number of negative positions in relation to positive ones. Imagine that the two negative feasibility scores were "perfect" (i.e., -1.0) and that all other stakeholders were indifferent, giving them feasibility scores of 0.0. In this case, the maximum value of the sum of individual feasibility scores would be -2.0. If we then divide by the total number of stakeholders (n = 5), we produce an index value of -0.40, even though there is maximum opposition to the alternative. For this reason, we must compute a maximum value of TF, which in this case is TFmax = 2/5 = 0.40. To find the adjusted value of the index (TFadj), we simply divide the original value of TF by its maximum value, that is, TFadj = TF/TFmax = -0.40/0.40 = -1.0. The same procedure was used to find the maximum and adjusted positive values of TF in Table 4.13(b). The feasibility assessment technique forces the analyst to make explicit subjective judgments, rather than treat political and organizational questions in a loose or arbitrary fashion. Feasibility assessment also enables analysts to systematically consider the sensitivity of issue positions and available resources to changes in policy alternatives. In the previous example, a smaller tax increase, combined with austerity measures and a plan to increase the productivity of municipal employees, would probably evoke altogether different levels of feasibility.
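The arithmetic behind Table 4.13 can be sketched in a few lines of Python. This is a minimal illustration; the stakeholder values below are those of Table 4.13(a), the proposed tax increase.

```python
# Each stakeholder: (name, coded position, probability, fraction of resources, resource rank).
# Feasibility score = position x probability x fraction x rank.
STAKEHOLDERS = [
    ("Mayor",                  +1, 0.2, 0.2, 0.4),
    ("Council",                -1, 0.6, 0.7, 0.8),
    ("Taxpayers' association", -1, 0.9, 0.8, 1.0),
    ("Employees' union",       +1, 0.9, 0.6, 0.6),
    ("Mass media",             +1, 0.1, 0.5, 0.2),
]

def total_feasibility(stakeholders):
    """Return (TF, TFadj) for a list of (name, position, prob, fraction, rank)."""
    scores = [pos * prob * frac * rank for _, pos, prob, frac, rank in stakeholders]
    tf = sum(scores) / len(scores)
    # The maximum attainable |TF| depends on how many stakeholders share TF's sign,
    # so divide by TFmax = (number of same-sign positions) / n.
    sign = 1 if tf >= 0 else -1
    tf_max = sum(1 for _, pos, *_ in stakeholders if pos == sign) / len(scores)
    return tf, tf / tf_max

tf, tf_adj = total_feasibility(STAKEHOLDERS)
# TF is approximately -0.14 and TFadj approximately -0.35, as in Table 4.13(a).
```

Running the same function on the budget-cut values of Table 4.13(b) yields a TF of about 0.11 and an adjusted index of about 0.18.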
The limitations of the feasibility assessment technique are similar to those already encountered with other judgmental forecasting techniques. The feasibility assessment technique, like conventional Delphi and cross-impact analysis, provides no systematic way of surfacing the assumptions and arguments that underlie subjective judgments. Perhaps the best way to resolve this difficulty is to adopt procedures from policy Delphi, or to use assumptional analysis (Chapter 3). A second limitation of the technique is that it assumes that the positions of stakeholders are independent and that they occur at the same point in time. These assumptions are unrealistic, because they ignore processes of coalition formation over time and the fact that one stakeholder's position is frequently determined by changes in the position of another. To capture the interdependencies of issue positions as they shift over time, we can employ some adaptation of cross-impact analysis. Finally, the feasibility assessment technique, like other judgmental forecasting techniques discussed in this chapter, is most useful under conditions where the complexity of a problem cannot easily be grasped by using available theories or empirical data. For this reason, any attempt to outline its limitations should also consider its potential as a source of creative insight and surprising or counterintuitive findings.

In concluding this chapter, it is important to stress that different approaches to forecasting are complementary. The strength of one approach or technique is very often the weakness or limitation of another, and vice versa. All of this is to say that the logical foundations of each approach are interdependent. Improvements in forecasting are therefore likely to result from the creative combination of different approaches and techniques, that is, from multimethod forecasting.
Multimethod forecasting combines multiple forms of logical reasoning (inductive, deductive, retroductive), multiple bases (extrapolation, theory, judgment), and multiple objects (the content and consequences of new and existing policies and the behavior of policy stakeholders). Multimethod forecasting recognizes that neither precision nor creativity is an end in itself. What appears to be a creative or insightful conjecture may lack plausibility and turn out to be pure speculation or quackery, while highly precise projections or predictions may simply answer the wrong question. The ultimate justification for a forecast is whether it provides plausibly true results. In the words of an accomplished policy analyst and former assistant secretary in the Department of Defense, "It is better to be roughly right than exactly wrong."61

CHAPTER SUMMARY

This chapter has provided an overview of the process of forecasting, highlighting the nature, types, and uses of forecasting in policy analysis. After comparing approaches based on inductive, deductive, and retroductive reasoning, the chapter presented specific methods and techniques of extrapolative, theoretical, and judgmental forecasting. The ultimate justification for a forecast is whether it yields plausibly true beliefs about the future, not whether it is based on a particular type of method, quantitative or qualitative. In this and other respects, it is better to be approximately right than exactly wrong.
LEARNING OBJECTIVES

• distinguish projections, predictions, and conjectures
• understand the effects of temporal, historical, and institutional contexts on forecast accuracy
• contrast potential, plausible, and normative futures
• describe objects, bases, methods, and products of forecasts
• contrast and evaluate extrapolative, theoretical, and judgmental forecasting methods and techniques
• use statistical software to make point and interval estimates of future policy outcomes
• analyze a case in policy forecasting involving issues of environmental justice

61 Alain C. Enthoven, "Ten Practical Principles for Policy and Program Analysis," in Benefit-Cost and Policy Analysis: 1974, ed. Zeckhauser, p. 459.

KEY TERMS AND CONCEPTS

catastrophe (158)
chaos theory (161)
conjecture (130)
cross-impact matrix (187)
deductive reasoning (139)
extrapolative forecasting (141)
goal (134)
inductive reasoning (138)
judgmental forecasting (179)
linearity (136)
nonlinearity (150)
normative futures (133)
objective (134)
plausible futures (133)
political feasibility (192)
potential futures (133)
prediction (130)
projection (130)
retroductive reasoning (139)
theoretical forecasting (161)

REVIEW QUESTIONS

1. What are the three forms of forecasting and how are they related to bases of forecasts?
2. In addition to promoting greater understanding of the future, what other aims can be achieved through forecasting?
3. To what extent are econometric methods more accurate than methods of extrapolative and judgmental forecasting? Explain.
4. How do the institutional, temporal, and historical contexts of forecasts affect their accuracy?
5. Distinguish potential, plausible, and normative futures.
6. List the main differences between goals and objectives. Provide examples.
7. Contrast inductive, deductive, and retroductive reasoning. Do the same for theoretical and judgmental forecasting.
8. List and describe techniques used in the three main types of forecasts. Provide examples.
9.
Whether they are linear or nonlinear, most forecasts are based on assumptions of persistence and regularity. To what extent are these assumptions plausible?
10. Many forecasts employ linear regression analysis, also known as the classical linear regression (CLR) model. What are the main assumptions of the CLR model when used to make forecasts?
11. What corrective actions can be taken when assumptions of the CLR model are violated?
12. Is it better to be approximately right, rather than exactly wrong? How does your answer affect the choice of a forecasting method?

DEMONSTRATION EXERCISE

1. After reading Case 4, use SPSS or a similar statistical program (e.g., Excel) to perform a forecast in which you estimate the value of Metropolitan Atlanta Rapid Transit Authority (MARTA) receipts in the years 1997-2001. Assume you are conducting the analysis in January 1997, as a consultant to MARTA. The client has hired you because MARTA needs an accurate estimate of future receipts. The estimate will be used to make decisions about fare increases that have become a focal point of debates about environmental justice. Although you could not have known it at the time, the Atlanta City Council opposed a fare increase in 2000 and 2001, amidst considerable political turmoil. MARTA's actual operating budget in 2001 was $307 million, which required a significant fare increase and generated opposition from community groups, who contended that the fare increase violated principles of environmental justice. Perform the following SPSS analyses, which are based on the procedures for extrapolative forecasting covered in the chapter.
• Enter the data from Table 4.14 (Payments to MARTA, 1973-96) into the SPSS Data Editor.
Click on Variable View and complete the rows by entering the Names, Types, Labels, and so on, for the variables YEAR and PAYMENTS. Be sure to indicate that the value of PAYMENTS is in thousands of dollars.

Table 4.14 Payments to MARTA, 1973-96 (000s)

Year    Receipts    Econometric Forecast    Average Change Forecast
1973    $43,820
1974    $50,501
1975    $50,946
1976    $52,819
1977    $57,933
1978    $66,120
1979    $75,472
1980    $88,342
1981    $99,836
1982    $104,685
1983    $112,008
1984    $123,407
1985    $134,902
1986    $147,149
1987    $148,582
1988    $158,549
1989    $162,543
1990    $165,722
1991    $168,085
1992    $167,016
1993    $181,345
1994    $198,490
1995    $222,475
1996    $251,668                            239,100
1997                272,407                 257,000
1998                281,476                 276,300
1999                290,548                 297,100
2000                306,768                 319,400
2001                322,573                 343,000

Note: The actual operating budget adopted for 2001 was $307 million.
Source: Guess and Farnham (2000), pp. 185, 187, 204. Payments composed of receipts and user fees. The econometric forecast was done in October 1996 by the Economic Forecasting Center of Georgia State University. The average (proportionate) change forecast was done by Guess and Farnham (Table 4.6) on the basis of 1973-95 data.

• On the pull-down menu, click on Graphs and Sequence to create a sequence (time-series) graph. Move PAYMENTS into the Variables: box. You do not need to fill in the Time Axis Label box. Click OK and inspect and interpret the graph. Does the series meet assumptions of the linear regression model? Click on the Natural Log Transform button. Click OK and reinterpret.
• On the pull-down menu, click on Analyze, Regression, and Linear. Enter the dependent (PAYMENTS) and independent (YEAR) variables. Click on Statistics and on Durbin-Watson. Also click on Save and then on Unstandardized Predicted Values, Residuals, and the 95 percent Confidence Interval for Mean. Click OK to run the regression. Calculate by hand the point estimate for 2001. Interpret the interval estimate printed on the SPSS output.
• Rerun the regression after using a logarithmic transformation of PAYMENTS (use the menu for Transform and Compute). Recalculate the point estimate for 2001; you must take the antilog first. Interpret the interval estimate.
• Interpret the point and interval estimates provided in the output.
2. How accurate are your estimates? How do they compare with those of the econometric and average change forecasts in Table 4.14? Identify and describe your "best" five-year forecast for 2001. What does it suggest about the need to raise fares? How sure are you?
3. Write a policy memo to MARTA in which you explain and justify your five-year forecast for the year 2001, and make a policy recommendation about options for increasing fares (including no increase). Relate your memo to the issue of environmental justice.

REFERENCES

Allen, T. Harrell. New Methods in Social Science Research: Policy Sciences and Futures Research. New York: Frederick A. Praeger, 1978.
Ascher, William. Forecasting: An Appraisal for Policy Makers and Planners. Baltimore, MD: Johns Hopkins University Press, 1978.
———. "The Forecasting Potential of Complex Models." Policy Sciences 13 (1981): 247-67.
Bartlett, Robert V., ed. Policy through Impact Assessment: Institutionalized Analysis as a Policy Strategy. New York: Greenwood Press, 1989.
Box, G. E. P., and G. M. Jenkins. Time Series Analysis: Forecasting and Control. San Francisco, CA: Holden-Day, 1969.
Dror, Yehezkel. Policymaking under Adversity. New Brunswick, NJ: Transaction Books, 1986.
Finsterbusch, Kurt, and C. P. Wolf, eds. Methodology of Social Impact Assessment. Stroudsburg, PA: Dowden, Hutchinson & Ross, 1977.
Gass, Saul I., and Roger L. Sisson, eds. A Guide to Models in Governmental Planning and Operations. Washington, DC: U.S. Environmental Protection Agency, 1974.
Guess, George M., and Paul G. Farnham. Cases in Public Policy Analysis. New York: Longman, 1989, ch. 3, "Forecasting Policy Options," pp. 49-67.
Harrison, Daniel P.
Social Forecasting Methodology. New York: Russell Sage Foundation, 1976.
Liner, Charles D. "Projecting Local Government Revenue." In Budget Management: A Reader in Local Government Financial Management. Edited by W. Bartley Hildreth and Gerald J. Miller. Athens: University of Georgia Press, 1983, pp. 83-92.
Linstone, Harold A., and Murray Turoff, eds. The Delphi Method: Techniques and Applications. Reading, MA: Addison-Wesley, 1975.
Marien, Michael. Future Survey Annual: A Guide to the Recent Literature of Trends, Forecasts, and Policy Proposals. Bethesda, MD: World Future Society, published annually.
———. "The Scope of Policy Studies: Reclaiming Lasswell's Lost Vision." In Advances in Policy Studies since 1950, Vol. 10 of Policy Studies Review Annual. Edited by William N. Dunn and Rita Mae Kelly. New Brunswick, NJ: Transaction Books, 1992, pp. 445-88.
McNown, Robert. "On the Use of Econometric Models: A Guide for Policy Makers." Policy Sciences 19 (1986): 360-80.
O'Leary, Michael K., and William D. Coplin. Everyman's "Prince." North Scituate, MA: Duxbury Press, 1976.
Schroeder, Larry D., and Roy Bahl. "The Role of Multi-year Forecasting in the Annual Budgeting Process for Local Governments." Public Budgeting and Finance 4, no. 1 (1984): 3-14.
Thomopoulos, Nick T. Applied Forecasting Methods. Englewood Cliffs, NJ: Prentice Hall, 1980.
Toulmin, Llewellyn M., and Glendal E. Wright. "Expenditure Forecasting." In Handbook on Public Budgeting and Financial Management. Edited by Jack Rabin and Thomas D. Lynch. New York: Marcel Dekker, 1983, pp. 209-87.
U.S. General Accounting Office. Prospective Evaluation Methods: The Prospective Evaluation Synthesis. Washington, DC: U.S. General Accounting Office, Program Evaluation and Methodology Division, July 1989.

Case 4.
Political Consequences of Forecasting: Environmental Justice and Urban Mass Rapid Transit

In the case that follows, policy makers responsible for the performance of the Metropolitan Atlanta Rapid Transit Authority (MARTA) have frequently hired consultants who have used simple as well as complex forecasting techniques. Even with reasonably accurate estimates of future revenues from sales tax receipts and user fees (a reasonably accurate forecast will lie within ±10 percent of the observed value), MARTA policy makers could not avoid making politically unpopular and controversial policy decisions that involved raising fares and cutting back services to cover shortfalls. In the MARTA case, the accuracy of forecasts is directly related to issues of environmental justice raised primarily by relatively poor and disadvantaged minorities in the Atlanta region (see below). The following materials are excerpted from Vol. 2, No. 1 (summer 2000) of Transportation Equity: A Newsletter of the Environmental Justice Resource Center at Clark Atlanta University (see www.ejrc.cau.edu).

Box 4.1 Transit Equity: A Look at MARTA*

Why Is Transportation Important?
What Counties Support MARTA?
Who Pays for MARTA?
Who Rides MARTA?
Where Do MARTA Riders Live?
Where Are Weekday MARTA Riders Headed?

Other than housing, Americans spend more on transportation than any other household expense. The average American household spends one-fifth of its income on transportation. The Metropolitan Atlanta Rapid Transit Authority (MARTA) serves just two counties, Fulton and DeKalb, in the ten-county Atlanta region. In the 1960s, MARTA was hailed as the solution to the region's growing traffic and pollution problems. The first referendum to create a five-county rapid rail system failed in 1968. However, in 1971, the City of Atlanta, Fulton County, and DeKalb County approved a referendum for a 1-percent sales tax to support a rapid rail and feeder bus system. Cobb County and Gwinnett County voters rejected the MARTA system.
MARTA's operating budget comes from sales tax (46%), fares (34%), and the Federal Transit Administration and other sources (20%). Only Fulton and DeKalb County residents pay for the upkeep and expansion of the system with a one-cent MARTA sales tax. Bus fares generated $5 million more revenue than rail fares in 1997. In 1999, the regular one-way fare on MARTA was $1.50, up from $1 in 1992. A recent rider survey revealed that 78 percent of MARTA's rail and bus riders are African American and other people of color. Whites make up 22 percent of MARTA riders. Over 45 percent of MARTA riders live in the city of Atlanta, 14 percent live in the remainder of Fulton County, 25 percent live in DeKalb County, and 16 percent of MARTA riders live outside MARTA's service area. The majority (58%) of MARTA's weekday riders are on their way to work. The second highest use of MARTA was for getting to medical centers and other services (21%). Other MARTA riders use the system for attending special events (8%), shopping (7%), and school.

Box 4.1 Transit Equity: A Look at MARTA* (continued)

How Much Is MARTA's Proposed Fare Increase?
Who Would Be Most Impacted by the Proposed MARTA Fare Increase?
How Can the Public Comment on the Proposed MARTA Fare Increase?
How Has MARTA Grown?
Who Uses MARTA's Parking Spaces?

MARTA proposed raising one-way fares from $1.50 to $1.75, a 17-percent increase. The increase is proposed to offset a $10 million shortfall associated with the openings of the Sandy Springs and North Springs stations. The proposal also calls for increasing the weekly transit pass from $12 to $13 and the monthly pass from $45 to $52.50. While the increase of $7.50 a month may not seem like a lot at first glance, it could do irreparable harm to a $5.25 per hour minimum-wage transit user.
These fare increases would fall heaviest on the transit dependent, low-income households, and people of color who make up the lion's share of MARTA users. Because MARTA receives federal transportation dollars, it is required to hold public hearings before any fare increase takes effect. MARTA has grown from thirteen rail stations in 1979 to thirty-six rail stations in 2000. Two additional stations (Sandy Springs and North Springs) along the north line were under construction. These two new northern stations opened in early 2001. With its $270.4 million annual budget, MARTA operates 700 buses and 240 rail cars. The system handles over 534,000 passengers on an average weekday. MARTA operates 154 bus routes that cover 1,531 miles and carry 275,000 passengers on an average weekday. MARTA's rail lines cover 46 miles, with rail cars carrying 259,000 passengers on an average weekday. MARTA provides nearly 21,000 parking spaces at twenty-three of its thirty-six transit stations. Parking at MARTA lots is free except for the overnight lots that cost $3 per day. MARTA provides 1,342 spaces in four overnight lots. All of the overnight lots are on MARTA's North Line. It is becoming increasingly difficult to find a parking space in some MARTA lots. A recent license tag survey, "Who Parks-and-Rides," covering the period 1988-97, revealed that 44 percent of the cars parked at MARTA lots were from outside the MARTA Fulton/DeKalb County service area.

What Are the Similarities between Atlanta and Los Angeles?
Where Can I Get More Information on Transportation Equity?

A similar transit proposal in Los Angeles sparked a grassroots movement. In 1996, the Labor Community Strategy Center and the Bus Riders Union (a grassroots group of transit users) sued the Los Angeles MTA over its plan to raise bus fares and build an expensive rail system at the expense of bus riders, who made up 95 percent of transit users.
The MTA bus system, whose riders are largely low-income persons and people of color, received only 30 percent of the MTA's transit dollars. Grassroots organizing and the Bus Riders Union's legal victory resulted in $1.5 billion for new clean-fuel buses, service improvements, lower fares, a landmark Civil Rights Consent Decree, and a vibrant multiracial grassroots organization of over 2,000 dues-paying members. Contact the Environmental Justice Resource Center at Clark Atlanta University, 223 James P. Brawley Drive, Atlanta, GA 30314, (404) 880-6911 (ph), (404) 880-6909 (fx). E-mail: ejrc@cau.edu. Web site: http://www.ejrc.cau.edu.

*Prepared by the Environmental Justice Resource Center at Clark Atlanta University under its Atlanta Transportation Equity Project (ATEP). The ATEP is made possible by grants from the Turner Foundation and Ford Foundation. See http://www.ejrc.cau.edu.

Race and Public Transportation in Metro Atlanta: A Look at MARTA
By Robert D. Bullard

Race still operates at the heart of Atlanta's regional transportation dilemma. For years, I-20 served as the unofficial racial line of demarcation in the region, with blacks located largely to the south and whites to the north. The bulk of the region's growth in the 1990s occurred in Atlanta's northern suburbs, areas where public transit is inadequate or nonexistent. The ten-county Atlanta metropolitan area has a regional public transit system only in name. In the 1960s, the Metropolitan Atlanta Rapid Transit Authority, or MARTA, was hailed as the solution to the region's growing traffic and pollution problems, but today MARTA serves just two counties, Fulton and DeKalb. For years, MARTA's acronym was jokingly referred to as "Moving Africans Rapidly Through Atlanta." African Americans currently make up 75 percent of MARTA's riders. The first referendum to create a five-county rapid rail system failed in 1968, and the vote was largely along racial lines. However, in 1971, the City of Atlanta,
However, in 1971, the City of Adanta, 204 Chapter 4 Forecasting Expected Policy Outcomes 205 Fulton and DeKalb counties (political jurisdictions with the largest minority concentration in the region), approved a referendum for a 1-percent sales tax to support a rapid rail and feeder bus system, but the mostly white suburban Cobb County and Gwinnett County voters rejected the MARTA system. People of color represent 29 percent of the population in the ten-county region. Nevertheless, MARTA has since grown from thirteen rail stations in 1979 to thirty-six rail stations in 1999, and two additional stations—Sandy Springs and North Springs— along the north line were under construction and opened in early 2001. MARTA operates 154 bus routes that cover 1,541 miles. Its rail lines cover 46 miles, In 1999, MARTA was responsible for 553,000 daily passenger trips (50.1% bus and 49.9% rail). Just how far MARTA lines extend has proved to be a thorny issue. Politics will likely play a major role in determining where the next MARTA lines go. Even who pays the tab for MARTA is being debated. MARTA's operating budget comes from sales tax (46%), fares (34%), and the Federal Transit Administration and other sources (20%). But only Fulton and DeKalb County residents pay for the upkeep and expansion of the system with a one-cent MARTA sales tax. A recent rider survey revealed that 78 percent of MARTA's rail and bus riders are African Americans and other people of color. Whites make up 22 percent of MARTA riders. More than 45 percent of MARTA riders live in the city of Atlanta, 14 percent live in the remainder of Fulton County, 25 percent live in DeKalb County, and 16 percent live outside MARTA's service area. Parking at MARTA's twenty-three lots is free except for the 1,342 overnight parking slots that cost $3 per day. All of the overnight lots are located on MARTA's North Line where they serve affluent, mostly white suburban communities. 
For example, the far-north stops on the orange lines (Doraville and Dunwoody Stations) have proven popular among suburban air travelers. It is becoming increasingly difficult to find a parking space in some MARTA lots. A license tag survey covering the period 1988-97 revealed that 44 percent of the cars parked at MARTA lots were from outside the Fulton/DeKalb County service area. It appears that Fulton and DeKalb County taxpayers are subsidizing people who live in outlying counties, park their cars at the park-and-ride lots, and ride MARTA trains into the city and to Hartsfield Atlanta Airport, the busiest airport in the nation. Both the Doraville and Dunwoody stations (and the Sandy Springs and North Springs stations opened in early 2001) provide fast, comfortable, traffic-free rides to Hartsfield Atlanta International Airport. By paying only $1.50 for the train ride (the fare increased to $1.75 on January 1, 2001), and no one-cent MARTA sales tax, many suburbanites who live outside Fulton and DeKalb Counties get an added bonus: they avoid airport satellite parking lots that cost $6 and up.

Atlanta and Fulton County Adopt Resolutions Opposing Fare Increase

On May 1, the Atlanta City Council voted unanimously to approve a resolution opposing MARTA's fare increase. The Fulton County Commission, on April 5, 2000, also adopted a resolution in opposition to the fare hike. Both resolutions stated that the fare increase would have a disproportionately negative impact on low-income MARTA riders, the vast majority of whom reside in Atlanta and Fulton and DeKalb counties and who already pay a 1-percent sales tax, which subsidizes the other metro users. The MARTA board was urged to consider alternatives to the fare increase in order to "provide equity to the residents of Atlanta, Fulton and DeKalb counties" and to "more effectively spread the financial burden of operating the system to other users in the Metropolitan Atlanta area."
Several MARTA Board appointees representing Atlanta and Fulton County apparently ignored the resolutions passed by the elected officials from these districts. These appointees voted for a resolution adopting the MARTA budget with the fare increase on May 25, 2000, when it failed by one vote, and again on June 18, 2000, when it was passed by the MARTA Board. The resolution adopted by the MARTA Board called for implementation of the fare increase on January 1, 2001. The Atlanta and Fulton County appointees to the MARTA Board who voted for the fare hike are Dr. Walter Branch (Atlanta), Amos Beasley (Atlanta), Frank Steinemann (Atlanta), and Arthur McClung (Fulton County).

South DeKalb Leaders Hold Transportation Town Hall Meeting

May 9, 2000, Atlanta, GA—Nearly a hundred residents turned out for a town hall meeting held at Georgia Perimeter College—South Campus. The meeting was organized and chaired by State Senator Connie Stokes and Representative Henrietta Turnquest. Senator Stokes explained why the town hall meeting was called: "The purpose of this meeting is to provide South DeKalb residents with the initial goals, objectives, and future transportation plans in the Atlanta region, including those of MARTA and GRTA," she stated. Representative Turnquest registered her concern with the historical treatment of the South DeKalb area: "South DeKalb residents have not been active participants in the Atlanta metropolitan transportation decision-making process. This must change." The meeting led off with a panel that included Robert Bullard (Clark Atlanta University), Catherine Ross (Georgia Regional Transportation Authority), William Moseley (MARTA Board chair), and Arthur Barnes (Georgia Rail Passenger Authority). Much of the panel discussion revolved around existing public transit service inequities, the MARTA fare increase, and regional transportation investments.
During the question-and-answer period, South DeKalb residents expressed concern about the growing North-South economic divide in their county and the role of transportation agencies in developing plans that are inclusive of low-income and people-of-color communities. Jennifer Parker, editor of the South DeKalb Crossroads magazine, expressed the view of many local residents: "You can't get there on MARTA." Parker urged MARTA to concentrate on improving services and take care of its loyal customers.

Residents Stage Mass Walkout at MARTA Board Meeting

May 25, 2000, Atlanta, GA—Atlanta residents waited more than an hour for the MARTA Board members to show up for the scheduled 1:15 P.M. board meeting. The meeting room at MARTA headquarters was filled with customers whose yellow signs urged the board to "Vote NO" on the fare hike. The residents complained that this was no way to run a $300 million business. William R. Moseley, chair of the MARTA Board, opened the meeting with a resolution to adopt the Fiscal Year 2001 operating and capital funds budget and was immediately interrupted by John Evans, president of the DeKalb NAACP. Evans asked the board to move to the public comment period before the vote on the fare increase. Moseley refused to do so. He also warned Evans not to disrupt the meeting or he would have him removed from the room. While Moseley and Evans were engaged in a heated debate, most of the residents walked out of the meeting. After about ten minutes, a MARTA staff person came outside and told the residents that the MARTA board had voted on the proposed fare hike. The budget measure was defeated by one vote.

Two-Mile Rail Extension Costs a Whopping $464 Million

MARTA officials claim the fare hike is needed to offset a projected $10 million shortfall associated with the opening of two new suburban train stations on its North line.
The two new Sandy Springs and North Springs stations, scheduled to open in December 2000, add just two miles of track at a construction cost of $464 million. They will cost an additional $4 million to operate annually. The $464 million North Springs and Sandy Springs stations are projected to increase MARTA's ridership by approximately 5 percent. On the other hand, the fare increase is projected to decrease ridership by 5 percent. Compared with the "new" riders, the "lost" riders are far more likely to be low-income, transit-dependent, and people of color who reside and pay sales tax in Fulton and DeKalb Counties. Currently, African Americans make up 75 percent of MARTA's riders.

MARTA Board Approves Fare Increase

June 19, 2000, Atlanta, GA—MARTA recently approved a $307 million operating budget that raises its one-way cash fare from $1.50 to $1.75, a 17-percent increase. The weekly transit pass will jump from $12 to $13, monthly passes will increase from $45 to $52.50, and half-price senior citizens' passes will go from 75 cents to 85 cents. A similar budget proposal came before the MARTA board on May 25 but failed by one vote. Although split, the board approved its FY01 budget, which included the whopping fare hike. In an effort to save face, the board passed a "rabbit-out-of-the-hat" amendment that instructed the MARTA staff to seek alternatives over the next 120 days: cut $2 million from administration, take $2 million from its reserve fund, and request another $2 million from various city, county, state, and federal governments.

MARTA officials will be hard pressed to get any extra funds from the Atlanta City and Fulton County governments, since both entities voted unanimously against the fare hike. Citizens of Atlanta, Fulton, and DeKalb have invested twenty-five years of a one-cent sales tax in building the Metropolitan Atlanta Rapid Transit Authority, or MARTA.
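The size of the increase for each fare product can be verified with simple arithmetic. The sketch below uses only the dollar figures reported above; it is an illustrative check, not part of MARTA's own analysis:

```python
# Fare changes reported in MARTA's FY01 budget: (old price, new price) in dollars.
fares = {
    "one-way cash fare": (1.50, 1.75),
    "weekly pass": (12.00, 13.00),
    "monthly pass": (45.00, 52.50),
    "senior half-price fare": (0.75, 0.85),
}

for product, (old, new) in fares.items():
    pct = (new - old) / old * 100  # percentage increase over the old price
    print(f"{product}: ${old:.2f} -> ${new:.2f} (+{pct:.1f}%)")
```

The cash-fare change works out to about 16.7 percent, which the text rounds to the 17 percent cited; note that the monthly pass rises by roughly the same proportion, while the weekly pass rises by only about 8 percent.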
For the past four months, a host of community leaders, civil rights activists, academics, local elected officials, and transit riders have called for MARTA to increase ridership, trim its bloated administrative staff, and seriously consider alternative revenue sources such as advertising, concessions, and parking fees.

Equity Coalition Holds Press Conference on "Juneteenth"

June 19, 2000, Atlanta, GA—A dozen local community leaders, representing the Metropolitan Atlanta Transportation Equity Coalition, or MATEC, held a press conference in front of the MARTA headquarters (2424 Piedmont Road, N.E.) on Monday, June 19 (before the MARTA board meeting). It is ironic that the questionable fare increase passed on "Juneteenth," a date celebrated by millions of African Americans across the nation. Although formed less than a year ago, MATEC has steadily expanded its membership. It has also sharpened its focus on transportation inequities in the region. The coalition now includes an array of environmental justice organizations, civil rights groups, churches, service organizations, labor unions, homeowners associations, neighborhood planning units, elected officials, and academic institutions that have come together under a common theme of dismantling transportation racism in metropolitan Atlanta.

What People Are Saying about the Fare Increase

"There are clear equity impacts related to the proposed MARTA fare hike. Without a doubt, poor people and black people will be hit the hardest. In lieu of fare hikes, MARTA should initiate an aggressive plan to expand ridership, streamline its administrative staff, acquire more state, regional, and federal funding for public transit, and explore the feasibility of a fare reduction" (Dr. Robert D.
Bullard, Director of the Environmental Justice Resource Center at Clark Atlanta University and author of the 1997 book Just Transportation: Dismantling Race and Class Barriers to Mobility).

"MARTA is not in serious need to increase the fares, but because of the two new stations (Sandy Springs and North Springs) soon to be on line, now would be the perfect timing for a fare increase" (Terry L. Griffis, VP of Finance and Administration for MARTA; statement made at a Metropolitan Atlanta Rapid Transit Overview Committee [MARTOC] meeting held on May 12, 2000).

"MARTA will not be in violation of either the 35% Operating Ratio or the 50% Operating Funding tests for its fiscal year 2001 (FY01) or fiscal year 2002 (FY02) budget if it does not impose the recommended fare increase. Atlantans already pay a high one-way cash fare. Going to a $1.75 fare would definitely set MARTA apart from the pack. Therefore, if the objective is to carry more passengers, a fare decrease is likely to be the most productive and cost-effective methodology available by a wide margin" (Thomas A. Rubin, CPA and nationally known transportation consultant).

"I find it unfortunate that MARTA feels they must increase their fare at a time when we are trying to get more and more people out of their cars and on to mass transit. In addition to the negative effects on our air quality and traffic congestion, a fare increase at this time could limit some low-income people's access to the transit system and their places of employment. While I pledge to continue working for more federal money for MARTA, I believe that a certain percentage of state gas tax revenue should be dedicated to MARTA, and I hope that MARTA will work to find other fiscal solutions to their budget without increasing fares at this time" (Cynthia McKinney, U.S. Congresswoman).

"This whole thing is about race. Cobb, Gwinnett and Clayton don't want MARTA and poor or black people.
I can't imagine MARTA saying we will raise the fare on the core of this proposed regional transportation system we plan to have" (James E. "Billy" McKinney, State Senator).

"MARTA should ensure that all avenues have been exhausted before initiating a fare increase. There presently are many questions as to whether that has been done" (Vernon Jones, State Representative).

"South DeKalb residents have not been active participants in the Atlanta metropolitan transportation decision-making process. This must change" (Henrietta Turnquest, State Representative).

"The impact of a MARTA fare increase on the working poor would be disastrous. MARTA should work to ensure that all alternatives have been exhausted, and work with state, local and federal officials to find resources for MARTA" (Vincent Fort, State Senator).

"I can appreciate the fiscal situation that MARTA finds itself in. But in all fairness to working class and poor people who are the financial mainstay of the MARTA system, I along with my other Atlanta City Council colleagues urge MARTA not to raise their fare but to look at alternative ways of financing. Some of these initiatives could include parking fees or other creative initiatives" ("Able" Mable Thomas, Atlanta City Councilwoman).

"MARTA's fare increase is a complete disrespect for those who depend on mass transportation for their mode of transportation. I would like to see our state legislators take the leadership to remove Gwinnett and Clayton members off the MARTA Board" (John Evans, DeKalb NAACP).

"By insisting on this fare increase, MARTA has violated the trust placed in it by the citizens of Fulton and DeKalb counties to have an affordable and safe transit system. We had to resort to public protests to alert citizens about the proposed fare increase" (Flora M. Tommie, a MARTA rider since 1983 and an active member of Atlanta's NPU-X).

"MARTA riders have already adjusted to the $1.50 fare.
People on fixed incomes definitely cannot afford a 25-cent increase in the one-way cash fare. The MARTA Board needs to address the existing inadequate services in minority communities, especially in Southwest Atlanta, e.g., no benches at bus stops, no kiosks during inclement weather, overcrowded buses, buses running late, and old broken-down buses" (Ester B. McCrary, a transit-dependent rider).

"Environmental Defense opposes the proposed fare increase for MARTA because it would be inefficient, environmentally destructive, and inequitable. Rather than increasing fares, MARTA should be contemplating lower fares to encourage people to use transit. Higher transit use would result in less congestion, cleaner air, and less consumption of natural resources like gasoline" (Robert Garcia, Senior Attorney and Director, Environmental Defense—formerly known as Environmental Defense Fund—Los Angeles Project Office).

The fare increase reportedly will raise approximately $12 million per year for MARTA. However, any benefit to the agency is likely to be outweighed by the substantial losses of income and mobility for the transit-dependent, which will result in the loss of employment and housing and the inability to reach medical care, food sources, educational opportunities, and other basic needs of life.

The fare hike would have the greatest impact on low-income riders who pay with cash. MARTA's transit-dependent riders are typically children and the elderly, lower-income, carless, members of large households, residents who live near transit stations, and generally people of color. On the other hand, MARTA's northern-tier ridership will likely increase regardless of fare hikes. MARTA should encourage affluent white suburbanites to get out of their cars and use its system. At the same time, the agency should not balance this initiative on the backs of those customers who can least afford to pay.
Many Atlantans do not take kindly to people comparing their city with Los Angeles' smog, traffic, gridlock, or sprawl problem. However, Los Angeles beats out Atlanta when it comes to affordable public transit. Despite the higher median household income and higher cost of living in Los Angeles, Atlantans now pay more to ride public transit than Los Angeles residents. MARTA's one-way cash fare was $1.50 (scheduled to increase to $1.75 on January 1, 2001). At that time, the Los Angeles MTA one-way cash fare was $1.35.

The Environmental Justice Resource Center (EJRC) retained the services of Thomas A. Rubin, an Oakland, California-based transportation consultant, to assist with the analysis of MARTA's fare structure and financial status. Rubin has worked on a number of national transportation equity and civil rights cases, including the Los Angeles MTA case. On June 1, 2000, the EJRC submitted its legal analysis and Rubin's budget analysis to the MARTA board. The fare increase raises serious equity questions under Title VI of the Civil Rights Act of 1964 and its regulations for the following reasons: (1) the fare increase would adversely impact African Americans, (2) there is no business necessity for a fare increase, and (3) MARTA has less discriminatory alternatives to a fare increase. The Title VI regulations prohibit an agency that receives federal funding from engaging in actions that have an adverse disparate impact on protected classes (race, color, national origin) for which there is no business necessity and for which there are less discriminatory alternatives.

MARTA's 2000 fare of $1.50 is the second highest of all major U.S. urban transit operators. A twenty-five-cent increase would make MARTA's $1.75 adult cash fare the highest in the nation. Considering costs of living, MARTA's cash fare is the highest in the nation now.
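The logic of a cost-of-living comparison can be sketched with a simple normalization. The fares below are those reported in the text; the cost-of-living index values are hypothetical placeholders for illustration only, not figures from the case:

```python
# Nominal one-way cash fares reported above (mid-2000), in dollars.
fares = {"Atlanta (MARTA)": 1.50, "Los Angeles (MTA)": 1.35}

# Hypothetical cost-of-living index values (100 = national average).
# These numbers are illustrative assumptions, not data from the source.
col_index = {"Atlanta (MARTA)": 95.0, "Los Angeles (MTA)": 120.0}

# Deflating each fare by its city's index shows how a lower nominal fare in a
# costlier city represents an even cheaper fare in real, adjusted terms.
for city in fares:
    adjusted = fares[city] / (col_index[city] / 100)
    print(f"{city}: nominal ${fares[city]:.2f}, adjusted ${adjusted:.2f}")
```

Under these assumed index values, the adjusted gap between the two fares widens, which is the sense in which a nominally second-highest fare can be "the highest in the nation" once living costs are considered.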
MARTA's 34.2-percent farebox recovery ratio is the highest of comparable Sun Belt transit systems. Since the 15-cent fare ended in 1979, MARTA's adult cash fare has increased tenfold. Factoring in cost-of-living changes, it has increased over 300 percent. MARTA will be in full compliance with all statutory requirements for both FY01 and FY02 without a fare increase. MARTA has absolutely no shortage of cash. Rubin concludes that the fare increase is not necessary, justified, or wise.

Federal Court Blocks Regional Transportation Plan

Atlanta, GA, July 20, 2000—The 11th U.S. Circuit Court of Appeals in Atlanta on Tuesday blocked metro Atlanta's effort to regain use of federal road-building funds. The region is a nonattainment area for ground-level ozone. The action, which places metro Atlanta's regional transportation plan on hold, was the result of a lawsuit filed by a coalition of environmental groups. The regional transportation plan proposed to spend $36 billion over twenty-five years on roads, transit, sidewalks, bike paths, and HOV lanes in the thirteen-county region. The court granted a stay to the coalition, which sued the EPA in April, charging the agency with illegally extending the region's deadline for meeting federal clean air standards. The coalition also charged that the regional transportation plan used flawed data. The court is not expected to hear the case until at least September, thereby delaying the final approval of metro Atlanta's regional transportation plan until later in the fall.

MARTA Ignores Its Spanish-Speaking Customers
By Angel O. Torres

The Atlanta metro area looks a lot different today than it did twenty-five years ago when MARTA first began its service. However, one would not know it from observing the May 3rd public hearings held at MARTA headquarters. Two other public hearings were held the same day at Atlanta City Hall and in Decatur. The lack of Latino participation in the events was alarming.
During the three hearings, not a single member of Atlanta's Latino community testified. The Latino community was not informed about the meeting. Ironically, the largest Latino community within the city limits is located in close proximity to the MARTA headquarters at the Lindbergh station.

MARTA has not responded proactively to the changing demographics of its Fulton and DeKalb service district. For example, the public hearing notices ran in the Atlanta Journal-Constitution in English only. The advertisement was also posted on MARTA's Web page, also in English only. MARTA did not take the time to properly alert Latinos in metro Atlanta of the proposed fare changes. Even calling MARTA's customer service Hot Line for information in Spanish proved fruitless. After maneuvering through the available choices in Spanish on the Hot Line, the customer is offered an opportunity to listen to the Olympic route bus schedule.

Translation equipment for Spanish-speaking individuals was requested well in advance of the May 3 and June 19 MARTA board meetings. A MARTA customer service representative assured me that the agency was prepared to handle Spanish-English translation. However, at the June 19th MARTA board meeting, I was informed that if individuals needed this kind of assistance, it is MARTA's policy that the individual should bring his or her own translator to the meeting. Several Latino (non-English-speaking) MARTA customers who attended the June 19th board meeting needed translation. I volunteered. Soon after beginning my new duties as a translator, I was quickly informed that I could not talk while the Board was in session. This ended my tenure as a translator after less than two minutes. This policy must change. It effectively renders MARTA's Spanish-speaking customers invisible and voiceless.

There are measures that MARTA could take to remedy this problem.
MARTA could begin by requesting a meeting with Latino community leaders and attempting to understand its next-door neighbors. This task should not be hard to achieve, because many organizations serving Latinos can be easily found in Atlanta's Hispanic Yellow Pages. Another step toward this goal could be establishing a meaningful and inclusive community outreach program for the Latino community. It would also help if Latino representation were added to the MARTA board. Other cities have faced the same challenge that Atlanta faces today. City and county leaders need to follow the examples set by cities such as Los Angeles, Chicago, Miami, and New York, to name a few. Atlanta is no longer the city "too busy to hate"; it is just "too busy to care" about its new residents.

Resources

Books

Bullard, Robert D., Glenn S. Johnson, and Angel O. Torres, eds. Sprawl City: Race, Politics, and Planning in Atlanta. Washington, DC: Island Press, 2000; $30.

A serious but often overlooked impact of the random, unplanned growth, commonly known as "sprawl," that has come to dominate the American landscape is its effect on economic and racial polarization. Sprawl-fueled growth pushes people further apart geographically, politically, economically, and socially. Atlanta, Georgia, is experiencing one of the most severe cases of sprawl in the country and offers a striking example of sprawl-induced stratification. Sprawl City: Race, Politics, and Planning in Atlanta uses a multidisciplinary approach to analyze and critique the emerging crisis resulting from urban sprawl in the ten-county Atlanta metropolitan region. Local experts, including sociologists, lawyers, urban planners, economists, educators, and health care professionals, consider sprawl-related concerns as core environmental justice and civil rights issues.
All the contributors examine institutional constraint issues that are embedded in urban sprawl, considering how government policies, including housing, education, and transportation policies, have aided and in some cases subsidized separate but unequal economic development, segregated neighborhoods, and the spatial layout of central cities and suburbs. Contributors offer analysis of the causes and consequences of urban sprawl, and outline policy recommendations and an action agenda for coping with sprawl-related problems, both in Atlanta and around the country. The book illuminates the rising class and racial divisions underlying uneven growth and development, and provides an important source of information for anyone concerned with these issues, including the growing environmental justice movement as well as planners, policy analysts, public officials, community leaders, and students of public policy, geography, planning, and related disciplines. Sprawl City (Island Press, Summer 2000, ISBN 1-55963-790-0) is edited by Robert D. Bullard, Glenn S. Johnson, and Angel O. Torres. To view the book description, see http://www.islandpress.org/books/bookdata/sprawlcity.html. The book can be ordered from Island Press at 1-800-828-1302 or orders@islandpress.org.

Bullard, Robert D., and Glenn S. Johnson, eds. Just Transportation: Dismantling Race and Class Barriers to Mobility. Gabriola Island, BC: New Society Publishers, 1997; $15.95.

Racism continues to dominate American media and public debate, but the subtle ways in which institutionalized racism affects us are still unexamined. Does our public transportation reinforce segregation and discrimination? How do transportation policies affect where we live and work, and the health, education, and public service benefits we have access to—our social and economic mobility? Just Transportation moves beyond denouncing gross bigotry and offers provocative insight into the source of pervasive racism and social apartheid in America.
From Harlem to Los Angeles, and cities in between, Just Transportation examines how the inequitable distribution of transportation benefits creates subtle yet profound obstacles to social and economic mobility for people of color and those on the lower end of the socioeconomic spectrum. While the automobile culture has been spurred on by massive government investments in roads and highways, federal commitment to public transportation—which serves mostly the poor and minorities—appears to have reached an all-time low, allowing urban mass transit systems to fall into disrepair.

With a Foreword by Congressman John Lewis, an original Freedom Rider and a champion of civil rights in the U.S. Congress, and essays by a wide range of environmental and transportation activists, lawyers, and scholars, Just Transportation traces the historical roots of transportation struggles in our civil rights history. From Rosa Parks and the Freedom Riders to modern-day unjust transportation practices, Just Transportation persuasively illustrates how the legacy of "separate but equal" is still with us. Just Transportation (New Society Publishers, 1997, ISBN 0-86571-357-X) is edited by Robert D. Bullard and Glenn S. Johnson. See the book description at http://www.newsociety.com/aut.html. The book can be ordered from New Society Publishers at 1-800-567-6772 or info@newsociety.com.

Web Sites

Community Transportation Association of America. http://www.ctaa.org/

The Community Transportation Association of America (CTAA) is a nonprofit membership association whose members are devoted to mobility for everyone, regardless of economic status, disability, age, or accessibility. Community transportation is a practical alternative that picks up where the private auto and mass transit leave off.

Environmental Justice Resource Center. http://www.ejrc.cau.edu.
The Environmental Justice Resource Center (EJRC) at Clark Atlanta University was formed in 1994 to serve as a research, policy, and information clearinghouse on issues related to environmental justice, race and the environment, civil rights, facility siting, land use planning, brownfields, transportation equity, suburban sprawl, and Smart Growth. The center is multidisciplinary in its focus and approach. It serves as a bridge among the social and behavioral sciences, natural and physical sciences, engineering, management, and legal disciplines to solve environmental problems.

Federal Transit Administration. http://www.fta.dot.gov/wtw.

The federal government, through the Federal Transit Administration (FTA), provides financial and technical assistance to local transit systems. The National Transit Library is a repository of reports, documents, and data generated by professionals and laypersons from around the country. The library is designed to facilitate document sharing among people interested in transit and transit-related topics.

Southern Resource Center. http://www.fhwa.dot.gov/resourcecenters/southern.

The Southern Resource Center (SRC) is designed to facilitate transportation decision making and choices. The choices made by the SRC will help the Federal Highway Administration and state and local officials in achieving FHWA's National Strategic Plan goals and performance measures.

TEA-21. http://www.fhwa.dot.gov/tea21/index.htm.

The Transportation Equity Act for the Twenty-First Century was enacted June 9, 1998, as Public Law 105-178. TEA-21 authorizes the federal surface transportation programs for highways, highway safety, and transit for the six-year period 1998-2003, and a later act provided technical corrections to the original law. This Web site reflects the combined effect of these two laws and refers to this combination as TEA-21.
5 Recommending Preferred Policies

Recommendation in Policy Analysis
Approaches to Recommendation
Methods and Techniques for Recommendation
Chapter Summary
Learning Objectives
Key Terms and Concepts
Review Questions
Demonstration Exercise
References
Case 5. Saving Time, Lives, and Gasoline: Benefits and Costs of the National Maximum Speed Limit

Forecasting does not offer explicit reasons why we should value one expected outcome over another. While it answers questions of the form "What is likely to occur?" it does not permit answers to the question "What should be done?"¹ To answer this kind of question, we require methods of recommendation, which help produce information about the likelihood that future courses of action will result in consequences that are valuable to some individual, group, or society as a whole.

RECOMMENDATION IN POLICY ANALYSIS

The procedure of recommendation involves the transformation of information about expected policy outcomes into information about preferred policies. To recommend a preferred policy requires prior information about the future consequences of acting

¹ As we saw in Chapter 1, answers to this question require a normative approach, or normative (versus descriptive) policy analysis.