One good example of a content analysis of this type is a study of presidential campaign coverage in 1980 by Michael J. Robinson and Margaret A. Sheehan.18 We discuss the procedures they followed and some of the strengths and weaknesses of their analysis.

At the beginning Robinson and Sheehan had to select the news coverage to be included in their study. Given the overwhelming amount of print and broadcast coverage that a presidential election campaign stimulates, there was no way that they could carefully analyze it all. In 1980 there were more than 1,000 daily newspapers and 6,000 broadcast stations in the United States. Consequently, they had to select, or sample, a portion of the news coverage to analyze. Six different decisions were involved in choosing the sample.

First, the researchers decided what type of medium to use. Primarily because of their estimates of the audience reached by different media, they chose national network television and newspaper wire service copy. In the process, they decided not to select several regional daily newspapers and the news weeklies, as had been done in a study of the 1976 campaign, and not to draw a representative sample of daily newspapers, as had been done in a study of the 1974 congressional elections.19

Second, because Robinson and Sheehan's resources were limited, they had to decide which of the media outlets to select. In other words, which television network and which wire service would be chosen? Based again on audience size, as well as professional prestige, they selected CBS and the Associated Press (AP). But AP refused to cooperate—an example of one of those disturbing yet all too frequent developments that cause the best laid research plans to go awry. Consequently, the researchers switched reluctantly to United Press International (UPI), even though it had far fewer clients and generally placed fewer stories in daily newspapers. CBS and UPI, then, became their case studies for 1980.

What products of these two media outlets should be included in the study? This was the third decision facing Robinson and Sheehan. CBS produces several versions of the nightly news, as well as morning news shows, midday news shows, news interviews, and news specials. And UPI offers several news services, among them an "A" wire, which is the national wire; a city wire; and a radio wire. The "A" wire itself has two versions: the night cycle, which runs from noon to midnight, and the day cycle, which runs from midnight to noon.

A Simple Computer Content Analysis

You can use your Internet browser to find and analyze speeches or other printed records if you are willing to keep careful records of your work. As an example, suppose you wanted to compare how presidents have defined the role of government over time. To do this you could analyze presidential inaugural addresses. When you locate a particular address you could use the browser's "Find in page" feature to look for, say, "we must" or a similar phrase. The sentence or ideas that follow may indicate the presidents' feelings about the role of government, since in saying "we" the president is usually referring to government or society. You can then make a count of the types of references to see how they have changed over time.
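The browser-based tally the box describes can also be scripted. Below is a minimal sketch in Python of the same idea: counting a phrase such as "we must" in plain-text copies of a few inaugural addresses. The file names and the choice of addresses are hypothetical stand-ins for texts you have saved from the Web.

```python
import re

# A minimal sketch of the tally suggested in the box above: count how
# often a phrase such as "we must" appears in plain-text copies of
# presidential inaugural addresses. The file names are hypothetical.
ADDRESSES = {
    1933: "roosevelt_1933.txt",
    1961: "kennedy_1961.txt",
    1981: "reagan_1981.txt",
}

def count_phrase(path, phrase="we must"):
    """Count case-insensitive occurrences of a phrase in a text file."""
    with open(path, encoding="utf-8") as f:
        text = f.read().lower()
    return len(re.findall(re.escape(phrase.lower()), text))

for year in sorted(ADDRESSES):
    print(year, count_phrase(ADDRESSES[year]))
```

As the box cautions, the count is only a lead: you would still read the sentences surrounding each match to judge what the president is actually saying about the role of government.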
The researchers decided to use the day "A" wire, for reasons of scope of coverage as well as accessibility, and the CBS nightly news (the 7:00 p.m. Eastern time edition), primarily for financial reasons and convenience.

Fourth, they had to decide which of the material from these news shows and wire copy to include. They decided to include only campaign or campaign-related stories. Thus they used any story that "mentioned the presidential campaign, no matter how tangentially; mentioned any presidential candidate in his campaign role; mentioned any presidential candidate or his immediate family in a noncampaign, official role (almost always a story about the president); or discussed to a substantial degree any campaign lower than the presidential level."20 Just over 5,500 stories on UPI and CBS—22 percent of UPI and CBS total news coverage—met these selection criteria.

Fifth, Robinson and Sheehan had to decide what time period to include in their study. Although a presidential campaign has a fairly clear ending point, election day, the beginning date of the campaign is uncertain. The researchers decided to include weekday coverage throughout 1980 (that is, from January 1 to December 31). They gave no justification for excluding the weekend news.

Finally, Robinson and Sheehan made an important decision to exclude some of the content of both CBS's and UPI's news coverage. They decided not to include any photographs, film, videotape, or live pictures and to rely exclusively on verbal (CBS) or written (UPI) expression. They defended this decision on the grounds that it is more difficult to interpret the meaning of visuals and that the visual message usually supports the verbal message. Moreover, they thought that comparing the visual component of CBS with that of UPI would be difficult.

Having selected the news content to be analyzed, Robinson and Sheehan then decided on the unit of analysis to use when coding news content. Generally, they analyzed the story, although at times they analyzed the content sentence by sentence and word by word. Most content analyses of this type have also used the story as the unit of analysis, but it is unfortunate that Robinson and Sheehan did not explain this choice in any detail or discuss how difficult it was to tell where one story ended and another began.

Having settled on the unit of analysis, Robinson and Sheehan then had to decide the content categories to be encoded and the definitions of the values for these content categories. They coded some twenty-five different aspects of each 1980 campaign story. Some of these were straightforward, such as the story's date, length, and reporter. Other categories that pertained to the central subject matter of the study were not as readily defined or measured. The researchers were primarily concerned with five characteristics: Were CBS and UPI (1) objective, (2) equitable in providing access, (3) fair, (4) serious, and (5) comprehensive? Consequently, they needed to decide how to measure each of these attributes of news coverage.
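A coding scheme like theirs can be thought of as a record with one field per content category. The sketch below is a hypothetical Python rendering, not Robinson and Sheehan's actual instrument; the field names and values are ours, chosen to mirror categories mentioned in the text.

```python
from dataclasses import dataclass

# A hypothetical coding sheet for a single campaign story. Robinson and
# Sheehan coded some twenty-five aspects per story; only a few of the
# categories mentioned in the text are sketched here, with invented values.
@dataclass
class StoryCode:
    date: str             # straightforward categories: date, length, reporter
    length: int           # seconds (CBS) or column inches (UPI)
    reporter: str
    outlet: str           # "CBS" or "UPI"
    tone: str             # "good press", "bad press", or "in between"
    story_type: str       # "policy issue", "candidate issue", or "horse race"
    level_of_office: str  # "presidential", "senatorial", and so on

example = StoryCode(
    date="1980-10-15", length=120, reporter="(name)", outlet="CBS",
    tone="in between", story_type="horse race", level_of_office="presidential",
)
print(example)
```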
Robinson and Sheehan measured the objectivity of the press's coverage in four ways: by the number of explicit and unsupported conclusions drawn by journalists about the personal qualities of the candidates; by the number of times the journalists expressed personal opinions concerning the issues of the campaign; by counting the number of sentences that were either descriptive, analytical, or judgmental; and by counting the number of verbs used by journalists that were either descriptive, analytical, or insinuative.

Clearly, each of these content categories involved judgments by the researchers concerning what constituted an explicit and unsupported conclusion, what constituted a personal opinion, and what constituted a descriptive versus an analytical sentence. The researchers provided examples of different types of coded content, and they also gave some brief definitions of what each of the categories meant to them: for example, descriptive sentences "present the who, what, where, when of the day's news, without any meaningful qualification or elaboration," analytical sentences "tell us why something occurs or predicts as to whether it might," and judgmental sentences "tell us how something ought to be or ought not to be."21

To determine whether the press granted appropriate access to each of the presidential candidates, Robinson and Sheehan measured how much coverage (in seconds for CBS and in column inches for UPI) each of the candidates received. They did not say whether this coding procedure presented any difficulties, although they did evaluate whether the amount of access granted each candidate was justified.

Determining whether press coverage was fair was much more difficult than measuring access, since an evaluation of fairness requires that the tone of campaign coverage be measured. Establishing tone in a reliable and valid way is not easy. Robinson and Sheehan defined tone and fairness in these terms:

Tone pertains not simply to the explicit message offered by the journalist but the implicit message as well. Tone involves the overall (and admittedly subjective) assessment we made about each story: whether the story was, for the major candidates, "good press," "bad press," or something in between. "Fairness," as we define it, involves the sum total of a candidate's press tone; how far from neutrality the candidate's press score lies.22

They evaluated content by whether it represented good press (a story that had three times as much positive information as negative information about a candidate) or bad press (a story that had three times as much negative as positive information). But they never discussed how they determined what constituted positive and negative information. Furthermore, in their effort to restrict their analysis to the behavior of journalists, Robinson and Sheehan excluded information about political events (such as the failure of the Iranian hostage rescue mission), polls, comments made by partisans, remarks of "criminals and anti-Americans" (such as Fidel Castro and the Ayatollah Khomeini), and statements made by the candidates themselves.23 In short, their measurement of fairness depended on the wisdom of their decisions regarding the encoding of campaign stories. Some of these decisions are questionable, such as using an arbitrary three-to-one ratio to determine good press or bad press and excluding political events, polls, and the words of the candidates themselves from the analysis.
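The three-to-one rule itself is mechanical enough to state in code. The sketch below assumes a coder has already tallied positive and negative items for a story, which is the hard, judgment-laden step the text criticizes; the function name is ours, not the authors'.

```python
def classify_tone(positive: int, negative: int, ratio: float = 3.0) -> str:
    """Apply the three-to-one rule described in the text: 'good press'
    if positive information outweighs negative by 3:1, 'bad press' if
    the reverse holds, and 'in between' otherwise."""
    if negative == 0 and positive > 0:
        return "good press"
    if positive == 0 and negative > 0:
        return "bad press"
    if negative > 0 and positive / negative >= ratio:
        return "good press"
    if positive > 0 and negative / positive >= ratio:
        return "bad press"
    return "in between"

print(classify_tone(6, 2))  # good press: 6 positive vs. 2 negative meets 3:1
print(classify_tone(2, 5))  # in between: neither side reaches 3:1
print(classify_tone(1, 4))  # bad press
```

As the text notes, everything turns on the prior step of deciding what counts as positive and negative information; the arithmetic is the easy part.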
The seriousness of press coverage was measured by coding each story and, at times, each sentence, according to whether it represented policy issues, candidate issues, "horse-race coverage," or something else. Policy issues were ones that "involve major questions as to how the government should (or should not) proceed in some area of social life"; candidate issues "concern the personal behavior of the candidate during the course of his or her campaign"; and horse-race coverage focuses on "any consideration as to winning or losing." Because of the difficulty of encoding entire stories into only one of these categories, the researchers shifted to the more exacting sentence-by-sentence analysis. Some sentences did not fit into one and only one of these categories, but "the majority of sentences were fairly easy to classify as one form of news or another."24 The seriousness of UPI and CBS campaign coverage in 1980 was then measured by comparing their amount of policy issue coverage with the policy issue coverage in other media in previous presidential election years.

Finally, to evaluate how comprehensive press coverage was in 1980, Robinson and Sheehan coded campaign stories as to the level of office covered: presidential, vice presidential, senatorial, congressional, and gubernatorial. More than 90 percent of both CBS and UPI campaign coverage was of the presidential and vice presidential races.25

Over the Wire and on TV represents one of the most thorough content analyses ever performed by political scientists. Certainly, in regard to the time period covered and the sheer quantity of material analyzed, it was an ambitious study. The value of the study was weakened, however, by the inadequate explanation of the content analysis procedures. The definitions of the categories used were brief and the illustrative material sketchy. Furthermore, Robinson and Sheehan dispensed with the issue of measurement quality in only one paragraph, where they reported that intercoder reliability figures among four members of the coding team averaged about 95 percent agreement.26 However, they failed to report any details about how this reliability was measured or about the agreement scores for different content categories.

Despite these shortcomings, this study exemplifies how content analysis can reveal useful information about a significant political phenomenon. It also illustrates how practical limitations—such as AP's refusal to participate, as well as financial constraints—all too often limit what researchers can actually accomplish.
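The 95 percent figure is an intercoder agreement rate, which is straightforward to compute once two coders have independently coded the same items. The sketch below is a generic illustration of percent agreement, not the authors' procedure; the sentence codes are invented.

```python
def percent_agreement(coder_a, coder_b):
    """Share of items on which two coders assigned the same code."""
    if len(coder_a) != len(coder_b):
        raise ValueError("coders must code the same items")
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return 100 * matches / len(coder_a)

# Two coders classifying the same ten sentences (invented data).
a = ["policy", "horse race", "policy", "candidate", "horse race",
     "policy", "horse race", "horse race", "candidate", "policy"]
b = ["policy", "horse race", "policy", "candidate", "horse race",
     "policy", "horse race", "candidate", "candidate", "policy"]
print(f"{percent_agreement(a, b):.0f}% agreement")  # 90% agreement
```

Reporting this figure separately for each content category, as the text recommends, requires only grouping the items by category first. Simple percent agreement also tends to be inflated when one code dominates, which is another reason category-level detail matters.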
Advantages and Disadvantages of the Written Record

Using documents and records, or what we have called the written record, has several advantages for researchers. First, it allows us access to subjects that may be difficult or impossible to research through direct, personal contact, because they pertain either to the past or to phenomena that are geographically distant. For example, the record keeping of the Puritans in the Massachusetts Bay Colony during the seventeenth century allowed Erikson to study their approach to crime control, and late eighteenth-century records permitted Beard to advance and test a novel interpretation of the framing of the U.S. Constitution. Neither of these studies would have been possible had there been no records available from these periods.

A second advantage of data gleaned from archival sources is that the raw data are usually nonreactive. As we mentioned in previous chapters, human subjects often consciously or unconsciously establish expectations or other relationships with investigators, which can influence their behavior in ways that might confound the results of a study. But those writing and preserving the records are frequently unaware of any future research goal or hypothesis or, for that matter, that the fruits of their labors will be used for research purposes at all. The record keepers of the Massachusetts Bay Colony were surely unaware that their records would ever be used to study how a society defines and reacts to deviant behavior. Similarly, state loan officers during the late 1700s had no idea that two hundred years later a historian would use their records to discover why some people were in favor of revising the Articles of Confederation. This nonreactivity has the virtue of encouraging more accurate and less self-serving measures of political phenomena.

Record keeping is not always completely nonreactive, however. Record keepers are less likely to create and preserve records that are embarrassing to them, their friends, or their bosses; that reveal illegal or immoral actions; or that disclose stupidity, greed, or other unappealing attributes. Richard Nixon, for example, undoubtedly wished that he had destroyed or never made the infamous Watergate tapes that revealed the extent of his administration's knowledge of the 1972 break-in at Democratic National Committee headquarters. Today many record-keeping agencies employ paper shredders to ensure that a portion of the written record does not endure. Researchers must be aware of the possibility that the written record has been selectively preserved to serve the record keepers' own interests.

A third advantage of using the written record is that sometimes the record has existed long enough to permit analyses of political phenomena over time. The before-and-after research designs discussed in Chapter 5 may then be used. For example, suppose you are interested in how changes in the 55-mile-per-hour speed limit (gradually adopted by the states and then later dropped by many states on large stretches of their highway systems) affected the rate of traffic accidents. Assuming that the written record contains data on the incidence of traffic accidents over time in each state, you could compare the accident rate before and after changes in the speed limit in those states that changed their speed limit. These changes in the accident rate could then be compared with the changes occurring in states in which no change in the speed limit took place. The rate changes could then be "corrected" for other factors that might affect the rate of traffic accidents. In this way an interrupted time series research design could be used, a research design that has some important advantages over cross-sectional designs. Because of the importance of time, and of changes in phenomena over time, for the acquisition of causal knowledge, a data source that supports longitudinal analyses is a valuable one. The written record more readily permits longitudinal analyses than do either interview data or direct observation.
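The logic of the speed-limit example reduces to a few lines of arithmetic. The figures below are invented purely for illustration; the calculation nets the before-and-after change in states that changed their limit against the change in states that did not, one simple version of the "correction" the text describes.

```python
# Hypothetical accident rates (per 100 million vehicle miles), invented
# for illustration only.
changed_states = {"before": 3.2, "after": 2.5}    # adopted the 55-mph limit
unchanged_states = {"before": 3.0, "after": 2.8}  # kept their old limit

# Before-and-after change in each group of states.
change_treated = changed_states["after"] - changed_states["before"]     # -0.7
change_control = unchanged_states["after"] - unchanged_states["before"] # -0.2

# Netting out the trend in the comparison states adjusts for factors
# affecting all states alike, such as fuel prices or safer cars.
effect = change_treated - change_control
print(f"Estimated effect of the speed limit change: {effect:+.1f}")  # -0.5
```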
A fourth advantage to researchers of using the written record is that it often enables us to increase sample size above what would be possible through either interviews or direct observation. For example, it would be terribly expensive and time consuming to observe the level of spending by all candidates for the House of Representatives in any given year. Interviewing candidates would require a lot of travel, long-distance phone calls, or the design of a questionnaire to secure the necessary information. Direct observation would require gaining access to many campaigns. How much easier and less expensive it is to contact the Federal Election Commission in Washington, D.C., and request the printout of campaign spending for all House candidates. Without this written record, resources might permit only the inclusion of a handful of campaigns in a study; with the written record, all 435 campaigns can easily be included.

This raises the fifth main advantage of using the written record: cost. Since the cost of creating, organizing, and preserving the written record is borne by the record keepers, researchers are able to conduct research projects on a much smaller budget than would be the case if they had to bear the cost themselves. In fact, one of the major beneficiaries of the record-keeping activities of the federal government and of news organizations is the research community. It would cost a prohibitive amount for a researcher to measure the amount of crime in all cities larger than 25,000 or to collect the voting returns in all 435 congressional districts. Both pieces of information are available at little or no cost, however, because of the record-keeping activities of the FBI and the Elections Research Center, respectively. Similarly, using the written record often saves a researcher considerable time. It is usually much quicker to consult printed government documents, reference materials, computerized data, and research institute reports than it is to accumulate data ourselves. The written record is a veritable treasure trove for researchers.

Collecting data in this manner, however, is not without some disadvantages. One problem mentioned earlier is selective survival. For a variety of reasons, record keepers may not preserve all pertinent materials but rather selectively save those that are the least embarrassing, controversial, or problematic. It would be surprising, for example, if political candidates, campaign consultants, and public officials saved correspondence and memoranda that cast disfavor on themselves. Obviously, whenever a person is selectively preserving portions of the written record, the accuracy of what remains is suspect. This is less of a problem when the connection between the record keeper's self-interest and the subject being examined by the researcher is minimal.

A second, related disadvantage of the written record is its incompleteness. Large gaps exist in many archives due to fires, losses of other types, personnel shortages that hinder record-keeping activities, and the failure of the record maker or record keeper to regard a record as worthy of preservation. We all throw out personal records every day; political entities do the same. It is difficult to know what kinds of records should be preserved, and it is often impossible for record keepers to bear the costs of maintaining and storing voluminous amounts of material. Another reason records may be incomplete is simply that no person or organization has assumed the responsibility for collecting or preserving them. For example, before 1930, national crime statistics were not collected by the FBI, and before the creation of the Federal Election Commission in 1971, records on campaign expenditures by candidates for the U.S. Congress were spotty and inaccurate.
A third disadvantage of the written record is that its content may be biased. Not only may the record be incomplete or selectively preserved, but it also may be inaccurate or falsified, either inadvertently or on purpose. Memoranda or copies of letters that were never sent may be filed, events may be conveniently forgotten or misrepresented, the authorship of documents may be disguised, and the dates of written records may be altered; furthermore, the content of government reports may tell more about political interests than empirical facts. For example, Soviet and East European governments apparently released exaggerated reports of their economic performance for many years, and scholars (and investigators) attempting to reconstruct the actions in the Watergate episode have been hampered by alterations of the record by those worried about the legality of their role in it. Often, historical interpretations rest upon who said or did what, and when. To the extent that falsifications of the written record lead to erroneous conclusions, the problem of record-keeping accuracy can bias the results of a research project. The main safeguard against bias is the one used by responsible journalists: confirming important pieces of information through several dissimilar sources.

A fourth disadvantage is that some written records are unavailable to researchers. Documents may be classified by the federal government; they may be sealed (that is, not made public) until a legal action has ceased or the political actors involved have passed away; or they may be stored in such a way that they are difficult to use. Other written records—such as the memoranda of multinational corporations, campaign consultants, and Supreme Court justices—are seldom made public because there is no legal obligation to do so and the authors benefit from keeping them private.

Finally, the written record may lack a standard format because it is kept by different people. For example, the Chicago budget office may have budget categories for public expenditures different from those used in the San Francisco budget office. Or budget categories used in the Chicago budget office before 1960 may be different from the ones used after 1960. Or the French may include items in their published military defense expenditures that differ from those included by the Chileans in their published reports. Consequently, a researcher often must expend considerable effort to ensure that the formats in which records are kept by different record-keeping entities can be made comparable, as the sketch at the end of this section illustrates.

Despite these limitations, political scientists have generally found that the advantages of using the written record outweigh the disadvantages. The written record often supplements the data we collect through interviews and direct observation, and in many cases it is the only source of data on historical and cross-cultural political phenomena.
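One common remedy for the standard-format problem is a crosswalk: a table that maps each record keeper's categories onto a single common scheme before any comparison is made. The sketch below is a hypothetical Python illustration; the category names and figures are invented, not drawn from any actual city budget.

```python
# Hypothetical crosswalk mapping two cities' budget categories onto a
# common scheme. All names and amounts are invented for illustration.
CROSSWALK = {
    "chicago": {
        "Streets & Sanitation": "public works",
        "Police Department": "public safety",
        "Fire Department": "public safety",
    },
    "san_francisco": {
        "Public Works": "public works",
        "Police": "public safety",
        "Fire": "public safety",
    },
}

def harmonize(city, budget):
    """Re-total a city's spending under the common categories."""
    common = {}
    for category, amount in budget.items():
        key = CROSSWALK[city][category]  # raises KeyError if unmapped
        common[key] = common.get(key, 0) + amount
    return common

chicago = {"Streets & Sanitation": 40, "Police Department": 55,
           "Fire Department": 25}
print(harmonize("chicago", chicago))
# {'public works': 40, 'public safety': 80}
```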
Conclusion

The written record includes personal records, archival collections, organizational statistics, and the products of the news media. Researchers interested in historical research, or in a particular event or time in the life of a polity, generally use the episodic record. Gaining access to the appropriate material is often the most resource-consuming aspect of this method of data collection, and the hypothesis testing that results is usually more qualitative and less rigorous (some would say more flexible) than with the running record.

The running record of organizations has become a rich source of political data as a result of the record-keeping activities of governments at all levels and of interest groups and research institutes concerned with public affairs. The running record is generally more quantitative than the episodic record and may be used to conduct longitudinal research. Measurements using the running record can often be obtained inexpensively, although the researcher frequently relinquishes considerable control over the data collection enterprise in exchange for this economy.

One of the ways in which a voluminous, nonnumerical written record may be turned into numerical measures and then used to test hypotheses is through a procedure called content analysis. Content analysis is most frequently used by political scientists interested in studying media content, but it has been used to advantage in studies of political speeches, statutes, and judicial decisions.

Through the written record, researchers may observe political phenomena that are geographically, physically, and temporally distant from them. Without such records, our ability to record and measure historical phenomena, cross-cultural phenomena, and political behavior that does not occur in public would be seriously hampered.

Notes

1. Frank Way and Barbara J. Burt, "Religious Marginality and the Free Exercise Clause," American Political Science Review 77 (September 1983): 652-665; Samuel C. Patterson and Gregory A. Caldeira, "Getting Out the Vote: Participation in Gubernatorial Elections," American Political Science Review 77 (September 1983): 675-689; William Zimmerman and Glenn Palmer, "Words and Deeds in Soviet Foreign Policy: The Case of Soviet Military Expenditures," American Political Science Review 77 (June 1983): 358-367; and Alan P. L. Liu, "The Politics of Corruption in the People's Republic of China," American Political Science Review 77 (September 1983): 602-623.

2. Steven C. Poe and C. Neal Tate, "Repression of Human Rights to Personal Integrity in the 1980s: A Global Analysis," American Political Science Review 88 (December 1994): 853-872; Jeff Yates and Andrew Whitford, "Presidential Power and the United States Supreme Court," Political Research Quarterly 51 (June 1998): 539-550; Jeffrey A. Segal and Albert D. Cover, "Ideological Values and the Votes of U.S. Supreme Court Justices," American Political Science Review 83 (June 1989): 557-565; and Kim Fridkin Kahn and Patrick J. Kenney, "Do Negative Campaigns Mobilize or Suppress Turnout? Clarifying the Relationship between Negativity and Participation," American Political Science Review 93 (December 1999): 877-890.

3. Charles Beard reports that he was able to use some records in the U.S. Treasury Department in Washington "only after a vacuum cleaner had been brought in to excavate the ruins." See Charles Beard, An Economic Interpretation of the Constitution of the United States (London: Macmillan, 1913), 22.

4. Kai T. Erikson, The Wayward Puritans (New York: Wiley, 1966).

5. The records of the governor were edited by Nathaniel B. Shurtleff and printed by order of the Massachusetts legislature in 1853-1854; the records of the courts were edited by George Francis Dow and published by the Essex Institute in Salem, Massachusetts.

6. Erikson, The Wayward Puritans, 209-210.

7. Beard, An Economic Interpretation.

8. Ibid., 324. Beard's interpretation has been challenged by several historians. Among his critics are Robert E. Brown, Charles Beard and the Constitution (Princeton: Princeton University Press, 1956); Forrest McDonald, We the People: The Economic Origins of the Constitution (Chicago: University of Chicago Press, 1958); and Gordon Wood, The Creation of the American Republic (New York: Norton, 1972). Although Beard's interpretation continues to be controversial, the authors of one mainstream political science textbook state, "[A]lthough historical evidence does not fully support Beard's conclusions, most historians acknowledge that economic interests were very much at issue in the framing and ratification of the Constitution." Lewis Lipsitz and David M. Speak, American Democracy, 2d ed. (New York: St. Martin's, 1989), 76.
9. James David Barber, The Presidential Character, 3d ed. (Englewood Cliffs, N.J.: Prentice-Hall, 1985), 4, 5.

10. James David Barber, The Presidential Character, 1st ed. (Englewood Cliffs, N.J.: Prentice-Hall, 1972), ix.

11. A critique of Barber's analysis may be found in Garry Wills, The Kennedy Imprisonment (Boston: Little, Brown, 1982).

12. Emily Van Dunk, "Getting Data through the Back Door: Techniques for Gathering Data from State Agencies," State Politics and Policy Quarterly 1 (Summer 2001): 210-218.

13. Time Magazine, "50 Best Websites 2007." Retrieved from www.time.com/time/specials/2007/article/0,28804,1633488_1639316,00.html.

14. PollingReport.com, "Bush: Job Ratings." Retrieved from www.pollingreport.com/BushJob1.htm.

15. Kenneth D. Bailey, Methods of Social Research, 2d ed. (New York: Free Press, 1982), 312-313.

16. Segal and Cover, "Ideological Values."

17. Bailey, Methods of Social Research, 319.

18. Michael J. Robinson and Margaret A. Sheehan, Over the Wire and on TV (New York: Russell Sage Foundation, 1983); on their survey decisions, discussed in the following paragraphs, see pp. 17-27.

19. On the 1976 campaign, see Thomas Patterson, The Mass Media Election: How Americans Choose Their President (New York: Praeger, 1980). On the 1974 congressional elections, see Arthur Miller, Edie Goldenberg, and Lutz Erbring, "Type-Set Politics: Impact of Newspapers on Public Confidence," American Political Science Review 73 (March 1979): 67-84.

20. Robinson and Sheehan, Over the Wire, 20.

21. Ibid., 49-50.

22. Ibid., 92.

23. Ibid., 94-95.

24. Ibid., 144, 145, 155.

25. Ibid., 173.

26. Ibid., 22.

Terms Introduced

Content analysis. A procedure by which verbal, nonquantitative records are transformed into quantitative data.

Episodic record. The portion of the written record that is not part of a regular, ongoing record-keeping enterprise.

Intercoder reliability. Demonstration that multiple analysts, following the same content analysis procedure, agree and obtain the same measurements.

Running record. The portion of the written record that is enduring and covers an extensive period of time.

Written record. Documents, reports, statistics, manuscripts, and other recorded materials available and useful for empirical research.

Suggested Readings

Hovey, Kendra A., and Harold A. Hovey. CQ's State Fact Finder 2004. Washington, D.C.: CQ Press, 2004.

Miller, Delbert C. Handbook of Research Design and Social Measurement. Thousand Oaks, Calif.: Sage Publications, 2002.

Stanley, Harold W., and Richard K. Niemi. Vital Statistics on American Politics: 2003-2004. Washington, D.C.: CQ Press, 2003.

Van Dunk, Emily. "Getting Data through the Back Door: Techniques for Gathering Data from State Agencies." State Politics and Policy Quarterly 1 (Summer 2001): 210-218.
Webb, Eugene J., and others. Unobtrusive Measures. Rev. ed. Thousand Oaks, Calif.: Sage Publications, 1999.