Conducting and Coding Elite Interviews

Joel D. Aberbach, UCLA, and Bert A. Rockman, Ohio State University

Source: PS: Political Science and Politics, Vol. 35, No. 4 (Dec. 2002), pp. 673-676. Published by the American Political Science Association. Stable URL: http://www.jstor.org/stable/155480~

Introduction

In real estate the maxim for picking a piece of property is "location, location, location." In elite interviewing, as in social science generally, the maxim for the best way to design and conduct a study is "purpose, purpose, purpose." It's elementary that the primary question one must ask before designing a study is, "What do I want to learn?" Appropriate methods flow from the answer. Interviewing is often important if one needs to know what a set of people think, or how they interpret an event or series of events, or what they have done or are planning to do. (Interviews are not always necessary. Written records, for example, may be more than adequate.)
In a case study, respondents are selected on the basis of what they might know to help the investigator fill in pieces of a puzzle or confirm the proper alignment of pieces already in place. If one aims to make inferences about a larger population, then one must draw a systematic sample. For some kinds of information, highly structured interviews using mainly or exclusively close-ended questions may be an excellent way to proceed. If one needs to probe for information and to give respondents maximum flexibility in structuring their responses, then open-ended questions are the way to go. In short, elite studies will vary a lot depending on what one wants to learn, and elite interviewing must be tailored to the purposes of the study. Our focus here will be on the types of studies we have conducted as reported in Bureaucrats and Politicians in Western Democracies (coauthored with Robert D. Putnam, 1981) and In the Web of Politics (2000)—studies of elite attitudes, values, and beliefs—but from time to time we will make reference to other types of studies as well.

Designing the Study

Our goals were to examine the political thinking of American administrators and (in the first round of our study) members of Congress. We were interested in their political attitudes, values, and beliefs, not in particular events or individuals. A major aim was to examine important parameters that guide elites' definitions of problems and their responses to them. We wanted to generalize about these phenomena in the population of top administrators, both political appointees and high-level civil servants, and among high-level elected officials as well. This meant that we had to draw representative samples of members of these elites and use an interviewing technique that would enable us to gauge subtle aspects of elite views of the world. Drawing a sample of members of Congress was quite straightforward.
Lists of members are easily accessible and drawing them at random (stratified into two broad age groups in our case) was a simple process. Sampling high-level administrators was quite another matter. We wanted to study officials who worked for federal agencies primarily concerned with domestic policy (a requirement for a comparative aspect of the study), who were at a level where they might have a say in policymaking, and who, for convenience' sake, worked in the general vicinity of Washington, DC. Further, we wanted to make sure that we covered both political appointees and career civil servants. To accomplish these goals, we had to compile lists of top administrators in each agency, determine who held top career positions in each hierarchy (our criterion was that civil servants had to hold the top career positions in their administrative units and report to a political appointee), and then sample in such a way that we had representative groups of career and noncareer executives. We eventually drew people randomly from cabinet departments, regulatory agencies, executive agencies, and independent agencies in proportion to the number of executives in each sampling classification within each agency. The good news is that bureaucratic elites are little studied by political scientists, so response rates were very high (over 90% for career civil servants). They were lower for members of Congress—in the high seventies in the first round of our study (1970-71) and lower when we tried a second round in 1986-87, so low in fact that we did not feel it appropriate to use the second-round congressional interviews for anything but illustration. This points to an important problem facing those wishing to interview elites. One must get access, and it can be quite difficult to secure interviews with busy officials who are widely sought after. 
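A draw like the one just described, proportional and stratified by agency and by career/noncareer classification, can be sketched in a few lines. This is a minimal illustration, not the authors' actual procedure: the agency names, counts, and function names below are all invented.

```python
import random

# Hypothetical sampling frame: executives grouped by (agency, classification).
# All names and counts here are invented for illustration.
frame = {
    ("Cabinet Dept A", "career"):        ["c1", "c2", "c3", "c4"],
    ("Cabinet Dept A", "noncareer"):     ["n1", "n2"],
    ("Regulatory Agency B", "career"):   ["c5", "c6", "c7"],
    ("Regulatory Agency B", "noncareer"): ["n3"],
}

def proportional_sample(frame, total_n, seed=1):
    """Draw respondents at random so each stratum's share of the sample
    is proportional to its share of the executive population."""
    rng = random.Random(seed)
    population = sum(len(members) for members in frame.values())
    sample = []
    for stratum, members in sorted(frame.items()):
        quota = round(total_n * len(members) / population)
        sample.extend((stratum, person)
                      for person in rng.sample(members, min(quota, len(members))))
    return sample

drawn = proportional_sample(frame, total_n=5)
```

One caveat of this naive sketch: per-stratum rounding means quotas need not sum exactly to `total_n` for every frame; a largest-remainder allocation would guarantee that.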
It helps to have the imprimatur of a major and respected research house like the Brookings Institution, and it is important to be politely persistent. One should not be too put off when told, on calling (after writing a letter), that the potential respondent is too busy; call back and try again to arrange the interview. One should write a letter laying out the general purpose of the study and be ready to repeat the "spiel" over the phone to the appointments secretary. Mention prestigious organizational sponsors if you have them and mention some past experience in studying the area of interest if you have it. It sometimes helps to mention what you've written, but do not expect respondents or those who schedule them to be impressed that you have published in APSR or its equivalents. They are more attuned to other types of journals (like National Journal) or to the press. Getting in the door is important, but what you do next is even more important. We'll touch only briefly on the suggestion that you refrain from spilling the coffee that may be offered to you or that you look reasonably presentable to the rather conservative dressers in Washington, and get to the heart of the matter—what you ask the respondents and how you ask them. What you ask is, of course, a function of what you want to know, but so also is how you ask the questions. As noted, we wanted to examine the political thinking of American political and bureaucratic elites. We wanted to know about their political attitudes, values, and beliefs. We were not trying to predict discrete behavior, for example, their choices of particular policies; rather, we were interested in examining the parameters that guided their definition of problems and their responses to them.
To accomplish our goals, we decided on an approach using mainly open-ended questions that allowed the respondents to engage in wide-ranging discussions. One of our main aims was to get at the contextual nuance of response and to probe beneath the surface of a response to the reasoning and premises that underlie it. Consequently, we decided on a semi-structured interview in which the open-ended questions we mainly relied on gave the respondents latitude to articulate fully their responses. This requires great attention from the interviewer since such an interview has a more conversational quality to it than the typical highly structured interview and questions may, therefore, be more easily broached in a manner that does not follow the exact order of the original interview instrument. There is an obvious cost here in terms of textbook advice on interviews—respondents may not necessarily have been asked questions in the same order—but in our experience the advantages of conversational flow and depth of response outweigh the disadvantages of inconsistent ordering. That suggests a key principle of real-world research—sometimes one does something that is not the ideal (in this case, vary the order of questions) because the less than ideal approach is better than the alternative (in this case, a clumsy flow of conversation that will inhibit in-depth ruminations on the issues of interest). There are three major considerations in deciding on a mainly open-ended approach rather than one using more close-ended questions. One is the degree of prior research on the subject of concern. The more that is known, the easier it is to define the questions and the response options with clarity, that is, to use close-ended questions. Our study explored a series of rather abstract and complex issues in a relatively uncharted area at the time, the styles of thinking as well as the actual views of American political and bureaucratic elites. 
Emphasizing close-ended questions and tight structuring would not have served our major purpose, the exploration of elite value patterns and perceptions, but we did recognize the cost—the kinds of data we collected made it more difficult to produce an analytically elegant end product, at least if one uses statistical elegance as the major criterion. A second consideration leading us to use an open-ended approach was our desire to maximize response validity. Open-ended questions provide a greater opportunity for respondents to organize their answers within their own frameworks. This increases the validity of the responses and is best for the kind of exploratory and in-depth work we were doing, but it makes coding and then analysis more difficult. The third major consideration is the receptivity of respondents. Elites especially—but other highly educated people as well—do not like being put in the straitjacket of close-ended questions. They prefer to articulate their views, explaining why they think what they think. Close-ended questions (and we did use some) often elicited questions in return about why we used the response categories we used or why we framed the questions the way we did. Parenthetically, in later rounds of our longitudinal study, we used close-ended versions of some of our earlier open-ended questions, but by then we had the benefit of great experience in ascertaining both the mind-sets of our respondents and the range of responses they would find tolerable. We should stress again that there are costs in using open-ended questions. First, there are substantial costs in time spent in doing the interviews themselves, in transcribing them or otherwise preparing them for coding, and in the coding process itself (see below). Second, there are the related costs in money. The process is slow and the costs mount in direct relation to the time spent.
Third, as mentioned above, there are costs in analytic rigor, certainly in terms of limits on what one can do in data analysis. But, going back to the maxim on purpose, answering the research questions one starts with in the most reliable way is more valuable than an analytically rigorous treatment of less reliable and informative data.

Conducting the Interviews

We noted earlier some of the practical considerations in getting an interview. They include such things as writing a brief letter to respondents on the most prestigious, non-inflammatory letterhead you have access to, stating your purpose in a few well-chosen sentences (no need to be too precise or certainly overly detailed); having a good "spiel" prepared for the appointments secretary and later for the respondent prior to the interview; and fending off questions about your hypotheses until after the interview is over. That prevents contamination of the respondents and also puts this part of the conversation on the respondent's "time" and not the time reserved for the interview. Obvious advice includes the need to be persistent and to insist firmly, but politely (and with a convincing explanation) that no one but the person sampled, i.e., the principal, will do for the interview. It can be a major undertaking in time and effort to secure the interview, but success there is only the beginning. We did most of our interviews in the respondents' offices, but you should be prepared to do them where you can. Our most harrowing experience was interviewing an administrator as he drove to an appointment. He was very animated (and in a hurry) and nearly got himself and us killed as he weaved
through Washington traffic late in the afternoon while presenting his views in a passionate and often amusing style. We tape-recorded the interviews in order to facilitate use of a conversational style and to minimize information loss. Few respondents refused to be taped, and almost all quickly lost any inhibitions the recorder might have induced. Starting with innocuous questions about the person's background facilitates this since people find talking about themselves about as fascinating as any subject they know. Our judgment (and the judgment of our coders) is that the interviewees were frank in their answers, especially because our questions focused on general views and not information that might jeopardize the respondents' personal interests.

Coding Open-Ended Interviews

Coding procedures assume paramount importance when, as in our studies, one employs open-ended interviewing techniques to elicit subtle and rich responses and then uses this information in quantitative analyses. Particularly in elite interviewing, where responses to questions are almost always coherent and well formulated, respondents can productively and effectively answer questions in their own ways and the analyst can then build a coding system that maintains the richness of individual responses but is sufficiently structured that the interviews can be analyzed using quantitative techniques. The wealth of material contained in the responses, in fact, allows a varied set of codes, some recording manifest responses to the questions asked and some probing deeper into the meaning of the responses. We developed three basic types of codes to achieve the purposes of our study (Aberbach, Chesney, and Rockman 1975, 14-16). Manifest coding items involved direct responses to particular questions (for example, whether differences between the parties are great, moderate, or few).
Latent coding items were those where the characteristics of the response coded were not explicitly called for by the questions themselves (for example, we coded variables dealing with positive and negative references towards the role of conflict from questions about the nature of conflict in American society and the degree to which it can be reconciled). Global coding items were those where coders formed judgments from the interview transcripts about general traits and styles (for example, coding whether respondents employed a coherent political framework in responding to political questions). In the first round of the study we had two sets of coders independently code each interview and calculated inter-coder reliability coefficients for the various variables. Not surprisingly, on average, the manifest items were the most reliable, followed by the latent items, and then the global items. We increased reliability further (we hope) by having a study director reconcile any disagreements among the coders in conferences with the coders. Our experience with coding taught us that simultaneous coding by the two coders with immediate reconciliation yielded much more reliable coding than serial coding where large numbers of interviews were coded prior to reconciliation meetings.

Some Problems and Advantages in Doing a Longitudinal Elite Study

We encountered a series of problems in doing a longitudinal study, most of which impacted both the interviews themselves and the coding. First, elite systems do not necessarily remain stable over time. This is particularly likely in the bureaucracy, which is actually a much more dynamic institution than stereotypes might lead one to believe.
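As an aside on the inter-coder reliability coefficients mentioned earlier: the article does not specify which coefficient was computed, so the sketch below assumes two common choices, raw percent agreement and Cohen's kappa (a chance-corrected measure), applied to one coded variable. The coder data are invented.

```python
from collections import Counter

def percent_agreement(codes_a, codes_b):
    """Share of interviews on which the two coders assigned the same code."""
    assert len(codes_a) == len(codes_b)
    return sum(a == b for a, b in zip(codes_a, codes_b)) / len(codes_a)

def cohens_kappa(codes_a, codes_b):
    """Agreement corrected for the agreement expected by chance alone."""
    n = len(codes_a)
    observed = percent_agreement(codes_a, codes_b)
    freq_a, freq_b = Counter(codes_a), Counter(codes_b)
    # Chance agreement: product of each coder's marginal code frequencies.
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Two coders' codes for a manifest item ("differences between the parties")
# across eight hypothetical interviews.
coder_1 = ["great", "great", "moderate", "few", "great", "moderate", "few", "great"]
coder_2 = ["great", "moderate", "moderate", "few", "great", "great", "few", "great"]

agreement = percent_agreement(coder_1, coder_2)   # 0.75
kappa = cohens_kappa(coder_1, coder_2)            # 0.6
```

Kappa runs from about 0 (chance-level agreement) to 1 (perfect agreement), so the authors' finding that manifest items were most reliable would appear as higher coefficients for those variables than for latent or global ones.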
Aside from reorganizations and the creation of new administrative units, which were easily dealt with when we constructed each successive sampling frame, the Civil Service Reform Act of 1978 created the Senior Executive Service (SES) to replace the system of Supergrades that existed prior to the act. SES created a rank-in-the-person system in place of a rank-in-the-position system and made us reexamine our earlier criterion of interviewing the highest civil servant in each hierarchy. We eventually decided to continue sampling the highest civil servant in each hierarchy for purposes of continuity, but added a sample of other SES executives. In the end, this proved substantively beneficial, as readers of In the Web of Politics will see. Second, by round two of the study we encountered a few executives who knew something of our earlier work published on the basis of round one. We interviewed these people, but there may be some unknown effects of their familiarity with the project. Third, we had to decide whether to repeat questions in later rounds even though we knew we could ask better ones. Following the advice of Philip Converse, we tended to repeat items, choosing to keep whatever measurement error the original item introduced over the great problem of comparing results from different questions. Fourth, as already mentioned, the costs of open-ended longitudinal studies are considerable. We dealt with this by shortening the instrument in subsequent rounds—retaining the questions we knew to be key as our understanding deepened over time. In addition, we supplemented with more close-ended questions in later years, and now have a basis for comparing open and closed questions in certain areas. There were also advantages beyond those we have already mentioned to doing a longitudinal, heavily open-ended study. First, once we actually developed our codes in the first round of the study, the costs of coding dropped substantially in subsequent rounds. 
We did some refining of the codes, of course, but the costs here were minor compared to our original investment. Second, as noted above, in each succeeding round, as we developed a fuller understanding of what and how elites think, we were able to use more close-ended questions to supplement the open questions. Third, our interviewing technique means that we have a raw product that should be of great use to historians. We interviewed during a turbulent period in American administrative history (particularly during the Nixon and Reagan administrations) and we have transcripts of in-depth conversations with people who are historically important. Because of confidentiality promises, these interviews will not be available until respondents are deceased, but eventually the interviews should prove valuable in understanding the era when our respondents wielded power.

Conclusions

To reiterate the key point, studies must be designed with purpose as the key criterion. Elite studies are no exception. We conducted our longitudinal study the way we did because of our desire to probe deeply into elite attitudes, values, and beliefs, and also because of the state of prior research in the area, our desire to maximize response validity, and our sense that elites would be most receptive to the type of interview we conducted and would be well positioned to handle the types of questions we asked. While we would use more close-ended questions in future research because of what we learned (and we used more as time went on), the basic approach of a semi-structured and largely open-ended interview still seems best to us. We learned a great deal from our subjects—and about our subject—through these conversations.
Using a systematic coding procedure not only allowed us to employ quantitative techniques in our later analyses, but also kept us from allowing the colorful interviewee or especially enjoyable story to dominate our view of the overall phenomena we were studying. At the same time, the interviewing technique helped us to use clues from the most insightful respondents to suggest hypotheses for our analysis. We close with a general observation about elite interviewing studies: they take a lot of persistence, time, and whatever passes these days for shoe leather, but they are immense fun. You'll meet some of the most interesting people in the country and learn a huge amount about political life and the workings of political institutions. If you like both politics and political science, it's one terrific way to spend your time.

References

Aberbach, Joel D., Robert D. Putnam, and Bert A. Rockman. 1981. Bureaucrats and Politicians in Western Democracies. Cambridge: Harvard University Press.

Aberbach, Joel D., and Bert A. Rockman. 2000. In the Web of Politics: Three Decades of the U.S. Federal Executive. Washington, DC: Brookings Institution Press.

Aberbach, Joel D., James D. Chesney, and Bert A. Rockman. 1975. "Exploring Elite Political Attitudes: Some Methodological Lessons." Political Methodology 2:1-27.