SYMPOSIUM

Interview Methods in Political Science

by Beth L. Leech, Rutgers University
The following essays are based on presentations given by the authors during a short course on elite interviewing, held at the 2001 APSA meeting in San Francisco. The short course, sponsored by the Political Organizations and Parties Organized Section of the APSA, drew nearly 100 participants.
The term elite interviewing generates some confusion and disagreement, as some researchers use “elite” to refer to the socioeconomic position of the respondent, whereas for others it has more to do with how the respondent is treated by the interviewer. There is an interaction between these two situations, as political scientist Lewis Dexter pointed out in his book, Elite and Specialized Interviewing:

“In standardized interviewing…the investigator defines the question and the problem; he is only looking for answers within the bounds set by his presuppositions. In elite interviewing, as here defined, however, the investigator is willing, and often eager to let the interviewee teach him what the problem, the question, the situation, is.…Partly out of necessity…this approach has been adopted much more often with the influential, the prominent and the well-informed than with the rank-and-file of a population. For one thing, a good many well-informed or influential people are unwilling to accept the assumptions with which the investigator starts; they insist on explaining to him how they see the situation, what the real problems are as they view the matter” (pp. 6–7).
The essays presented here for the most part focus on interviews of people in decision-making or leadership roles—members of Congress, members of parliaments, top-level bureaucrats, party leaders, and interest group leaders. More broadly speaking, however, elite interviewing can be used whenever it is appropriate to treat a respondent as an expert about the topic at hand. One of the essays on these pages, for example, involves interviews with activists, who, while not “elites” in the socioeconomic sense of the word, are experts in their field and treated as such by the interviewer.
There have been relatively few resources in
the discipline for training students and other
researchers about the methodological challenges
and informational benefits of conducting
interviews with elite subjects. It is our
hope that the short course and these essays
help further discussion of these topics.
Contributors to this symposium
Joel D. Aberbach is professor of
political science and policy studies and
director of the Center for American
Politics and Public Policy at UCLA. He
is also co-chair of the International
Political Science Association’s Research
Committee on Structure and Organization
of Government. He can be reached at
aberbach@polisci.ucla.edu.
Bert A. Rockman is director and professor
in the School of Public Policy
and Management at Ohio State University.
He is co-editor of the journal Governance and is the co-author, with Joel D. Aberbach, of In the Web of
Politics: Three Decades of the U.S.
Federal Executive (Brookings, 2000). He
can be reached at rockman.1@osu.edu.
Jeffrey M. Berry is the John Richard
Skuse, Class of 1941 Professor of Political
Science at Tufts University. His most
recent book is The New Liberalism
(Brookings Institution Press, 1999). He can
be reached at jeffrey.berry@tufts.edu.
Kenneth Goldstein is an associate
professor of political science at the
University of Wisconsin, Madison. He is
the author of Interest Groups, Lobbying,
and Participation in America and a number
of journal articles and book chapters on
television advertising, participation, and
survey methods. He can be reached at
goldstei@polisci.wisc.edu.
Polina M. Kozyreva is head of the department
of social stratification at the Institute
of Sociology of the Russian Academy of
Sciences. She is a specialist on survey
research methods and has contributed to
numerous edited volumes, including Public
Opinion and Regime Change: The New
Politics of Post-Soviet Societies (Westview
Press, 1993).
Beth L. Leech is an assistant professor of political science
at Rutgers University, where her research and teaching interests
focus on the roles of interest groups and the news media
in policy formation. She organized the short course on elite
interviewing at the 2001 APSA Annual Meeting. She can be
reached at leech@polisci.rutgers.edu.
Sharon Werning Rivera is an assistant professor of
political science at Hamilton College, where she teaches in the
areas of comparative politics, democratic transitions, and the
politics of Russia and Eastern Europe. She can be reached at
srivera@hamilton.edu.
Eduard G. Sarovskii is a senior researcher at the Institute
of Sociology of the Russian Academy of Sciences. He specializes
in the social structure of contemporary Russian society
and has published widely on the issue of social stratification.
Laura R. Woliver is a professor in the department of government
and international studies at the University of South
Carolina, where she is also interim director of the Women’s
Studies Program. Her latest book, The Political Geographies of
Pregnancy (University of Illinois Press, 2002), includes material from person-to-person in-depth interviews. She can be
reached at woliver@sc.edu.
Asking Questions: Techniques for Semistructured Interviews

by Beth L. Leech, Rutgers University

In an interview, what you already know is as important as what you want to know. What you want to know determines which questions you will ask. What you already know will determine how you ask them.

Thanks to past jobs as a journalist and as an anthropological researcher, I’ve had training in both journalistic and ethnographic styles of interviewing. The two are at opposite ends of the interview continuum. The journalistic style tries to verbally pin the respondent down by appearing to know everything already. The questions are direct and directed toward a particular outcome. The ethnographic style of interviewing instead tries to enter into the world of the respondent by appearing to know very little.

There are many types of interviews with many styles of questions, each appropriate in different circumstances. Unstructured interviews, often used by ethnographers, are really more conversations than interviews, with even the topic of conversation subject to change as the interview progresses. These “soaking and poking” experiences are most appropriate when the interviewer has limited knowledge about a topic or wants an insider perspective. But the tendency for such interviews to wander off in unexpected directions—although they may provide fresh ideas—almost guarantees that the interviews will not be a very consistent source of reliable data that can be compared across interviews. Unstructured interviews are best used as a source of insight, not for hypothesis testing.

Sometimes, however, we already have a lot of knowledge about a topic and want very specific answers to very specific questions. When the researcher already knows a lot about the subject matter—the categories and all possible responses are familiar, and the only goal is to count how many people fall into each category of response—structured interviews with closed-ended questions are most appropriate. Political scientists are most familiar with this type of interview because of mass public opinion surveys. Such closed-ended approaches can sometimes backfire, however, if we assume we are familiar with an area but end up asking the wrong questions in the wrong way or omitting an important response choice. We may find ourselves with reliable data that lack any content validity.

There is a middle ground, however, and one that can provide detail, depth, and an insider’s perspective, while at the same time allowing hypothesis testing and the quantitative analysis of interview responses. In this essay I will focus on that interview style—semistructured interviews with open-ended questions. It is a style that is often used in elite interviewing, and variations on this style are discussed in several of the other essays on these pages. My observations and suggestions here come not only from my past experiences as a journalist and an ethnographic researcher, but also from my current research among lobbyists and policymakers in Washington, DC, as part of the Advocacy and Public Policymaking project.1

Gaining Rapport

Without rapport, even the best-phrased questions can fall flat and elicit brief, uninformative answers. Rapport means more than just putting people at ease. It means convincing people that you are listening, that you understand and are interested in what they are talking about, and that they should continue talking. There are several ways of doing this within the interview.

Putting Respondents at Ease

Some interviewing textbooks recommend that the interviewer “appear slightly dim and agreeable” (McCracken 1988, 38) or “play dumb” so that respondents do not feel threatened and are not worried that they will lose face in the interview. The danger here is that—especially when dealing with highly educated, highly placed respondents—they will feel that they are wasting their time with an idiot, or at least will dumb down their answers and subject interviewers to a Politics 101 lecture. At the same time, the concern about respondents’ feelings is valid. Even highly educated, highly placed respondents do not want to appear stupid in front of a university professor (I have had to reassure vice presidents of large organizations who were worried that they had “babbled” during an interview).

I recommend a middle road. The interviewer should seem professional and generally knowledgeable, but less knowledgeable than the respondent on the particular topic of the interview. In my case, I know a lot about lobbying and a lot about American politics, and I know what has been in the newspaper on a given policy issue, but I present myself as having little or no idea about what happened behind the scenes in the policy issue I am interviewing about. I try to continue this
approach even after I have conducted many interviews on the
same policy issue. I don’t want someone to leave something
out because they assume I already knew it.
A second, related point to remember is that many of your
subjects will be more nervous than you are. After all, after
you’ve done a couple of these, you are experienced. Your respondents,
however, may never have been the subject of an
academic study before. Reassure them by being open and
avoiding threatening descriptions of your work. “Talk with
you” is less threatening than “interview you,” for example
(Weinberg 1996, 83). It is possible to be honest without being
scary. There’s no need to make your work sound like a medical
procedure (at least until you are getting ready to submit it
to a journal, that is). Approach interview subjects with a positive
attitude. Act as though it is natural that people would
want to talk to you. Appear friendly and curious.
An important way to make an interview subject feel at ease
is to explain your project again. This is the one-minute version
of your project, which should describe the topic you are
interested in and the types of questions you will ask, without
tipping your hand as to your hypotheses.2
At this point you
can remind respondents that their answers are confidential.3
Are You Listening?
During the interview itself, before moving on to the next
question it often helps to briefly restate what the respondent
has just said. (This should take no more than a sentence.) This
shows that you are interested and have understood what the
respondent has just said. It also provides a chance for the
respondent to correct you if you have misunderstood. Avoid
reinterpreting what the respondent has just said, as this has
the tendency to work against rapport and leave the respondents
feeling as if the interviewer is trying to put words in
their mouths. Use the respondents’ own language, if possible,
to summarize what has just been said.
Anthropologist James Spradley suggests that when an interviewer does not understand a particular point, it is better to ask for use rather than meaning (1979, 82). That is, “When would you do that?” or “What would you use that for?” are usually better questions for building rapport than “What do you mean by that?” The latter tends to shift respondents out of their own verbal world and leads them to speak to you as an outsider.
Question Order
Question order is important for substantive reasons (order effects occur in interviews, just as they do in surveys), but order is also important as a means of gaining rapport. As any
journalist would tell you, in an interview you should always
move from the nonthreatening to the threatening (Weinberg
1996, 85). That is, ask the easy questions first.
In the interviews I have conducted among lobbyists and
policymakers, I find that it usually works better to ask things
like age, background, title, and other personal things last. That
way the interview doesn’t come off as if it is about my respondent
personally, but rather about the political issue or
organization that we are talking about. This type of question
order works for me because my other questions are not
personal, and are therefore even less threatening than the demographic
information I collect at the end. On the other hand,
if your interview questions focus on an individual’s own political
and philosophical beliefs, then obviously questions about
education, background, and title would be less threatening and
would provide a good place to start.
When and how should you ask sensitive questions? It’s usually
best to wait until the middle or toward the end of the interview.
Don’t wait until the last minute—you may run out of
time. Don’t hem, haw, or make it seem as though any normal
person would refuse to answer this question. Just ask. Then be
quiet, and give the respondent time to answer. Most people
will try to fill the silence, and you will get your answer.
A second thing to remember about sensitive questions—or
any question, for that matter—is to use nonjudgmental, nonthreatening
wording. For instance, asking a respondent, “What
kinds of help do you give to members of Congress as they are
going about their work or daily lives?” is likely to gain you
more information than if you were to ask, “Do you do favors
for members of Congress?”4
Likewise, I know that nonprofit
organizations with 501(c)(3) charitable status are skittish about
the word “lobbying,” since the IRS restricts the amount of
lobbying they can do. So I make a habit of referring to
“advocacy efforts” or “policy work” instead.
When Did You Stop Beating Your Wife?
What are known as “presuming” questions are common
in journalism, but are usually not good social science. There
are circumstances, however, when such questions are necessary
to make respondents comfortable enough to answer honestly.
When the question is one that the respondent is likely to try to
avoid and involves a matter that may have a stigma attached
to it, a presuming question may be the only way to go. When
I was working as an ethnographic researcher in Kenya, collecting
reproductive histories from women, I first began simply by
asking women to tell me about all of their pregnancies. It was
clear from the first few interviews that no one was mentioning
miscarriages, stillbirths, or deaths of children—and I knew that
could not be accurate in a rural area with nonexistent prenatal
care and high child mortality. So I tried probing: “Tell me
about any children who died.” I used this question only once,
and it caused a respondent to jump up, mutter that she must
go check on the goats, and run out the door. After some help
from a language consultant, I did two things. I made my
language less threatening, and I asked the question in a
presuming way. “How many children are the lost ones?” I
asked—“Aja inkera netala?” My respondents’ faces would turn
serious, they would sigh, then they would tell me the details I
was seeking.
To return to the political world, instead of asking a lobbyist,
“Did you give soft money donations?” it might make the question
easier to answer to say, “How much did your organization
give in soft money donations?” The latter presumes that it is
normal to give soft money donations and that everyone must
do it, and also shifts the onus away from the individual and
onto the organization. (Actually, I should point out here that
you should never ask for information in an interview that you
could collect elsewhere, unless you are using the question to
double-check the veracity and accuracy of a respondent. Asking
for information you could easily collect elsewhere wastes precious
interview time and risks insulting your respondents, since
you are essentially asking them to do your homework for you.)
Presuming questions are presuming in the sense that they
imply that the researcher already knows the answer—or at
least part of it. So one danger is that the respondent will bluff
to save face and make something up. That is why I suggest
such questions should be used very sparingly and only when
they are needed to take the edge off of questions that may
otherwise have a stigma attached. In my examples above, a
respondent would be relieved, not shamed, to be able to say
“None.”
Types of Questions
We all know that there are certain types of questions to
avoid—loaded questions, double-barreled questions, leading
questions, and (usually) presuming questions. But what types
of questions should you ask in an open-ended, semistructured
interview?
Grand Tour Questions
The single best question I know of for a semistructured
interview is what Spradley (1979) calls a grand tour question.
Like the name suggests, these questions ask respondents
to give a verbal tour of something they know well.
The major benefit of the question is that it gets respondents
talking, but in a fairly focused way. Many good interviewers
use this type of question instinctively. Jeff Berry, for example,
used one in the research for his book The Interest
Group Society when he asked lobbyists to describe an
average day (1997, 94).
There are many different types of grand tour questions (see
Spradley 1979, 86–88). The most common is probably the
typical grand tour question:
“Could you describe a typical day in your office?”
“Could you describe a typical day on the Hill?”
“Could you describe a typical day in a member of Parliament’s
office?”
Such questions have the benefit of giving you a sense of what
an average day is like, but the drawback that you are not certain
what is being averaged—that is, how much variation there
is and how accurate the respondent’s sense of the usual really
is. Respondents may have a tendency to focus on the interesting
(which may not be usual), or on what they think should happen
day to day (although it actually may not). If you are doing
enough interviews to get a sense of the average by comparing
across interviews, then you may want to turn to a specific
grand tour question.
Specific grand tour questions ask for a tour based on some
parameter decided by the interviewer—a day, a topic, an
event: “Could you walk me through what you did yesterday in
your office?” or “Walk me through what your organization did
in response to issue X.” We used a specific grand tour question
to begin our interviews for the Advocacy and Public Policymaking
project, asking respondents to describe their organizations’
activities on the most recent policy issue in which
they were involved.
Not all interviews need to be conducted sitting down, and
not all grand tours need to be virtual. A guided grand tour is
an actual tour: “The next time you are lobbying on the Hill,
could you bring me along and show me what you do?” Related
to this are task-related grand tours. Such questions ask the respondent
to perform some usual task while verbally walking the
interviewer through the task. For instance, I could ask a lobbyist
to lay out talking points for a meeting with a legislator, or
to compile a list of which members of Congress to talk to, explaining
the decisions being made at each step of the process.
Example Questions
Example questions are similar to grand tour questions, but
still more specific (see Spradley 1979, 87–88). They take some
single act or event identified by the respondent and ask for an
example: “Can you give me an example of a time that you used
grassroots lobbying?” A related type of question is native language
questions, which ask for an example in the respondent’s
own words. These can be direct-language questions—“How do
you refer to these lobbying activities? What do you call
them?”—or hypothetical interaction questions—“If you were
talking to another lobbyist, what would you call that?” or “If I
were to sit in on that meeting, how would I hear people referring
to that?” Hypothetical interaction questions are sometimes
easier to answer than direct-language questions, because they help put respondents in the mindset of talking to other experts, and can help shake them out of Politics 101.
Ethnographers use many other types of questions, many of
which are of diminishing usefulness for most political scientists.
However, the less you know about an area, the more important such questions become, adding direction to what otherwise would be a random conversational walk. Structural
questions, for example, ask respondents to semantically
structure their world through such exercises as listing all the
different types of something and how they relate to each other
(Spradley 1979; Werner and Schoepfle 1987). So, hypothetically,
if I did not already know the different ways in which
interest groups can lobby, instead of simply asking “What has
your organization done in relation to this issue?”—I could ask
something like this:
“We’ve been talking about your advocacy efforts on this issue
and you have mentioned that you sent a letter to members on
the committee, visited with members of the congressional delegation
from your district, and put information on your website.
Now I want to ask you a slightly different kind of question. I’m
interested in getting a list of all the different types of advocacy
activities your organization has undertaken in relation to this
issue. This might take a little time, but I’d like to know all the
different types and what you would call them.” (Adapted from Spradley 1979, 122)
Note that the second question would get you a lot more information
than the first. It starts off by showing that the interviewer
has been listening, then asks for more information in a
specific way.5
Be aware that if you really want a complete list
then you may need to repeat the last part of this question
many times to get all of them: “And are there any other types
of advocacy efforts your group uses?” This is an example of a
prompt, and leads me into my final type of question.
Prompts
Prompts are as important as the questions themselves in
semistructured interviews. Prompts do two things: they keep
people talking and they rescue you when responses turn to
mush.
Let’s take the introductory question from the Advocacy and
Public Policymaking project:
“Could you take the most recent issue you’ve been spending
time on and describe what you’re trying to accomplish on this
issue and what type of action are you taking to make that
happen?”
One of my respondents answered, “Well, we’ve been talking
to some people on the Hill and trying to get our message
out.” He had just described the activities of every lobbyist in
Washington. If I had stopped here, the interview would have
been useless. Luckily, my interview protocol included numerous
prompts, based on what we wanted to be able to code
from this question, including who the targets of lobbying
were, and what lobbying tactics were used. So at this point,
possible prompts would include: “Who have you been talking
to on the Hill?” and “What are you doing to try to get your
message out?”
McCracken (1988, 35–36) identifies several different types of
prompts. Prompts like the ones I just mentioned are planned
prompts—prompts that are formally included in the interview
protocol. At the end of each formal question we ask as part of
the Advocacy and Public Policymaking project, there is an italicized
list of specifics that the interviewer is supposed to probe
for if the respondent doesn’t bring them up. For example:
probe about coalition partners (formal or informal)
probe about who they are speaking with about this issue
One difference between a prompt and a question is that the
prompts are not scripted as are the initial questions. The reason is
that every interview is different and the list of possible probe situations
could potentially go on for dozens of pages. That makes
it important for the interviewer to have a plan for how the interviews
will eventually be coded, so that the interviewer can make
sure that the responses have covered the necessary points.
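To make this concrete, here is a minimal sketch of one way to tie a scripted question, its planned prompts, and the coding plan together. The wording and field names here are hypothetical illustrations, not the project’s actual protocol format.

# One protocol entry: the scripted question, the codes it must yield,
# and a planned prompt for any code the respondent does not address.
protocol = [
    {
        "question": ("Could you take the most recent issue you've been "
                     "spending time on and describe what you're trying "
                     "to accomplish and what actions you are taking?"),
        "codes": ["lobbying_targets", "tactics", "coalition_partners"],
        "prompts": {
            "lobbying_targets": "Who have you been talking to on the Hill?",
            "tactics": "What are you doing to get your message out?",
            "coalition_partners": "Have you worked with other groups, "
                                  "formally or informally?",
        },
    },
]
# During the interview, check off each code as the respondent covers it,
# then ask the planned prompt for any code still missing.

Keying each planned prompt to a coding category makes it easy to verify, before the interview ends, that every variable you intend to code can actually be filled in.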
Probably the most instinctive type of prompt is an informal
prompt. This is an unscripted prompt that may be nothing more
than the reassuring noises and interjections that people make
during any conversation to show that they are listening and interested:
“Uh-huh.” “Yes.” “How interesting.” But the well-trained interviewer has a variety of informal prompts to use.
Floating prompts, for example, are used to clarify (McCracken
1988, 35). These may be nothing more than raising an eyebrow
and cocking one’s head, or they may be specific questions:
“How?” “Why?” and “And then…?” One way to ask for clarification
and at the same time build rapport is to repeat the key
term of the respondent’s last remark as a question:
Respondent: “And the bill was completely whitewashed in
committee.”
Interviewer: “Whitewashed?”
McCracken warns against leading respondents by putting
words in their mouths (“Do you mean the bill was gutted?”). You risk losing rapport or having the respondent go along
with your definition (“oh, yeah, sort of”), rather than clarifying
further. The goal here is to listen for key terms and to prompt
the respondent to say more about them.
Enough is Enough
One of the most important rules about asking questions has
to do with shutting up. Give your respondent room to talk. If
respondents get off topic, let them finish, then bring them
gently back to the issue you are interested in. But don’t try to
control too much or you may miss important, unexpected
points.
Conclusion
Used in combination, grand tour questions and floating
prompts are sometimes enough to elicit almost all of the
information you need in a semistructured interview (with
planned prompts ready in case the floating prompts don’t
work!). I know that in many of my interviews for the
Advocacy and Public Policymaking project, the answer to
the first grand tour question took up half of the interview
hour—and rendered many of the subsequent questions on
the protocol virtually unnecessary. I would, of course, check,
“You have mentioned X and Y as people you worked with
on this issue. Was anyone else involved in this issue?” But
often the answer was no and we would quickly move on to
the next question on the protocol. This was the best of both
worlds, because it collected the information we wanted and
provided it in the respondent’s own language and
framework.
Some of the question styles that semistructured interviewing
borrows from anthropology may seem not very useful if you
seek very specific information about a known topic and are
not planning to write an ethnography of lobbyists, elected officials,
or civil servants. On the other hand, if you take the time
to ask these kinds of questions, you sometimes get surprising
answers and learn something new. It’s true that the type of
interview you use depends on what you already know, but if
you already knew everything, there would be little reason to
spend time in a face-to-face interview. Semistructured interviews
allow respondents the chance to be the experts and to
inform the research.
Notes
1. My collaborators on the Advocacy and Public Policymaking project
are Frank R. Baumgartner, Marie Hojnacki, David C. Kimball, and
Jeffrey M. Berry. Research has been supported by National Science
Foundation grants SBR-9905195 and SES-0111224. For more
information on this project, including the complete interview protocol,
see our website at . Also see Leech et al.
2002.
2. Elite interviewing subjects often are quite savvy about social science
research, and it is not uncommon for an interviewee to ask, “So what is
your working hypothesis here?” I respond to questions like these by
explaining that if I told them I would risk biasing my results, but that I
would be happy to send them information about the project and its
hypotheses after the interview is over.
3. An excellent way to convince your respondents that you really are serious
about confidentiality issues is to decline to give them any information
about the people you already have interviewed. A respondent may ask, “So
who else have you talked to?” The interviewer can answer, “Oh, several
people, although I can’t reveal exactly who without their permission.”
4. These questions also raise an elementary point about interviewing:
Don’t ask a yes-or-no question unless you want a yes-or-no answer.
“How,” “why,” “what kinds of,” and “in what way” usually are much
better ways to begin a question in a semistructured interview.
5. This question also demonstrates that expanding the length of the
question tends to expand the length of the response (Spradley 1979, 85).
Be aware, however, that long questions can lead people off point or
confuse them. If you want a specific answer, ask a specific question.
References

Berry, Jeffrey M. 1997. The Interest Group Society. Third ed. New York: Longman.

Leech, Beth L., Frank R. Baumgartner, Jeffrey M. Berry, Marie Hojnacki, and David C. Kimball. 2002. “Organized Interests and Issue Definition in Policy Debates.” In Interest Group Politics, eds. Allan J. Cigler and Burdett A. Loomis. Washington, DC: CQ Press.

McCracken, Grant. 1988. The Long Interview. Newbury Park, CA: Sage.

Spradley, James P. 1979. The Ethnographic Interview. New York: Holt, Rinehart and Winston.

Weinberg, Steve. 1996. The Reporter’s Handbook: An Investigator’s Guide to Documents and Techniques. Third ed. New York: St. Martin’s.

Werner, Oswald, and G. Mark Schoepfle. 1987. Systematic Fieldwork: Foundations of Ethnography and Interviewing. Vol. 1. Newbury Park, CA: Sage.
Getting in the Door: Sampling and Completing Elite Interviews

by Kenneth Goldstein, University of Wisconsin, Madison
Many factors are important when it comes to conducting high-quality elite interviews.
As my colleagues have noted in their
presentations in San Francisco and in their
essays in this issue, gaining valid and reliable
data from elite interviews demands that researchers
be well prepared, construct sound
questions, establish a rapport with respondents,
know how to write up their notes, and
code responses accurately and consistently.
Improving these skills will certainly reduce
the amount of measurement error contained in
interview data. Unfortunately, none of these
skills matter if you do not get the interview.
In other words, everything that my colleagues
have talked about depends on getting in the
door, getting access to your subject. A well-prepared, personable researcher who would be able to control an open-ended and wide-ranging interview, while establishing a strong
informal rapport with an elite respondent will
never get to demonstrate his or her interviewing
skills—or ability to decrease measurement
error—if the meeting never takes place. Furthermore, and fundamentally, systematic error will also be introduced if researchers only get access to certain types of respondents.
Frankly, “getting the interview” is more art than science and, with few exceptions, we political scientists are not particularly well known for our skill at the art of “cold calling.” Even the most charming political
scientist may find it difficult to pick up
the phone and call the offices of powerful
and busy government officials or lobbyists
and navigate through busy receptionists and
wary schedulers. Still, there are systematic
commonsense things that you can do to make
it more likely that you will “get the interview.”
In addition, understanding how the
goals of your project interact with the process
of gaining access can help you to understand
the types of error that are introduced into a
study by what will be your unavoidable inability
to interview some legislators, staffers,
lobbyists, or judges. In this essay, I provide a
few tips that should improve your chances of
getting interviews and draw from work in
survey research to outline a framework for
understanding what sorts of error are introduced
when researchers fail to get in the
door. I illustrate some of these points with
my own work as well as the work of
colleagues.
There are three basic goals that researchers
have when conducting elite interviews:
(1) gathering information from a sample of
officials in order to make generalizable
claims about all such officials’ characteristics
or decisions; (2) discovering a particular piece
of information or getting hold of a particular
document; (3) informing or guiding work that
uses other sources of data. Consistent with
this last point, elite interviews can and should
also be used to provide much needed context
or color for our books and journal articles. In
this essay I focus on research with the first
sort of goal. Nevertheless, although the consequences
of failing to get in the door may be
less severe in the latter two cases, researchers
want to make sure that they gather factual information
and have their research informed
from sources with different points of view.
Even when one does not aim to generalize
from interviews, researchers should obviously
still strive to confirm the accuracy of documents
and information. No matter the goal,
good research practice demands that one use
multiple sources.
At its core, “getting the interview” is a
sampling issue. We typically think of the two
research modes as distinct. Still, elite interviewers
hoping to gather generalizable information
about an entire population of decisions
or decision makers can learn much from colleagues
in survey research about sampling and
about how nonresponse can lead to biased
results. In the survey research world, we talk
about random error and nonrandom error (systematic error). Random error is sampling error
and is the unavoidable noise that characterizes
any research that tries to estimate a larger population’s
characteristics from a smaller number
of cases. Random error is a function of variance
in the target population (if just about
everyone in the target population has the same
characteristics or attitudes, error is less) and
the number of sampling units (the more sampling units, the less noisy the estimate). Although
matters can become a bit complicated
with multistage designs, sampling theory is a
well-developed area and calculating sampling
error (the plus or minus figure that is dutifully
reported in all publicly released polls) is a
fairly straightforward exercise.
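For readers who want the arithmetic behind that plus-or-minus figure, the standard result for a proportion estimated from a simple random sample (a textbook formula, not one specific to elite interviewing) is

\[
SE(\hat{p}) = \sqrt{\frac{\hat{p}\,(1-\hat{p})}{n}}, \qquad
\mathrm{MOE}_{95\%} \approx 1.96 \times SE(\hat{p}),
\]

where the numerator \(\hat{p}(1-\hat{p})\) captures the variance in the target population and \(n\) is the number of sampling units—exactly the two factors just described.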
Many factors can introduce nonrandom error, and it is difficult, if not impossible, to measure with precision. Measurement problems are obviously
a major source of nonrandom error
and many of the essays in this collection focus
on reducing measurement error. Nonresponse
can also introduce significant systematic error.
Systematic error from nonresponse is a function
of both the number of nonrespondents and
the degree to which those who cannot be contacted or refuse
to be interviewed differ in traits or attitudes from those who
are successfully contacted and interviewed. Researchers often focus too much on the total number of nonrespondents and too little on the degree to which nonrespondents are likely to differ from those sampling units who are successfully contacted and interviewed.
Survey estimates of population parameters can be
robust even with high levels
of nonresponse if the traits
and attitudes of nonrespondents
differ little from the
traits and attitudes of respondents.
A major problem in
survey research targeted at the
mass public is that researchers
often know little or nothing
about the characteristics and
attitudes of those whom they
fail to contact or interview. As
I will note later in this essay,
elite interviewers actually have
an advantage in this area because
they typically know
more about the characteristics
and attitudes of their nonrespondents.
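This dependence on both factors can be written compactly. Under a simple deterministic model of nonresponse (a standard survey-research decomposition, stated here for illustration), the bias in the respondent mean is approximately

\[
\mathrm{bias}(\bar{y}_r) \approx \frac{n_{nr}}{n}\,\left(\bar{y}_r - \bar{y}_{nr}\right),
\]

where \(n_{nr}/n\) is the nonresponse rate, \(\bar{y}_r\) is the mean of the trait among respondents, and \(\bar{y}_{nr}\) is the mean among nonrespondents. A high nonresponse rate is harmless precisely when the two means are close—the point made above.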
The first step is identifying the research question and your
target population. In other words, you need to decide which
doors you need to get in and why. “Who or what are you
trying to generalize about?” The next step is to construct a sampling
frame. In a perfect world, the sampling frame would be identical
to the target population. At the very least, it should be a
representative sample of the target population. In some cases,
figuring out the target population and coming up with a sampling
frame is not a particularly difficult task. Although it was
surely difficult to schedule interviews with Supreme Court justices,
defining the target population was not the major hurdle
for H.W. Perry (1994). Similarly, if one wanted to interview
district court, circuit court, state, or local judges, or even
lawyers involved in cases, widely available, easily accessible
lists and databases would reveal the relevant players. If the
unit of analysis for the study is a member of
Congress or state legislator, lists of elected
officials and their contact information are
readily available. The work of Richard Fenno
(1978) and John Kingdon (1989) stands out
as some of the best in elite interviewing and
they are among our discipline’s most skilled
practitioners at the craft. Still, the one part of
their work that was not difficult was devising
a sampling frame of members of Congress.
Although it may be difficult to actually
schedule an interview with legislators, knowing
who they are and devising a sampling
frame is relatively easy. With other projects,
however, it is not so easy to define a
sampling frame.
In my work on the targeting decisions of
lobbyists in grassroots campaigns, there was
obviously no easily accessible list of all
the tactical and strategic decisions made by
lobbyists employing this particular tactic. Because
lobbying disclosure laws do not require
groups to report this tactic, there was no list
available of groups that had even used it. I
built a primary sampling frame by carefully
following news coverage of lobbying activities
in the New York Times, the Washington
Post, and the Wall Street Journal, as well as inside-the-beltway publications such as The National Journal, Congressional
Quarterly, and Hotline. I noted every instance in which an
ideological group, union, corporation, or trade association was
mentioned as using grassroots district-based tactics over a particular
period of time. This list became the sampling frame
from which I sampled 80 groups to interview (Goldstein
1999). Decisions on grassroots campaigns and
lobbying choices—not the groups themselves—
were the unit of analysis that interested me.
Accordingly, in my interviews, I asked my respondents
to name recent legislation in which
they employed grassroots or constituency-based
lobbying tactics. I then asked them specific
questions about the tactical and strategic choices
they made in each of these lobbying
campaigns.1
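Mechanically, the sampling step is simple once the frame exists. Below is a minimal sketch of a comparable draw; the file name and field name are hypothetical, and whether to collapse repeated mentions of the same group (as here) or to sample mentions directly is a design choice that affects how heavily covered groups are weighted.

import csv
import random

# Hypothetical frame: one row per news mention of a group using
# grassroots district-based tactics, compiled from press coverage.
with open("grassroots_mentions.csv", newline="") as f:
    frame = list(csv.DictReader(f))

# Collapse repeated mentions so each group appears once in the frame.
groups = sorted({row["group_name"] for row in frame})

# Draw a simple random sample of 80 groups to contact.
rng = random.Random(2002)  # fixed seed makes the draw reproducible
sample = rng.sample(groups, k=min(80, len(groups)))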
In their important and continuing project on
lobbying, Baumgartner, Berry, Hojnacki, Kimball,
and Leech want to analyze public policy
issues as their basic unit of analysis. There are,
of course, a limitless number of issues that are
in play and no simple, easy-to-access list of
policy issues. Accordingly, Baumgartner and his
collaborators also pursued a multistage sampling design. They
first turned to a sample of organizational representatives to
identify a set of issues. The sampling frame for these representatives
was a database of lobbying reports that were filed in the
Senate. Lobbying firms and other organizations were required
to list the broad policy issues on which they were active. In
the sampling frame, Baumgartner et al. listed an organization each time it reported working on a distinct issue. They then
took a sample of organizations from this sampling frame and
attempted to contact the staffer in charge of Congressional Relations
or Government Affairs. This staffer was then asked to
identify the most recent issue that he or she worked on. It was
in this way that the scholars in charge of the Advocacy and Public Policymaking project chose their sample of issues. Although
they gathered and are gathering more extensive information
on each of these issues, the interview also provided
crucial information on the major players and provided clues on
where the researchers should look for additional intelligence.
(See for results of the project, an extensive
description of the methodology, and a list of working
papers.)
In the two preceding examples, once the sampling frame was
built, sampling was a relatively straightforward exercise. Of
course, picking a good sample is of little use if
you cannot get your sample units to speak with
you. So, how do you get in the door? In survey
research, there have been scores of studies systematically
experimenting with different ways of
increasing response rates. I am aware of no such
systematic work with elite interviewing. I base
the following suggestions on my own experiences
as well as conversations with colleagues who
have also done elite interviewing. I also include
impressions from friends who have been the targets
of scholarly interviewers. This topic, though,
clearly deserves more systematic research.
The bottom line is that there are no silver
bullet solutions, and scheduling and completing
elite interviews takes a fair bit of luck. Still,
there are things you can do to create your own
luck. First, it is very important to send advance
letters on some sort of official (usually department)
stationary. The letter should clearly spell
out the basic outlines of your research and be
clear about the amount of time you are requesting.
Be sure to provide phone, fax, and email
contacts. Graduate students should also put
their advisor’s name and contact information on the letter.
The letter should also be clear about what the ground rules
for the interview will be. How will the information gathered
in the interview be used? How will it be reported and where
will it appear? Is the interview completely on the record? Will
particular responses or reported behaviors be attributed to particular
respondents or organizations? Will information gained in
the interview only be released in aggregate or summary form?
What steps will be taken to keep sensitive information confidential?
Outlining and understanding the ground rules is not
only crucial for getting the interview in the short run, but
crucial for continuing our discipline’s ability to conduct such
research in the future.
Most elite respondents will be familiar with the common
journalistic rules for use of information gathered in interview settings. Obviously, then, researchers need to be familiar with
these rules as well. Most everyone thinks they understand the
terms “on the record” or “off the record.” The meaning of
“on the record” is clear cut. The information can be used in
any form the researcher desires and the comments or actions of
the individual or group can be attributed by name. The term
“off the record,” however, is often misunderstood and is often
confused with “not for attribution” or “on background.” Technically,
“off the record” means that you don’t know what you
were just told. You cannot use the information in any way,
shape, or form. You cannot use it in an unattributed quote or
even to inform your work. The term “on background” means
that you can use the information to inform your own work and
you can use the information as a clue to search for corroborating
information or for organizations or individuals who will go
on the record. “Not for attribution” means that the comments or
information can be used and quoted as long as the organization
or individual giving out the information is not directly identified
as the source of the information or quote. My experience is that
interviewees will often give all four sorts of comments in a single
interview.
For inside-the-beltway interviewing, I firmly believe in “being
there.” Now, obviously one has to be in Washington to conduct
the actual interview, but I think a sustained time period “in
country” is key to making connections and being able to set up
interviews. Elites will often have last minute breaks in their
schedules and being on the ground and ready to conduct the interview
at a moment’s notice is a huge advantage. Furthermore,
Washington, DC is really a
small town when it comes to
politics and the more time one
spends there, the more likely
it is that one will make connections
that can help one
schedule an interview. Using
connections, friends, relatives,
friends of friends, friends of
friends of friends, has its advantages
and disadvantages.
Join a softball team (really, I’m serious!).
Researchers need to be
careful about straying from
their target sample or using
connections to get only one
set of interviews. Still, with
all the difficulties involved in
scheduling elite interviews, I
think it would be foolish not
to take advantage of any
points of access that one has.
If one has done a good job at
devising a sampling frame and drawing a sample, one will be
able to determine if this reliance on connections is leading to
an unbalanced set of interviews.
Whether you are on the ground in Washington for only a
couple days or for a more extended block of time, it is crucial
to take advantage of some easy and relatively inexpensive
logistical support. A cell phone with voice mail is a must
(remember to turn it off during the actual interview) for
scheduling appointments and making sure that you are always
reachable when in Washington. Leave a simple professional
message. If you are using your home phone as a contact
number before your trip, make sure the message is simple and
professional. If you are staying with friends and leave their
number, make sure that their message is relatively tame.
(I learned this lesson the hard way.)
Also, make sure that you are able to check your email via
a web-based program. With most university systems, it is possible
to set up an account to check your email via the web. If
your institution does not have such a system in place, use one of the myriad free web-based email services available (Excite, Hotmail, or Yahoo, for example).
Good preparation not only leads to good data for a particular interview, but also builds credibility for future interviews for your project and colleagues’ projects. In many instances, I have had
my interviewees offer to help set up or gain access to other
people and organizations on my target list.2
If you have
established a good rapport with a particular respondent, do not
be shy about enlisting their help in getting in the door with
others on your sample list. This is often called snowball
sampling.
When all is said and done, no matter how good a job you
do and how lucky you are, you will not be able to interview a
portion of your target sample. What then are the consequences
of nonresponse in elite interviewing? The answer to this question
largely depends on the goals of your interviewing. If your
goal is to gather particular factual information or to inform
your work and write with a little real color, then checking that you heard from different sides and different types of organizations can confirm that you do not have unbalanced or biased information.
Even when the goal is more broad generalization, this is
actually an area where small N elite interviewers have an advantage
over researchers doing surveys of the mass public. As
noted above, nonresponse bias is a function of both the proportion
of potential respondents or sampling units questioned
and the degree to which those who are not contacted or refuse
to be interviewed differ from those who were successfully
contacted and interviewed. Unlike those doing survey research
of the mass public, researchers using elite interviews actually
know quite a bit about those who remain uninterviewed.
For example, if members of Congress are the target sample,
an interviewer knows a lot about even those members who
cannot be contacted or refuse to be interviewed. The researcher
would know the member of Congress’ party, state,
incumbency status, type of district (rural, urban, suburban),
and past voting behavior. Such information can be crucial in
determining whether there is a bias in the data. Similarly, one
knows much about the past decisions of federal judges and the
president who nominated them. If organizations are the target
sample, a researcher can discover much about the ideological
bent, areas of interest, membership, budget, size, and previous
jobs of staff for just about any corporation, union, ideological
group, or law firm in Washington. Although one still will not
have information on unobserved or unanswered questions from
the interview protocol, such observed variables can provide a
clue as to whether bias exists.
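As a concrete illustration, the check can be as simple as comparing what you know about those you interviewed and those you did not. The records below are made up and the field names hypothetical; the point is the side-by-side comparison.

# Compare observable traits of interviewed vs. non-interviewed members
# of the target sample; large gaps suggest systematic nonresponse.
target_sample = [
    {"party": "R", "district": "rural",    "interviewed": True},
    {"party": "D", "district": "urban",    "interviewed": False},
    {"party": "D", "district": "suburban", "interviewed": True},
    {"party": "R", "district": "rural",    "interviewed": False},
]

def share(members, trait, value):
    """Proportion of members whose `trait` equals `value`."""
    return sum(1 for m in members if m[trait] == value) / len(members)

respondents    = [m for m in target_sample if m["interviewed"]]
nonrespondents = [m for m in target_sample if not m["interviewed"]]

for trait, value in [("party", "R"), ("district", "rural")]:
    print(f"{trait}={value}: respondents {share(respondents, trait, value):.2f}, "
          f"nonrespondents {share(nonrespondents, trait, value):.2f}")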
Following the suggestions I outline in this essay will not
guarantee that you get in the door. Given my experience and the
experience of others, however, they should help. Understanding
how sampling and nonresponse fit into the overall elite interviewing
research mode should also help researchers evaluate the
usability and generalizability of the information they gather.
Notes
1. Admittedly, even without bias from nonresponse (41 organizations
agreed to speak with me about 94 different lobbying campaigns), the way
I built the sampling frame created a bias in favor of large and resource
rich groups and high profile issues.
2. As a small editorial aside, I think our discipline’s access to elites in Washington, especially members of Congress, has been hurt by large numbers of poorly trained students and scholars who arrive unprepared for interviews.
References
Fenno, Richard. 1978. Home Style: House Members in Their Districts.
Boston: Little Brown.
Goldstein, Kenneth. 1999. Interest Groups, Lobbying, and Participation in
America. New York: Cambridge University Press.
Kingdon, John. 1989. Congressmen’s Voting Decisions. Ann Arbor:
University of Michigan Press.
Perry, H.W. 1994. Deciding to Decide: Agenda Setting in the United States
Supreme Court. Cambridge: Harvard University Press.
Conducting and Coding Elite Interviews

by Joel D. Aberbach, UCLA, and Bert A. Rockman, Ohio State University
Introduction
In real estate the maxim for picking a piece
of property is “location, location, location.” In
elite interviewing, as in social science generally,
the maxim for the best way to design and
conduct a study is “purpose, purpose, purpose.”
It’s elementary that the primary question
one must ask before designing a study is,
“What do I want to learn?” Appropriate methods
flow from the answer. Interviewing is often
important if one needs to know what a set
of people think, or how they interpret an event
or series of events, or what they have done or
are planning to do. (Interviews are not always
necessary. Written records, for example, may
be more than adequate.) In a case study, respondents
are selected on the basis of what
they might know to help the investigator fill in
pieces of a puzzle or confirm the proper alignment
of pieces already in place. If one aims to
make inferences about a larger population,
then one must draw a systematic sample. For
some kinds of information, highly structured
interviews using mainly or exclusively close-ended questions may be an excellent way to proceed. If one needs to probe for information and to give respondents maximum flexibility in structuring their responses, then open-ended questions are the way to go.
In short, elite
studies will vary a lot depending on what one
wants to learn, and elite interviewing must be
tailored to the purposes of the study. Our focus
here will be on the types of studies we have
conducted as reported in Bureaucrats and
Politicians in Western Democracies (coauthored
with Robert D. Putnam, 1981) and In the Web
of Politics (2000)—studies of elite attitudes,
values, and beliefs—but from time to time we
will make reference to other types of studies as
well.
Designing the Study
Our goals were to examine the political
thinking of American administrators and (in
the first round of our study) members of
Congress. We were interested in their political
attitudes, values, and beliefs, not in particular
events or individuals. A major aim was to
examine important parameters that guide elites’ definitions of problems and their responses to them. We wanted to generalize about these
phenomena in the population of top administrators,
both political appointees and high-level
civil servants, and among high-level
elected officials as well. This meant that we
had to draw representative samples of members
of these elites and use an interviewing
technique that would enable us to gauge
subtle aspects of elite views of the world.
Drawing a sample of members of Congress
was quite straightforward. Lists of members
are easily accessible and drawing them at
random (stratified into two broad age groups
in our case) was a simple process. Sampling
high-level administrators was quite another
matter. We wanted to study officials who
worked for federal agencies primarily concerned
with domestic policy (a requirement
for a comparative aspect of the study), who
were at a level where they might have a say
in policymaking, and who, for convenience’
sake, worked in the general vicinity of
Washington, DC. Further, we wanted to make
sure that we covered both political appointees
and career civil servants. To accomplish these
goals, we had to compile lists of top administrators
in each agency, determine who held
top career positions in each hierarchy (our
criterion was that civil servants had to hold
the top career positions in their administrative
units and report to a political appointee), and
then sample in such a way that we had representative
groups of career and noncareer executives.
We eventually drew people randomly
from cabinet departments, regulatory agencies,
executive agencies, and independent agencies
in proportion to the number of executives in
each sampling classification within each
agency.
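To make the sampling procedure concrete, here is a minimal sketch in Python of proportional stratified sampling of this kind. The agency names, stratum sizes, and interview target are invented for illustration; they are not the study's actual sampling frame.

```python
import random

random.seed(42)  # make the illustrative draw reproducible

# Hypothetical sampling frame: executives grouped by agency and by
# appointment type (career civil servant vs. political appointee).
# All agency names and counts are invented for illustration.
frame = {
    ("Agriculture", "career"):    [f"exec_{i}" for i in range(40)],
    ("Agriculture", "political"): [f"exec_{i}" for i in range(40, 52)],
    ("FTC", "career"):            [f"exec_{i}" for i in range(52, 70)],
    ("FTC", "political"):         [f"exec_{i}" for i in range(70, 76)],
}

TARGET = 30  # total interviews the project can afford
population = sum(len(members) for members in frame.values())

sample = []
for stratum, members in frame.items():
    # Allocate interviews in proportion to the stratum's share of the
    # population, then draw that many members at random without replacement.
    n = round(TARGET * len(members) / population)
    sample.extend((stratum, person) for person in random.sample(members, n))

# Rounding can leave the realized total a person or two off TARGET;
# in practice one adjusts the largest stratum to hit it exactly.
print(len(sample), "executives selected")
```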
The good news is that bureaucratic elites
are little studied by political scientists, so
response rates were very high (over 90% for
career civil servants). They were lower for
members of Congress—in the high seventies
in the first round of our study (1970–71) and
lower when we tried a second round in
1986–87, so low in fact that we did not feel
it appropriate to use the second-round congressional
interviews for anything but illustration.
This points to an important problem
facing those wishing to interview elites. One
must get access, and it can be quite difficult
to secure interviews with busy officials who
are widely sought after. It helps to have the
imprimatur of a major and respected research
house like the Brookings Institution, and it is
important to be politely persistent. Do not be too put off when, calling after having written a letter, you are told that your potential respondent is too busy to see you; call back and try again to arrange the interview. One
should write a letter laying out the general purpose of the
study and be ready to repeat your “spiel” over the phone to
the appointments secretary. Mention prestigious organizational
sponsors if you have them and mention some past experience
in studying the area of interest if you have it. It sometimes
helps to mention what you’ve written, but do not expect respondents
or those who schedule them to be impressed that
you have published in APSR or its equivalents. They are more attuned to other types of journals (like National Journal) or to the press.
Getting in the door is important, but what you do next is even more important. We'll touch only briefly on the suggestion that you refrain from spilling the coffee that may be offered to you or that you look reasonably presentable to the rather conservative dressers in Washington, and get to the heart of the matter—what you ask the respondents
and how you ask them. What you ask is, of course, a function
of what you want to know, but so also is how you ask the
questions. As noted, we wanted to examine the political thinking
of American political and bureaucratic elites. We wanted
to know about their political attitudes, values, and beliefs. We
were not trying to predict discrete behavior, for example, their
choices of particular policies; rather, we were interested in examining
the parameters that guided their definition of problems
and their responses to them.
To accomplish our goals, we decided on an approach using
mainly open-ended questions that allowed the respondents to
engage in wide-ranging discussions. One of our main aims
was to get at the contextual nuance of response and to probe
beneath the surface of a response to the reasoning and premises
that underlie it. Consequently, we decided on a semi-structured
interview in which the open-ended questions we
mainly relied on gave the respondents latitude to articulate
fully their responses. This requires great attention from the interviewer
since such an interview has a more conversational
quality to it than the typical highly structured interview and
questions may, therefore, be more easily broached in a manner
that does not follow the exact order of the original interview
instrument. There is an obvious cost here in terms of textbook
advice on interviews—respondents may not necessarily have
been asked questions in the same order—but in our experience
the advantages of conversational flow and depth of response
outweigh the disadvantages of inconsistent ordering. That suggests
a key principle of real-world research—sometimes one
does something that is not the ideal (in this case, vary the order
of questions) because the less than ideal approach is better
than the alternative (in this case, a clumsy flow of conversation
that will inhibit in-depth ruminations on the issues of
interest).
There are three major considerations in deciding on a
mainly open-ended approach rather than one using more close-ended
questions. One is the degree of prior research on the
subject of concern. The more that is known, the easier it is to
define the questions and the response options with clarity, that
is, to use close-ended questions. Our study explored a series
of rather abstract and complex issues in a relatively uncharted
area at the time, the styles of thinking as well as the actual
views of American political and bureaucratic elites. Emphasizing
close-ended questions and tight structuring would not have
served our major purpose, the exploration of elite value
patterns and perceptions, but we did recognize the cost—the
kinds of data we collected made it more difficult to produce
an analytically elegant end product, at least if one uses statistical
elegance as the major criterion in evaluating analytical
elegance.
A second consideration leading us to use an open-ended
approach was our desire to maximize response validity.
Open-ended questions provide a greater opportunity
for respondents to organize their answers
within their own frameworks. This increases the
validity of the responses and is best for the
kind of exploratory and in-depth work we were
doing, but it makes coding and then analysis
more difficult.
The third major consideration is the receptivity
of respondents. Elites especially—but other
highly educated people as well—do not like
being put in the straitjacket of close-ended
questions. They prefer to articulate their views,
explaining why they think what they think.
Close-ended questions (and we did use some)
often elicited questions in return about why we
used the response categories we used or why we framed the
questions the way we did. Parenthetically, in later rounds of
our longitudinal study, we used close-ended versions of some
of our earlier open-ended questions, but by then we had the
benefit of great experience in ascertaining both the mind-sets
of our respondents and the range of responses they would find
tolerable.
We should stress again that there are costs in using open-ended
questions. First, there are substantial costs in time spent
in doing the interviews themselves, in transcribing them or
otherwise preparing them for coding, and in the coding
process itself (see below). Second, there are the related costs
in money. The process is slow and the costs mount in direct
relation to the time spent. Third, as mentioned above, there
are costs in analytic rigor, certainly in terms of limits on what
one can do in data analysis. But, going back to the maxim on
purpose, answering the research questions one starts with in
the most reliable way is more valuable than an analytically
rigorous treatment of less reliable and informative data.
Conducting the Interviews
We noted earlier some of the practical considerations in getting
an interview. They include such things as writing a brief
letter to respondents on the most prestigious, non-inflammatory
letterhead you have access to, stating your purpose in a few
well-chosen sentences (no need to be too precise or certainly
overly detailed); having a good “spiel” prepared for the appointments
secretary and later for the respondent prior to the
interview; and fending off questions about your hypotheses
until after the interview is over. That prevents contamination
of the respondents and also puts this part of the conversation
on the respondent’s “time” and not the time reserved for the
interview. Obvious advice includes the need to be persistent
and to insist firmly, but politely (and with a convincing explanation)
that no one but the person sampled, i.e., the principal,
will do for the interview.
It can be a major undertaking in time and effort to secure
the interview, but success there is only the beginning. We did
most of our interviews in the respondents’ offices, but you
should be prepared to do them where you can. Our most
harrowing experience was interviewing an administrator as he
drove to an appointment. He was very animated (and in a
hurry) and nearly got himself and us killed as he weaved
through Washington traffic late in the afternoon while presenting
his views in a passionate and often amusing style.
We tape-recorded the interviews in order to facilitate use of
a conversational style and to minimize information loss. Few
respondents refused to be taped, and almost all quickly lost
any inhibitions the recorder might have induced. Starting with
innocuous questions about the person’s background facilitates
this since people find talking about themselves about as fascinating
as any subject they know. Our judgment (and the judgment
of our coders) is that the interviewees were frank in
their answers, especially because our questions focused on
general views and not information that might jeopardize the
respondents’ personal interests.
Coding Open-Ended Interviews
Coding procedures assume paramount importance when, as
in our studies, one employs open-ended interviewing techniques
to elicit subtle and rich responses and then uses this
information in quantitative analyses. Particularly in elite interviewing,
where responses to questions are almost always coherent
and well formulated, respondents can productively and
effectively answer questions in their own ways and the analyst
can then build a coding system that maintains the richness of
individual responses but is sufficiently structured that the interviews
can be analyzed using quantitative techniques. The
wealth of material contained in the responses, in fact, allows a
varied set of codes, some recording manifest responses to the
questions asked and some probing deeper into the meaning of
the responses.
We developed three basic types of codes to achieve the purposes
of our study (Aberbach, Chesney, and Rockman 1975,
14–16). Manifest coding items involved direct responses to
particular questions (for example, whether differences between
the parties are great, moderate, or few). Latent coding items
were those where the characteristics of the response coded
were not explicitly called for by the questions themselves (for
example, we coded variables dealing with positive and negative
references towards the role of conflict from questions
about the nature of conflict in American society and the degree
to which it can be reconciled). Global coding items were
those where coders formed judgments from the interview transcripts
about general traits and styles (for example, coding
whether respondents employed a coherent political framework
in responding to political questions).
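One simple way to see the distinction among the three code types is as a codebook data structure. The Python sketch below is illustrative only; the variable names, question numbers, and categories are invented and are not the study's actual coding scheme.

```python
# A toy codebook mixing the three code types described above. The
# variable names, question numbers, and categories are invented for
# illustration; they are not the study's actual coding scheme.
CODEBOOK = {
    "party_differences": {   # manifest: a direct answer to one question
        "type": "manifest",
        "source": "Q12",
        "categories": ["great", "moderate", "few"],
    },
    "conflict_affect": {     # latent: inferred from how conflict is discussed
        "type": "latent",
        "source": "Q15-Q16",
        "categories": ["positive", "mixed", "negative"],
    },
    "coherent_framework": {  # global: coder judgment over the whole transcript
        "type": "global",
        "source": "entire transcript",
        "categories": ["yes", "partial", "no"],
    },
}

def code_interview(transcript_id, judgments):
    """Record one coder's judgments, rejecting values outside the codebook."""
    for variable, value in judgments.items():
        allowed = CODEBOOK[variable]["categories"]
        if value not in allowed:
            raise ValueError(f"{variable}={value!r} not in {allowed}")
    return {"transcript": transcript_id, **judgments}

record = code_interview("int_007", {"party_differences": "moderate",
                                    "coherent_framework": "yes"})
```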
In the first round of the study we had two sets of coders
independently code each interview and calculated inter-coder
reliability coefficients for the various variables. Not surprisingly,
on average, the manifest items were the most reliable,
followed by the latent items, and then the global items. We increased reliability further (we hope) by having a study director reconcile disagreements in conference with the coders. Our experience with coding taught us that
simultaneous coding by the two coders with immediate reconciliation
yielded much more reliable coding than serial coding
where large numbers of interviews were coded prior to
reconciliation meetings.
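The essay reports reliability coefficients without naming a particular statistic. Cohen's kappa, a standard chance-corrected agreement measure for two coders, is used below purely as an illustration of how such a coefficient might be computed for one coded variable.

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Chance-corrected agreement between two coders on one variable.

    coder_a and coder_b are equal-length lists of category labels,
    one entry per interview.
    """
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Chance agreement: sum over categories of the product of each
    # coder's marginal proportions.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)  # undefined if expected == 1

# Two coders rating one manifest item across six interviews (made-up data).
a = ["great", "few", "moderate", "great", "few", "great"]
b = ["great", "few", "great", "great", "few", "moderate"]
print(round(cohens_kappa(a, b), 2))  # 0.45, i.e., moderate agreement
```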
Some Problems and Advantages in Doing a Longitudinal Elite Study
We encountered a series of problems in doing a longitudinal
study, most of which impacted both the interviews themselves
and the coding.
First, elite systems do not necessarily remain stable over
time. This is particularly likely in the bureaucracy, which is
actually a much more dynamic institution than stereotypes
might lead one to believe. Aside from reorganizations and the
creation of new administrative units, which were easily dealt
with when we constructed each successive sampling frame,
the Civil Service Reform Act of 1978 created the Senior
Executive Service (SES) to replace the system of Supergrades
that existed prior to the act. SES created a rank-in-the-person
system in place of a rank-in-the-position system and made us
reexamine our earlier criterion of interviewing the highest
civil servant in each hierarchy. We eventually decided to continue
sampling the highest civil servant in each hierarchy for
purposes of continuity, but added a sample of other SES
executives. In the end, this proved substantively beneficial, as
readers of In the Web of Politics will see.
Second, by round two of the study we encountered a few
executives who knew something of our earlier work published
on the basis of round one. We interviewed these people, but
there may be some unknown effects of their familiarity with
the project.
Third, we had to decide whether to repeat questions in later
rounds even though we knew we could ask better ones. Following
the advice of Philip Converse, we tended to repeat
items, choosing to keep whatever measurement error the original
item introduced over the greater problem of comparing results
from different questions.
Fourth, as already mentioned, the costs of open-ended
longitudinal studies are considerable. We dealt with this by
shortening the instrument in subsequent rounds—retaining
the questions we knew to be key as our understanding deepened
over time. In addition, we supplemented with more
close-ended questions in later years, and now have a basis
for comparing open and closed questions in certain areas.
There were also advantages beyond those we have already
mentioned to doing a longitudinal, heavily open-ended study.
First, once we actually developed our codes in the first
round of the study, the costs of coding dropped substantially in
subsequent rounds. We did some refining of the codes, of
course, but the costs here were minor compared to our original
investment.
Second, as noted above, in each succeeding round, as we
developed a fuller understanding of what and how elites think,
we were able to use more close-ended questions to supplement
the open questions.
Third, our interviewing technique means that we have a raw
product that should be of great use to historians. We interviewed
during a turbulent period in American administrative
history (particularly during the Nixon and Reagan administrations)
and we have transcripts of in-depth conversations with
people who are historically important. Because of confidentiality
promises, these interviews will not be available until respondents
are deceased, but eventually the interviews should
prove valuable in understanding the era when our respondents
wielded power.
Conclusions
To reiterate the key point, studies must be designed with
purpose as the key criterion. Elite studies are no exception.
We conducted our longitudinal study the way we did because
of our desire to probe deeply elite attitudes, values, and beliefs,
and also because of the state of prior research in the
area, our desire to maximize response validity, and our sense
that elites would be most receptive to the type of interview
we conducted and would be well positioned to handle the
types of questions we asked. While we would use more close-ended
questions in future research because of what we learned
(and we used more as time went on), the basic approach of a
semi-structured and largely open-ended interview still seems
best to us. We learned a great deal from our subjects—and
about our subject—through these conversations. Using a systematic
coding procedure not only allowed us to employ quantitative
techniques in our later analyses, but also kept us from
allowing the colorful interviewee or especially enjoyable story
to dominate our view of the overall phenomena we were
studying. At the same time, the interviewing technique helped
us to use clues from the most insightful respondents to suggest
hypotheses for our analysis.
We close with a general observation about elite interviewing
studies: they take a lot of persistence, time, and whatever
passes these days for shoe leather, but they are immense fun.
You’ll meet some of the most interesting people in the country
and learn a huge amount about political life and the workings
of political institutions. If you like both politics and political
science, it’s one terrific way to spend your time.
References
Aberbach, Joel D., Robert D. Putnam, and Bert A. Rockman. 1981. Bureaucrats and Politicians in Western Democracies. Cambridge: Harvard University Press.
Aberbach, Joel D., and Bert A. Rockman. 2000. In the Web of Politics: Three Decades of the U.S. Federal Executive. Washington, DC: Brookings Institution Press.
Aberbach, Joel D., James D. Chesney, and Bert A. Rockman. 1975. “Exploring Elite Political Attitudes: Some Methodological Lessons.” Political Methodology 2:1–27.
Ethical Dilemmas in Personal Interviewing
by Laura R. Woliver, University of South Carolina
There are many issues of ethics and openness
in elite interviewing that I have
learned how to deal with through the years.1
My work has focused on people who cause
trouble: protesters, litigants, defendants, sidewalk
counselors, rescuers, and abortion
providers, to name a few. Of course, in dealing
with people you are studying you must be
honest and ethical. It is important to remember
that their activism comes from something
they deeply feel. Their activism grows out of their beliefs, opinions, experiences, and sense
of community. They do not exist as activists
so that you can add more lines to your vita
or finish your dissertation. You must leave
them in the same position in which you found
them. You must do no harm to them.
Openness: Recently, my work has focused
on reproductive politics (see The Political
Geographies of Pregnancy, University of Illinois
Press, 2002). I have interviewed dozens of pro-choice and pro-life (the terms they use to describe themselves) activists, lobbyists, interest
group leaders, litigants, and lawyers. This
work is often tricky ethically. For example, I am genuinely fascinated with pro-life activism. However, I have found it more difficult to book interviews with pro-life people than with pro-choice people. This is probably due to my
being an academic or my affiliation with a
women’s studies program. Pro-life activists
might assume that I am very pro-choice and
that my purpose in wanting to interview them
or observe their meetings or demonstrations
might be to belittle them. That is not my intention,
but pro-life people are nervous for many
reasons about letting someone like me learn
more about their organization, tactics, resources,
and plans.2
When I try to schedule a meeting
with a pro-life person, often they ask me,
“What are you?” (i.e., are you pro-life or not?).
Rather than tell them my position on abortion, I
tell them that I am an academic researcher and
that I study people who do more than simply
vote about an issue. I tell them the truth, that I
am fascinated by their activism and want to
learn more about it. Usually (not always) a response
along those lines satisfies them. You
have to hedge sometimes in order to get an interview.
However, you cannot mislead people.
Conflicts: Researchers sometimes find
themselves with conflicts of interest concerning
their access to interviewees and what they
are going to write about. In some of the
groups I have studied I have learned about
personal animosities between group members
or untoward behavior by activists. If I write
about this (and the activists find out about it),
it can hamper my access to these activists in
the future for follow-up interviews. Activists
also travel in small worlds and can let each
other know about less-than-satisfactory experiences
with a researcher. So, my future interviews
could be jeopardized. In group politics,
conflicts are sometimes important since they
add to the fragile nature of a group’s mobilization.
Thus I mask the conflict and discuss
it in another way. I do not name people who
hate each other, or who feel that someone is
hogging the spotlight or using the group for
his or her political career or personal agenda.
But, I do write about factions within the
group if it is an important part of the group
dynamics. If it is just gossip, I don’t use it. It
is up to the judgment of the interviewer to
know when these conflicts are a serious part
of the story and when they are just part of
the complexities of people’s personalities and
relationships and not important politically.
Context: Depending on who you are interviewing
and what the context is, you might
have to anticipate how to handle people
telling you things that are very painful to
them still. It helps you make these judgments
if you are very well informed about the context
in which your interviewees work. For instance,
for my first book (From Outrage to
Action: The Politics of Grass-Roots Dissent,
University of Illinois Press, 1993) I interviewed
people who organized into ad hoc,
issue-specific groups in order to recall judges
who they felt had made insensitive comments
during the sentencing phases of two rape trials.
One group successfully recalled Judge
Archie Simonson in Madison, Wisconsin, who
remarked that given the way women dress,
rape is a normal reaction. Another group, a
few years later in a rural area of Wisconsin,
tried to recall another judge who said that a
four-year-old sexual assault victim was a
“particularly promiscuous young lady.” Their
recall was not successful. I knew before I did
any interviews for these two cases that rape is
an underreported crime, often kept secret by
victims. As a scholar of gender I also knew
how widespread rape is in any population. I
surmised, then, that some of the people remembering
their efforts to recall these judges
had sexual-assault experiences in their lives.
However, it was not my purpose to dredge up
painful memories from someone’s past. Since
I am not a counselor or psychologist, I am
not trained in how to help people who might
be recalling something very traumatic in their
lives. Yet, given the statistics on rape in this
country, the issue comes up in my interviews. In a couple of
the interviews when I asked why it was that this issue (a
judge saying these insensitive things in a rape case) caused
them to become politically active, the respondents would become
upset. Some cried. Some revealed to me that they had
been raped decades ago when nobody talked about it, no crisis
hotlines or support groups existed, and they had simply kept
the event to themselves. When the judge, however, said these
things about another rape victim, they were in a place in their
lives where they felt they had to do something about it. Their
activism is an important part of my book. During the interviews,
however, I was faced with upset people. What I did
was tell them that I understood and that I was glad they were
telling me their story. I also reassured them that their identities
would not be revealed in anything I published. In fact, that
pain even decades later is a research finding. In my case, it
displays how important rape reform efforts are within the large
and diffuse women’s movement and how controversial sexual
trauma is for people their whole lives long.
Additional tips: From my fieldwork I can offer some tips
for others doing person to person interviewing. I often end my
interviews with a question like this: “Is there anything you
would like to tell me about which I haven’t thought to ask
you?” It is amazing what I learn from this question. Interviewers
cannot anticipate everything and you need to give the respondents
openings to tell you about an event, connection, or
insight that you didn’t think to ask them about. Don’t let
your interview schedule tie you down. I do most of my interviews
in respondent’s offices, homes, churches, or community
centers. In one person’s home I noticed two framed portraits
on the dining room wall: Reverend Dr. Martin Luther King
and John F. Kennedy. I decided not to ask the interviewee
what political party they affiliated with.
During my interviews, I take notes about the environment.
This helps me to remember the context of the interview and
to understand my interviewees. Since most of the activists I
study are women, they are often balancing many other obligations
as I talk to them. My field notes include, “Children kept
running in and out of the room,” and, “She paused while she
took a phone call from her mother.” One judicial recall activist
took a call during an interview, and I could tell that she
was speaking to a woman who could not afford an abortion;
the activist was part of an unofficial local network that could
piece together the funds for an abortion. Though I did not intend
for this interview to cover abortion, I now knew more
about the depth of this activist’s commitment.
It is amazing what people will xerox and give to you once
you are in their office. I have been given copies of confidential
internal memos, drafts of amicus briefs, correspondence
between brief signers that I would never be given if I met
the person outside of their office. Even if they promise to
send it to me when they return to their office, usually they
never do. Either they are too busy or when they are back in
their offices they think twice about handing the materials over
to a researcher and then “forget” to do it. I have also witnessed
interesting interactions while waiting in interest group
lobbies. I once saw a UPS man pick up boxes of newsletters
from a group that claimed to be separate from the interest
group whose lobby I was sitting in. I figured out that one
group was really just an affiliate of the other, with a separate
post office box, but running its operations out of the larger
group's offices.
Another issue is whether to tape record. I have quit using
tape recorders; it is up to you whether you use one. I find
them intrusive for me and for the interviewee. I write notes
during the interviews and flesh them out later. If it is something
important that I might like to quote directly in my work,
I read it back to the interviewee to make sure that I wrote it
down correctly.
Ethically, it is important that we researchers send copies of
what we write and publish to the people we have interviewed.
It is more than a courtesy. It is an acknowledgment on the
part of the researcher that without the interviewee, our work
would be diminished. Many times people have called or written
to thank me for sending them a copy of the chapter in
which their interview is included in my works. They feel
happy and vindicated that their stories will not fade away and
that their activism and efforts were noticed and appreciated.
Finally, be sure to ask them if you can come back later, or
call on the phone, if you have further questions or issues.
Sometimes I must recontact someone I interviewed early in
my rounds of fieldwork because later interviews brought up
topics and issues of which I was not fully aware when I
began the fieldwork.
Serendipity: Although it is probably a sin in political science
to admit it, luck plays a big role in fieldwork and interviews.
If you are interviewing someone and they ask you if
you would like to accompany them to their next meeting, or
ride in the taxi with them while they go downtown and file
legal papers at the courthouse, make sure that you do so.
People will be very candid when outside of their usual home
or work environment. I have been able to secure interviews
with additional people because they saw me while I was with
an activist they know and respect.
Inspiration: Finally, I must report that this work has been
fun for me. The people I have interviewed over the years are
inspiring. Their activism and enthusiasm invigorates me regardless
of whether I agree with their positions. They have
taught me a lot about the nitty-gritty of politics. They have
been active on issues even when they knew it could hurt them
in their small towns, in their business networks, in their neighborhoods
and communities. They have picketed stores in
Wisconsin snow storms. They have badgered pompous politicians
to keep their promises and be more connected to people
like them. They have directly challenged racism, sexism, and
class bias. It has been an honor to talk to many of these
people.
I hope that you enjoy your fieldwork as much as I have
enjoyed mine (despite being stood up for interviews, rescheduled,
ignored, and put off). In a discipline that sometimes
doesn’t value this kind of work, it is interesting nonetheless to
notice how many political science classics are built on elite
interviewing and fieldwork. You will most likely find that your
interviews give you far more than you ever expected they
would. If I can be of help to anyone doing this kind of work,
just let me know.
Notes
1. Thanks to Beth L. Leech, Rutgers University, for including me in
this workshop. I learned a lot listening to my fellow interviewers and
gleaning wisdom from their experiences. Thanks also to Jeffrey Berry and
the Political Organizations and Parties Section of APSA for putting the
workshop on.
2. It is important to remember, also, that some pro-life organizations
are nervous because of lawsuits against them and because some adherents
have practiced violence. They are very reluctant to let a researcher into
their community given the legal issues involved in some of their
activities.
Validity and Reliability Issues in Elite Interviewing
by Jeffrey M. Berry, Tufts University
Many of the early important empirical
works on policymaking in Washington
were built around elite interviews. We first
learned about how Congress really operates
from pioneers in elite interviewing such as
Lewis Anthony Dexter (1969), Ralph Huitt
(1969), and Donald Matthews (1960). No less
revered is the scholarship of Richard Fenno
(1978), John Kingdon (1995), and Robert
Salisbury (1993), who have produced enduring
and respected work from elite interviewing.
Yet there are few other contemporary political
scientists working on public policymaking
who have built reputations for their methodological
skills as interviewers. Elite interviewing
is still widely used as the basis for
collecting data, but most interviewing depends
on a few trusted templates. Most commonly,
elites in a particular institution are chosen at
random and subjected to the same interview
protocol composed of structured or semistructured
questions. For example, state legislators
are asked a series of questions about their
attitudes on particular issues or institutional
practices. Or policymakers involved in certain issues are selected and then quizzed about those matters. Some confident and skilled interviewers, like William Browne (1988) and Richard Hall (1996), combine
different interview approaches in their work
but they are the exceptions and not the rule.
When scholars use a sample of interviews, it
is the statistical manipulation of the coded interview
transcripts that is considered to be the
rigorous part of the research; the fieldwork itself
is largely viewed as a means to that end.
Unless researchers pay close attention to the
field methodology, though, the “error term” in
elite interviews can easily cross an unacceptable
threshold. What if the questions are poorly
constructed, or the subjects are unrevealing, or,
worse, misleading in their answers? More to
the point, how does the interviewer know if
any of these problems exist?
Despite the common use of elite interviews
to collect primary data, it is a skill that is
rarely taught in graduate school. In contrast,
methods courses pay enormous attention to
the most minute of statistical issues, and
newly minted Ph.D.’s enter the profession
with an impressive proficiency in quantitative
methods. What little training graduate programs
offer related to interviewing is usually
restricted to matters of question wording and
bias (and often this comes about in training in
survey research, which relies on different
kinds of questions). This lack of attention
mirrors readers’ expectations of published
work using elite interviews. There simply isn’t
a demand for political scientists to document
the resolution of methodological issues associated
with this kind of interviewing. It is usually
sufficient merely to describe the sampling
framework (if there is one) and to reprint the
interview protocol in an appendix.
The methodological issues in elite interviewing
are serious and involve both issues
of validity—how appropriate is the measuring
instrument to the task at hand?—and
reliability—how consistent are the results of
repeated tests with the chosen measuring
instrument? I’ve confronted these issues for
years as almost all my research projects have
used elite interviews. I was lucky enough to
be trained by a master—Robert Peabody of
the Johns Hopkins University. As a graduate
student I followed him around the Congress
and sat in on his interviews with legislators,
staffers, and lobbyists. He taught me some of
the basic skills of an interviewer. None was
more important than this: the best interviewer
is not one who writes the best
questions. Rather, excellent interviewers are
excellent conversationalists. They make interviews
seem like a good talk among old
friends. He didn’t carry a printed set of
questions in front of him to consult as the
interview progressed; yet he always knew
where he was going and never lost control of
the discussion. He gave his subjects a lot of
license to roam but would occasionally corral
them back if the discussion went too far
astray.
His method illustrates the paradox of elite
interviewing: the valuable flexibility of open-ended
questioning exacerbates the validity
and reliability issues that are part and parcel
of this approach. As I’ve followed my initial
training and developed my own style of elite
interviewing, I’ve thought about the methodological
problems of open-ended questioning
and tried to develop ways to minimize the
risks associated with this approach. Here I
focus on three methodological issues common
to this kind of elite interviewing. In
each case, I’ll offer some suggestions to
improve the chances that the data acquired
won’t be badly compromised by validity or
reliability concerns. These suggestions are far
from foolproof. Open-ended questioning—the
riskiest but potentially most valuable type of
elite interviewing—requires interviewers to
know when to probe and how to formulate
follow-up questions on the fly. It’s a
high-wire act.
Passion, not Dispassion. During a recent trip to Washington
I interviewed a trade association lobbyist about ergonomics
standards being considered by OSHA. He responded to my
first question with a half-hour diatribe against OSHA. He repeatedly
denounced its behavior, accused bureaucrats there of
unethical actions, and never acknowledged that there might be
something to the workers’ health and safety problems that the
proposed regulations addressed. At one point he mocked
OSHA, saying a bureaucrat there boasts that OSHA has “a
Zen to regulate.” At another point he said OSHA “intimidated
witnesses” at a hearing—a very serious charge. At the same
time, he gave a wonderfully detailed history of the development
of these regulations, which is why I let him carry on
rather than try to move him on to other questions I had. The
trade group lobbyist was bright, articulate, and persuasive and
I walked away feeling I had learned a lot on this issue.
But what I had learned was certainly not the “truth” about
the OSHA regulations. Since a main focus of the research was
to study how lobbies use arguments to push their causes, I had
an interest in having him state his organization’s point of view
as baldly as he wanted to. Still, if the goal of interviews is to
find out the truth about what happened—how was the bill
passed, how was the deal cut, how was the judge chosen?—
there is a very high risk of finding one interviewee more persuasive
than the others and having that one interview strongly
shape our understanding of the issue. It's easy to make oneself believe that one account is more accurate than another because a subject was more knowledgeable or more detailed in her answers, rather than admitting that we liked that person better or her story was closer to our own take on the situation. In the case of the lobbyist on the ergonomics regulations, it was easy to recognize the lack of objectivity. It was more difficult for me to judge the OSHA bureaucrat that I later interviewed. He was much more measured, seemingly more objective. But then again his
political point of view was much closer to my own.
Interviewers must always keep in mind that it is not the
obligation of a subject to be objective and to tell us the truth.
We have a purpose in requesting an interview but tend to ignore the
reality that subjects have a purpose in the interview too: they
have something they want to say. Consciously or unconsciously,
they’ve thought about what they want to say in the
period between the request and the actual interview. They’re
talking about their work and, as such, justifying what they do.
That’s no small matter.
Sometimes all we want to know is the subject’s point of
view and this problem doesn’t loom as large. Or we’re studying
just a single case so no one interview is likely to carry
too much weight. Other times, though, we’re trying to come
as close to the truth as is humanly possible for a number of
different cases. How do we try to minimize this problem then?
Here are three suggestions:
• Most obviously, use multiple sources. Although this goes a
long way in guarding against self-serving or “party-line” accounts,
it’s much easier to preach than to practice. Elite interviewing
is highly time-consuming. It takes me two hours of transcription for every half hour of interview (a rough time budget along these lines is sketched after this list). If you've traveled somewhere to conduct the interviews, there's limited
time (money) to conduct them. If one is studying multiple
cases, it’s breadth versus depth, a familiar problem to field researchers
(King, Keohane, and Verba 1994). It’s very tempting
for interviewers to go for breadth over depth—doing more
cases rather than doing more detailed cases—because in elite
interviewing the error term is largely hidden to those outside
the project while the number of cases, the “n,” is there for all
to see and judge.
• Ask the subject to critique his own case. Don’t show skepticism
and don’t challenge the subject. With subtlety, move the
subject away from his case to the politics of the situation.
For example, “Well, you have me convinced. Why aren’t the
Democrats buying this?” Or a bit more pointedly, “I’m a little
confused on something. I read in the Washington Post the
other day that labor was making progress with the committee
chair. What’s the part of their argument that resonates with
legislators?” This latter approach, using a third party [the
Post], is a way of taking the subject away from his own
perspective without demonstrating one’s own personal
skepticism.
• Use the interview for what it is. If you’ve got an ideologue
or someone who isn’t going to be terribly helpful in a particular
area because of their bias, think about where you can
spend the time most profitably. Move more quickly to questions
that might have a higher payoff.
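The time budget referred to in the first suggestion above, using the 4:1 transcription ratio reported there; the interview count and average length are invented examples.

```python
# Back-of-the-envelope fieldwork budget using the 4:1 transcription
# ratio reported above (two hours of transcription per half hour of
# interview). Interview count and average length are invented examples.
N_INTERVIEWS = 25
AVG_INTERVIEW_HOURS = 1.0
TRANSCRIPTION_RATIO = 4.0  # transcription hours per hour of tape

interview_hours = N_INTERVIEWS * AVG_INTERVIEW_HOURS
transcription_hours = interview_hours * TRANSCRIPTION_RATIO
print(f"{interview_hours:.0f} h interviewing, "
      f"{transcription_hours:.0f} h transcribing")  # 25 h and 100 h
```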
Excessive personal bias isn’t a chronic
problem. Some subjects are more than happy
to tell you about the weaknesses of their
cases or speak admiringly of the other side
while detailing their successes. Even so,
there’s a danger here too. In interviewing a
lobbyist for an airline trade group, I was
struck by his tendency to lower his voice—
so no one in the hallway could hear—when
he criticized his own organization for its
blindness about the industry’s shoddy service.
It wasn’t until later when I was typing
up the interview that I thought about how
seductive this was. It’s a little too easy to
believe you’re getting the truth when it’s coming from a
source who is going out of his way not to give you the party
line.
Exaggerated Roles. Before I spoke with this airline lobbyist
I interviewed another lobbyist for a trade group in a different
part of the industry. He quickly came alive and gave me a
very animated, highly detailed account of the group’s efforts
on an important bill dealing with the aviation trust fund. (It
became known as “Air-21” during its movement through the
Congress.) In his rendition, his group was at the center of the
lobbying effort. For years proposals to change the formulas in
the aviation trust fund had gone nowhere but when former
Representative Bud Shuster (R-PA), the highly influential chair
of the Transportation Committee, got behind it, the bill went
through the House easily. The Senate was still problematic and
in this lobbyist’s history, a critical juncture came when:
We went to those who we wrote [PAC] checks to. We went to
[Senator Mitch] McConnell and said “You know, you said you
wanted to meet with stakeholders. Well, we’re a stakeholder.
You keep warning us what will happen if the other side takes
over.” So I said to him, “what difference does it make? You
never do anything. You never do anything.”
The hyperbole in this passage is obvious. Lobbyists don’t
talk to United States senators that way. Still, it is significant
that he was in the room with Mitch McConnell [R-KY] to talk
about what his group wanted. But while some of the hyperbole
was easily recognized, further research on the case made me
rethink this group’s role. Later, when I asked a staffer on the
House Transportation Committee which groups were active on
the issue, this lobbyist’s group was not included in the committee
aide’s list. And when I interviewed the other aforementioned
aviation lobbyist, he mentioned a number of lobbyists active on
this issue but not the one who said his meeting with McConnell
was so pivotal.
There are at least three methodological issues illustrated
here. One is simple exaggeration. All of us like to think that
what we do has an impact and Washington-based elites may
be among the worst of all since influence is the coin of their
realm. It was easy to see the exaggeration in this case because
it was so extreme. But it will usually be much more subtle,
more skillfully conveyed, and much harder to detect. Second
is the flip side of this coin. If the subject exaggerates his role,
what got crowded out? There’s always missing information in
an interview, but exaggeration increases the amount of important
information that’s left out. Third, if there’s exaggeration,
doesn’t that call into mind the credibility of everything the
subject says, even the parts that have nothing to do with his
role?
The good news is that there are some simple remedies for
this problem. The bad news is that they can’t fully solve it:
• Do your homework. One reason why I was misled by my interview
with the first aviation lobbyist was that I walked in cold, not knowing a thing about the organization. If we're
studying a single case or just a few, this usually isn’t a problem.
We’ve already become experts in the area under study
before we do our interviews. But this project had many
cases. Still, if I had just read one or two articles in CQ
Weekly or the National Journal about this organization I
would have recognized the problem a little more quickly and
made an earlier movement away from his exaggerated and
self-congratulatory account of the trust-fund issue.
• Ask about other participants and organizations. Don’t assume
because someone exaggerates their role that they’ll minimize
that of others. At the end of my interviews on this case I
went back over this particular one and I noticed that he was
relatively accurate about the other organizations that he discussed.
My questions outside of his role turned out quite
well. Once the pressure was off him to justify his personal
effectiveness, he was an extremely helpful interview subject.
• Move away from impact questions. It’s perfectly fine to ask
about someone’s personal role or that of their organization;
you’ll learn things other questions might not uncover. Nevertheless,
when their account seems to place undue emphasis
on their own role or that of their organization, it may be
preferable to move quickly to other parts of your protocol.
Again, your time with a subject is a scarce resource. Try to
determine early on in an interview what part of the protocol
is likely to yield the best answers. You can always circle
back to a topic if you guess wrong. If you're using open-ended
questions, there’s no expectation that the conversation
is linear and that you have to follow the order of the questions
on your interview schedule.
To Probe or not to Probe. Elite interview protocols often
rely on a limited number of open-ended questions. In a set of
interviews I did for a current project on the political participation
of nonprofits, I relied on a base list of just eight questions.
Unlike the more passive role played by an interviewer
using structured questions, this type of questioning allows the
researcher to make decisions about what additional questions
to ask as the session progresses. Generally, these probes are
prompted by two different situations. The first is probing to
gather more depth about the topic of discussion. The interviewee
may be terse, cautious or unsure about how much detail
is appropriate. When this happens the natural tendency for the
interviewer is simply to ask a follow-up. Skilled interviewers
know how to probe nonverbally as well. When a subject gives
an answer that does not appear to contain all the information
needed, the immediate response on the part of the interviewer
should be to say nothing and stare expectantly at the subject.
Silence immediately creates tension and the interviewer should
be patient to allow the subject to break that uncomfortable silence
by speaking again. If that doesn’t elicit the information
needed, the interviewer can ask a follow-up question.
The second reason to probe arises when the subject takes the interviewer
down an unanticipated path. The interviewer must decide
whether the subject has offered a distracting digression or
an interesting new avenue to pursue. This kind of branching
can be very rewarding and is one of the main benefits of
open-ended questioning. Open-ended questions have the virtue
of allowing the subjects to tell the interviewer what’s relevant
and what’s important rather than being restricted by the researchers’
preconceived notions about what is important.
For the interviewer the skill factor is knowing when to
probe and when to continue with the sequence of questions on
the interview protocol. Even allowing for some elasticity in
the time the interview takes, there is a very real limit to how
many probes one can ask. Instantaneous judgments have to be
made to weigh the value of a probe on the subject you’re
talking about against “probe time” you may need later in the
session. Subsequent probes may be more or less valuable and
therein lie the difficult calculations that must be made quickly.
The critical methodological issue is that different interviewers
might not probe at the same points in the session even if
they hear the same answers to their initial question. The same
interviewer might not probe at the same point or with the same
question in two otherwise similar interviews. Consciously or
subconsciously, we’re always looking for certain things in an
interview answer and our follow-up questions reflect this. The
reliability issues become very serious if the responses are to be
quantified or if more than one person is doing the interviews.
As the interviewer prepares for a project where he or she must
negotiate the tradeoffs between systematically following the interview
protocol and following up intriguing (or incomplete)
answers, some thought might be given to these suggestions:
• Write probe notes into the copy of the protocol you carry
into the interview. Such scripted probes are for areas that you
believe that most respondents will cover in answering the
core question that you ask. Include critical material in these
reminders and make a consistent effort to get the pertinent
data even if it is not initially volunteered.
• Before the fieldwork commences, create an intermediate coding
template. In the normal sequence of a research project
built around elite interviewing, coding isn’t done until after
all the interviews are completed. Still, one can easily produce
some outlining of what is to be coded before the interviews
begin. Once this intermediate document is fixed in the head
of all of the interviewers, it increases the chances that the
probes will consistently fill in the information needed for each case or each subject (a sketch of such a template follows this list).
• Create a set of decision rules as to what to focus on if
time begins to run out. The order of questions on the protocol
may have more to do with a logical flow of topics
than a ranking of priorities. In a similar vein, have a clear
sense of what questions can be answered with a short
answer, and those that require a longer explanation.
Those questions where a briefer answer might suffice can
be reworded on the fly so that they invite a more concise
response. There is considerable variation in the expansiveness
of interview subjects and the management of answers
and probes can become pressing when the subjects are
more talkative.
• Have some stock “bridges” to use when you need to get
back to a subject area where you still need information. An
unsatisfactory answer may go on for a while and take off
into unproductive areas. Getting the subject back to the
original question is tricky, particularly if an initial follow-up
still didn’t get the information. One alternative is to move
quickly to a new question rather than let the time continue to
slip away. When I still haven’t gotten my answer I often circle
back a few questions later. You don’t want to imply that
the subject didn’t give you a satisfactory answer earlier, so
it’s necessary to hide the sense that you’re going back to
something you’ve already asked. I try to think of bridges that
will get respondents back to my subject. Something like,
“You know it’s really interesting you mentioned that about
Congress because it made me think of a situation that’s
common in the bureaucracy…” Bridges don’t have to make
logical sense so don’t wait for a perfect opening. The subject
isn’t going to stop to try to figure out how you got from A
to B because they’re focused on listening to the question that
you’re now articulating.
All these problems (and possible solutions) must be kept in
mind and balanced as the interview moves along rapidly. If
you’re taking notes rather than recording the interview, the
challenge of dealing with the issues raised here becomes even
more daunting. How can you make a clear-headed decision
about your next question when you’re listening, trying to make
sense of the answer, and taking notes all at the same time? Yet
if you are conducting the interview correctly—as a casual,
comfortable conversation—then the follow-up questions, the
branching, the movement away from unproductive avenues to
new areas, and the circling back should come across as a natural
part of that conversation. If there are too many discrete
areas where information is necessary, then open-ended questioning
might not be the most appropriate alternative for research.
For projects where depth, context, or the historical
record is at the heart of data collection, elite interviewing using
broad, open-ended questioning might be the best choice.
Even the most experienced researcher can’t anticipate all
twists and turns that interviews take. The goal here is to encourage
interviewers to think about their decision rules (or absence
thereof) for guiding themselves through problems that
emerge in this kind of research. One should not underestimate
the value of flexibility to explore unanticipated answers. At the
same time, it’s important to develop some consistency in the
way one uses probes. Although each subject is unique, many of
the problems we encounter in interviewing elites are common
ones that we confront over and over again. Systematic approaches
to those problems will enhance our confidence in the
quality of the data.
References
Browne, William P. 1988. Private Interests, Public Policy, and American Agriculture. Lawrence: University Press of Kansas.
Dexter, Lewis Anthony. 1969. The Sociology and Politics of Congress. Chicago: Rand McNally.
Fenno, Richard F., Jr. 1978. Home Style: House Members in Their Districts. Boston: Little, Brown.
Hall, Richard L. 1996. Participation in Congress. New Haven, CT: Yale University Press.
Heinz, John P., Edward O. Laumann, Robert L. Nelson, and Robert H. Salisbury. 1993. The Hollow Core: Private Interests in National Policy Making. Cambridge, MA: Harvard University Press.
Huitt, Ralph K., and Robert L. Peabody. 1969. Congress: Two Decades of Analysis. New York: Harper and Row.
King, Gary, Robert O. Keohane, and Sidney Verba. 1994. Designing Social Inquiry. Princeton, NJ: Princeton University Press.
Kingdon, John W. 1995. Agendas, Alternatives, and Public Policies. Second ed. New York: HarperCollins.
Matthews, Donald R. 1960. U.S. Senators and Their World. New York: Vintage Books.
Interviewing Political Elites: Lessons from Russia*
The past decade has opened up unprecedented
opportunities for scholars of post-communist
countries. Throughout much of Eastern Europe
and the former Soviet Union, scholars can now
engage policymakers and other elites directly
through interviews—probing their decision
calculi and obtaining unpublished information
and data. Yet the scholarly literature that would prepare researchers for interviewing highly placed individuals in these countries has notable gaps.
This is largely because most of the related literature discusses techniques for interviewing elites in advanced industrial democracies (e.g., Aberbach, Chesney, and Rockman 1975; Dexter 1970; Peabody et al. 1990). While informative and to some extent applicable, these works devote far less attention to the obstacles confronted by those working in the post-communist world. Even experience
gained in other countries undergoing transitions
from authoritarian rule may not be
entirely applicable, since the post-communist
countries arguably exhibit a number of unique
features that set
them apart from
other instances of
authoritarian breakdown
(Bunce 1998;
Terry 1993). The experience
of communist
rule and its
sudden collapse produced,
in varying degrees,
a disorganized
and often disoriented
civil society, poorly institutionalized
political
parties, weak
and financially strapped states, only partially
reconstructed security agencies, and in some
regions, suspicion of the West. All of
these features can pose unique problems for
the elite researcher, examples of which include
difficulties in constructing sampling
frames due to incomplete information; problems
in locating respondents who may work
without receptionists or answering machines;
a general apprehension towards foreigners
and/or interviews; an aversion to advance
scheduling; and suspicions aroused by standard
demographic questions.
There is now a wealth of English-language
studies spanning a range of post-communist
countries that rely extensively on elite interviews
and/or surveys (e.g., Fish 1995; Hahn
1993; Higley and Lengyel 2000; Jacob,
Ostrowski, and Teune 1993; Lane 1995; Lukin
2000; McFaul 2001; Miller, Hesli, and
Reisinger 1997; Miller, White, and Heywood
1998; Remington 2001; Rohrschneider 1999;
Sperling 1999; Steen 1997; Stoner-Weiss 1997;
Szelényi and Szelényi 1995; Yoder 1999;
Zimmerman 2002). However, there are few
methodological tools to guide scholars of post-communist
countries who either lack the resources
to commission surveys by in-country
experts or desire to conduct in-depth personal
interviews.
Such concerns motivated us to write this
article. We offer a few suggestions on interviewing
elites in Russia;1
our advice should
also be applicable to other post-communist
countries and possibly to other states that exhibit
higher levels of political instability than
do advanced industrial countries. We base our
conclusions on a series of 133 in-depth interviews
with top-level bureaucrats and parliamentary
deputies which we conducted (in
Russian) in Moscow and two regions of the
Russian Federation (Nizhnii Novgorod and
Tatarstan) in 1996, and which will be replicated
in the Putin era.2
Selecting an Appropriate Sample Design
The selection of an appropriate sample design
is a key decision that affects the type of
conclusions that one can draw later during
data analysis. In considering various ways of
drawing a sample of Russia’s national political
elite, we initially believed that probability
sampling would be impossible. We reasoned
that although a sampling frame could be constructed
without much difficulty, the polarized
political context and general suspicion of foreigners
would frustrate our efforts to arrange
interviews with the individuals selected for the
sample. Hence, we considered nonprobability
sampling techniques that tend to rely more
heavily on personal contacts and introductions,
such as a referral (or snowball) sample.
Yet due to the limitations that nonprobability
sampling would impose on our ability to
generalize from our sample to the population
of Russian national political elites,3
we chose
to employ probability sampling.4
We used a
stratified random sample design, in which the
strata were defined by institutional affiliation.
The political elite was defined by positional
criteria, consisting of parliamentary deputies
from the lower house of the national legislature
and top-level bureaucrats working in federal
ministries. Although Russia’s “national
political elite” arguably encompasses more
sectors than just these two, we narrowed our
scope in order to be comparable to the Aberbach,
Putnam, and Rockman (1981) study of
bureaucratic and parliamentary elites in seven advanced
industrial nations. Using their criteria for defining our survey
populations, our bureaucrats directed departments, divisions, or
bureaus in federal ministries; were situated in the nation’s
capital; and occupied positions roughly one to two rungs below
the minister.5
The parliamentarians were members of the
lower house of Russia’s national legislature, the State Duma.
Analogous samples were drawn in each of the two provincial
capitals as well.6
For the national-level sample, a sampling frame was readily available only for the parliamentary deputies; it consisted of a
published list of the 450 deputies elected in the December 1995
parliamentary elections. Constructing a sampling frame for the
federal bureaucrats was considerably more problematic, although
as Aberbach and Rockman (2002) point out, this is a challenge
not restricted to the Russian experience. Over the past decade,
government directories of all sorts have proliferated in Russia,
but we did not find one that was entirely comprehensive and
up-to-date. Consequently, we compiled our own list of ministry
department heads (379 in all), using a variety of published
directories to draft a preliminary list. We then personally contacted
all of the ministries and cajoled them into verifying and updating
the information. (Vestiges of Soviet-era secrecy still live on
in Russia’s federal bureaucracy: ministerial information centers
were often quite reluctant to divulge information on their organizational
structures, personnel, or contact numbers—especially
to anyone speaking Russian with a foreign accent.) Within each
stratum, a random sample of individuals was selected to represent
the stratum.
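For readers who want to see the mechanics, a minimal sketch of this stratified design in Python follows. The frame sizes (450 deputies, 379 department heads) and total sample sizes (55 and 51; see Table 1) come from the text; the identifier names and the seeding are illustrative assumptions, not part of the original procedure.

```python
import random

random.seed(2002)  # illustrative seed; any fixed value makes the draw reproducible

# Strata defined by institutional affiliation, each with its own sampling frame.
# Frame sizes are those reported in the text: 450 Duma deputies, 379 department heads.
frames = {
    "duma_deputies": [f"deputy_{i}" for i in range(1, 451)],
    "federal_bureaucrats": [f"dept_head_{i}" for i in range(1, 380)],
}

# Sample size per stratum (the total sample sizes reported in Table 1).
targets = {"duma_deputies": 55, "federal_bureaucrats": 51}

# Simple random sampling within each stratum.
sample = {name: random.sample(frame, targets[name])
          for name, frame in frames.items()}

for name, drawn in sample.items():
    print(f"{name}: {len(drawn)} selected, e.g., {drawn[:3]}")
```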
From there it often took 15 to 20 phone calls to arrange a
single interview, whereas a 1959 survey of U.S. members of
Congress averaged 3.3 callbacks per respondent (Robinson
1960, 129). Yet sheer persistence paid off. Response rates mirrored
and in some cases surpassed rates achieved in other elite
studies in a variety of contexts.7
As Table 1 shows, we interviewed
81.8% of the national parliamentary deputies in our
sample, 74.5% of the federal bureaucrats, and between 60.9%
and 86.7% of the four regional samples. Moreover, most of the
nonresponses were not outright refusals to grant an interview.
Most failures to interview respondents stemmed from a problem
endemic to all elite interviewing—the extraordinarily busy
lives of the respondents. (Respondents were particularly busy at
this time because the 1996 presidential campaign was in full
swing.) This type of nonresponse was coded as unavailable,
meaning either that contact could not be made with the respondent
or that a convenient time for the interview could never be
arranged.
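As a concrete check, the response rates reported in Table 1 follow directly from these counts: completed interviews divided by eligible elements in each sample. A short Python sketch, using the completed, refusal, and unavailable counts from the table, reproduces the percentages.

```python
# Counts per stratum from Table 1: (completed, refusals, unavailable).
counts = {
    "Duma deputies":           (45, 2, 8),
    "Federal bureaucrats":     (38, 4, 9),
    "N. Novgorod deputies":    (11, 1, 3),
    "N. Novgorod bureaucrats": (14, 0, 9),
    "Tatarstan deputies":      (13, 1, 1),
    "Tatarstan bureaucrats":   (12, 0, 3),
}

for stratum, (done, refused, unavailable) in counts.items():
    eligible = done + refused + unavailable  # eligible elements in the sample
    print(f"{stratum}: {done / eligible:.1%} of {eligible} eligible")
# e.g., Duma deputies: 81.8% of 55 eligible; Federal bureaucrats: 74.5% of 51
```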
Although a great deal of persistence was necessary to convince
respondents to grant us interviews, accessibility was
greater than anticipated overall.8
Thus, although there are circumstances
in which nonprobability sampling is the preferred
option, probability sampling remains viable in many countries
outside the developed world. The key to its success is
perseverance in locating respondents and convincing them to
grant interviews.
Gaining Access to Respondents
Some of the factors impeding access to highly placed officials
in Russia are undoubtedly similar to those faced by elite
interviewers in any context. However, those working in post-communist
societies confront additional problems in securing
interviews. First, the simple process of locating respondents
and agreeing on a time for an interview is complicated by the
fluidity of the political environment and the newness of various
political institutions. For example, deputies in the Russian
Duma often worked without receptionists and/or answering
machines. Second, some respondents may be less familiar with
the interview process than elites in advanced industrial democracies.
This no doubt contributed to the greater apprehension
about the interviews that we observed among the civil servants
than among the parliamentary deputies, a finding also reported
by Denitch (1972, 155) in the former Yugoslavia.
Third, respondents in more politically unstable environments
may be a good deal more suspicious about the goals and purposes
of the research project. As noted earlier, our project coincided
with the highly politicized, polarized environment of
the 1996 presidential elections, leading several respondents to
suspect that the survey was merely a cover for their political
opponents to acquire potentially damaging information. Several
expressed concern that “someone wanted to learn about their
views”—whether it be the Yeltsin administration, their political
competitors, state security agencies, or foreigners. For instance,
in answering the demographic questions, one regional deputy
(D-115) remarked that it seemed as if the information was being
collected for the “organs” [of state security].9
Deputies
from the Communist Party of the Russian Federation (CPRF)
were particularly guarded since a listening device reportedly
had been found in their offices during the
presidential campaign.10
Hence, the process of gaining access to
respondents in Russia and then winning their
confidence requires some special attention.
We present a few suggestions for surmounting
potential roadblocks in post-communist
and other countries in transition.
Have an Institutional Affiliation
All the interviews in our project were conducted
under the auspices of the Institute of
Sociology of the Russian Academy of Sciences,
with the interviewing responsibilities in
Moscow divided between the American
(Rivera) and the Russian (Sarovskii). Interviewers
were affiliated with either the Institute
of Sociology or another institute of the
Academy of Sciences, and the questionnaire
itself mentioned the institute’s sponsorship
and listed a contact name and phone number.
Table 1
Reasons for Nonresponse

                          Interviews Completed   Refusals    Unavailable   Total Sample Size
                          n (%)                  n (%)       n (%)         n (%)
Duma deputies             45 (81.8)              2 (3.6)     8 (14.6)      55 (100.0)
Federal bureaucrats       38 (74.5)              4 (7.8)     9 (17.7)      51 (100.0)
N. Novgorod deputies      11 (73.3)              1 (6.7)     3 (20.0)      15 (100.0)
N. Novgorod bureaucrats   14 (60.9)              0 (0.0)     9 (39.1)      23 (100.0)
Tatarstan deputies        13 (86.7)              1 (6.7)     1 (6.7)       15 (100.1)
Tatarstan bureaucrats     12 (80.0)              0 (0.0)     3 (20.0)      15 (100.0)

Note: Response rates are calculated based on the number of eligible elements in each sample. In total, there were only four blanks (all in the federal bureaucratic sample), since four ministerial departments were no longer in existence. In Nizhnii Novgorod, there was one substitution made for a deputy who refused an interview, and in two cases, deputy department heads were interviewed because the department heads were unavailable. Percentages may not sum to 100.0% due to rounding.
Although the interviewers mentioned the goals of the interview
and the sponsoring organization when introducing themselves, respondents
often asked additional questions about who was sponsoring
the research. The fact that the study was being undertaken
in association with an authoritative, well-established institution
seemed to assure respondents that the research was genuinely intended
for academic purposes.11
In a study of Yugoslav opinion
leaders conducted in 1968, the role of having appropriate
“legitimizers” is also clear (Denitch
1972, 153).12
In China, however, interpersonal
connections and relationships were found to be
more crucial than official channels in obtaining
access (Hsu 2000, Ch. 3).
Reassure the Respondent That There Are No Right Answers
Another problem we encountered in securing
interviews was that some respondents apparently
equated the interview situation with an
examination. Some expressed concern that they
would not be able to answer our questions; this
was particularly true for bureaucrats, who said
they could answer questions about the work
of their ministries but not about more general
themes. One Duma deputy was unconcerned
about the confidentiality of the information, but
rather wanted reassurance that the interviewer
would not ridicule him (D-026). During the interview
itself, some respondents became perceptibly guarded
and tense, and when answering, they seemed to be searching
for words that would demonstrate a certain level of competence
and erudition.
Throughout the entire process, we tried to reassure respondents
that there were no correct answers to our questions. We
also stressed that they were members of a highly select group
of individuals, whose task it was to make key decisions in the
realm of public policy. As a result, any answers they could
provide in and of themselves would constitute very valuable
information for us. Such reassurances seemed to alleviate the
insecurities and anxieties felt by some respondents.
Establish an Appropriate Identity for the Interviewer
One of the issues that must be resolved by each researcher
is how to present oneself to the respondents in the study.
Some researchers believe that “in the typical interview there
exists a hierarchical relation, with the respondent being in the
subordinate position.” Accordingly, feminist researchers have
suggested that a way of responding to these inequalities and
minimizing status differences is for interviewers to “show their
human side and answer questions and express feelings”
(Fontana and Frey 2000, 658).
Yet to preserve our structured interview format, we chose to
address these potential inequalities by emphasizing the status
and rights of the respondents. For instance, when respondents
were deciding whether to grant us interviews, we would remind
them that since we were only the “requesting party,”
they always had the last word. This reassured them that they
had the upper hand in the interview and could refuse to answer
any question if they so chose.
At the same time, elite researchers emphasize the need for
balance when establishing the researcher’s identity. One potential
pitfall is the tendency for the interviewer to be overly
deferential and concerned with establishing rapport, thereby
losing the ability to control the direction and scope of the
interview (Ostrander 1993). As a counterweight, some recommend
conveying to respondents that you’ve “done your homework”
on them so that the extent of preparation for the interview
causes respondents to take you seriously (Richards 1996,
202–203; Zuckerman 1972, 164–66). However, we concur with
the views expressed by
Denitch (1972, 154), whose
interviewers in the Yugoslav
context gave no indication
that they knew anything about
the backgrounds of the respondents.
Revealing knowledge
about the interviewees,
he contends, might raise too
many doubts about anonymity.
Another helpful factor was
that the occupational status
of the interviewers—by and
large professional researchers—was
roughly
equivalent to that of many of the
respondents. This appeared to
foster mutual understanding
and convince respondents
that their answers and comments
would be understood.
In much the same way, Alan
Aldridge (1993) notes that emphasizing the congruence between
his occupational status as an academic and that of his
respondents facilitated access, rapport, and high-quality
responses. Occupational status seemed to outweigh potential
problems created by gender. Although 95.2% of the Moscow-based
respondents were male, this posed no discernible obstacle
for the female (American) interviewer.13
Request Interviews in Person When Possible
In most elite projects (and indeed, in other projects described
in this symposium), initial contact is made via an introductory
letter explaining the goals of the project. This is
usually followed by a phone call to set a date and time for
the interview. Outside of the developed world, however, this
approach is of less utility for a variety of reasons, both
technical and cultural. Technical barriers to advance scheduling
of interviews can include an undependable mail service,
unreliable reception and delivery of mail in offices, and
incomplete—or in some cases—nonexistent directories and
phone books. Cultural barriers involve—at least in the Russian
case and also in China—a penchant for day-to-day scheduling
without much advance notice. In fact, when we requested
an appointment for the following week, respondents
frequently told us that it was too far in advance to plan and
asked that we call back on the day we wished to
speak with them. Other interviewers working with Russian
elites also found that introductory letters were of limited use
and that it was necessary to approach potential interviewees
by telephone (White et al. 1996, 310).14
Thus, rather than
using an introductory letter, we simply phoned respondents
directly with our requests.
Once we were granted a pass to a ministerial building or
the parliament for one interview, it proved useful simply to
appear unannounced at the offices of other respondents on our
sample list who were located in the same institution. In most
cases, a request made in person increased the likelihood that
the target respondent would agree to an interview.
Developing a Questionnaire
The methodological costs and benefits of open-ended
queries versus closed-ended questions have been discussed in
the literature (Aberbach, Chesney, and Rockman 1975;
Schuman and Presser 1981, 79–112), and we will not repeat
them here. Like several other authors in this symposium, we
wish to highlight the importance of open-ended questions for
elite interviewing. In our experience, Russian elites strongly
resisted the imposition of categories or choices on their reasoning
processes. One Duma deputy remarked that “sociologists
aren’t inclined to understand that it’s impossible to answer
some questions in the way that they’ve instructed us to.
They are not inclined to make a notation to the effect that a
certain answer is not precisely as stated but is rather slightly
different” (D-013).
Yet we did not use open-ended questions exclusively; rather,
we used a combination of open-ended and closed-ended
questions (refined through pretesting and back-translation), presented
in alternating fashion. The first five questions were very
general open-ended queries, followed by a couple of closed-ended
questions, and so on in a similar fashion. This sequencing had
several advantages. First, once the introductory open-ended
questions had been covered, it was easy to elicit answers to
the more formulaic questions. We had demonstrated respect
for the complexity of their views through the open-ended
questions and thus had “earned” the right to ask questions
posed exclusively from our frame of reference. Also, the
closed-ended questions probably allowed respondents to recover
a bit from the more demanding open-ended question
format. Second, since political elites can expound on their responses
at great length, especially in the early stages of an interview,
we tried to channel such tendencies toward subjects
on which we desired elaboration. Third, although the interview
was fully structured, the frequency and format of the openended
questions (with scripted probes written into the interview
protocol) gave it a more semi-structured feel.15
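As a rough schematic of such a protocol, the alternating sequence and scripted probes can be represented as a simple data structure. The sketch below is a Python illustration of the design principle only; the question wordings are invented placeholders, not items from the actual questionnaire.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Question:
    text: str
    kind: str                                         # "open" or "closed"
    probes: List[str] = field(default_factory=list)   # scripted follow-ups
    choices: List[str] = field(default_factory=list)  # response options for closed items

# General open-ended items come first, then short closed-ended batteries,
# and so on in alternation. Wordings below are placeholders.
protocol = [
    Question("In your view, what are the most important problems "
             "facing the country today?", "open",
             probes=["Why do you consider that the most important?"]),
    Question("Which institutions are best placed to address these problems?", "open"),
    Question("How would you rate the pace of economic reform?", "closed",
             choices=["too fast", "about right", "too slow"]),
]

for q in protocol:
    print(f"[{q.kind}] {q.text}")
```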
The oral interview also included a series of background
questions, which we anticipated would be perceived by some
as threatening since they included not only standard demographic
questions such as age, education, and place of birth,
but also questions dealing with past and present political activities,
travel abroad, business dealings, and the like. By contrast,
elite interviewers working in Austria and France several
decades ago encountered an entirely different situation. According
to them, asking personal and biographical questions at the
beginning of the interview “served to relax respondents and involve
them in the interview” (Hunt, Crane, and Wahlke 1964,
68). In the Russian context, however, these types of questions
can raise suspicions, and thus we heeded the following
advice—to put threatening behavioral questions “near the end
of the interview so that the interviewer has a chance to establish
good rapport with the respondent” (Sudman and Bradburn
1974, 143). In an attempt to minimize response effects, we
asked the background questions after all of the substantive
questions and also phrased them in the most general and nonthreatening
way. For example, when questioning elites about
their residence abroad, we formulated the question as follows:
“Did you ever happen to live abroad (not including the
Commonwealth of Independent States and the Baltics) for a
period of three months or more?” By phrasing the question in
this way, we tried to: (1) draw attention away from their
reason for living abroad, and (2) downplay their having been
in a position to live abroad during the Soviet era, as this was a
right granted only with Communist Party approval. This was
important because in the post-communist era, some respondents
may be reluctant to disclose the extent of their previous
involvement with the Party.
Another means of putting respondents at ease during the interviews
was to assure them that their identities would remain
confidential, be presented only in aggregate or anonymous
form, and be used only in academic research. Several other
phrases also proved helpful in coaxing answers out of reluctant
respondents: asking them to say “something—if only a
few words” in response to a question; telling them that there
are as many different opinions as there are people (Skol’ko
lyudei, stol’ko mnenii); and reminding them—if they objected
to a question—that they had the last word in deciding whether
to answer it.
After completing the oral part of the interview (which was
conducted in Russian and, in the vast majority of cases, tape
recorded), we asked all respondents to fill out a short, self-administered
written questionnaire consisting primarily of
closed-ended value questions. Again, building on the rapport
that had developed over the course of the interview, most
respondents completed this questionnaire on the spot, in the
presence of the interviewer. Occasionally, time constraints
required that questionnaires be left with respondents; in those
cases, we usually expended substantial efforts on retrieving
them. In the end, only 7.5% of all 133 respondents (from
Moscow and the two regions) failed to complete the self-administered
written questionnaires.
One additional issue that affected our use of both open-ended
and closed-ended questions was the challenge of translating certain
concepts into Russian. For example, the phrase “authoritarian
rule” can be translated literally as avtoritarnaya vlast’. Alternatively,
a more commonly used phrase, “strong hand”
(zhestkaya ruka) can be used, although the latter phrase has a
weaker connotation and its meaning is subject to a wider variety
of interpretations. In such cases, the American researcher
deferred to the judgment of native Russian speakers, aiming
above all to capture the spirit of the phrase or concept. Several
pretests with “debriefings” by native Russian speakers as to
how they understood problematic concepts were also helpful, as
was back-translation of the questionnaire into English by a native
English speaker fluent in Russian. In cases where conceptual
problems arose with the meanings of standard closed-ended
questions that had been used previously by other researchers,
we retained the original Russian-language wording. We regarded
the ability to conduct reliable comparisons with prior findings as
more important than linguistic clarity.
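The translation workflow can be summarized schematically as follows. In this sketch, translate_ru and back_translate_en are hypothetical stand-ins for the native-speaker translators used in the study, not real library functions; the structure simply shows how a back-translation check flags items needing review.

```python
from typing import Callable

def check_item(item_en: str,
               translate_ru: Callable[[str], str],
               back_translate_en: Callable[[str], str]) -> dict:
    """Translate an English item into Russian, back-translate it into
    English, and flag items whose round trip drifts from the original."""
    item_ru = translate_ru(item_en)
    round_trip = back_translate_en(item_ru)
    return {
        "english": item_en,
        "russian": item_ru,
        "round_trip": round_trip,
        "needs_review": round_trip.strip().lower() != item_en.strip().lower(),
    }

# Usage with placeholder pass-through translators (for illustration only):
# check_item("Did you ever happen to live abroad?", lambda s: s, lambda s: s)
```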
Notes
*The authors gratefully acknowledge the financial support of a Rackham
Graduate School-Russian Academy of Sciences’ Institute of Sociology
Collaboration Grant provided by the University of Michigan.
Rivera thanks Cornell University’s Institute for European Studies for a
Mellon-Sawyer Postdoctoral Fellowship in Democratization that facilitated
the writing of this article. For helpful comments and suggestions,
we thank Carolyn Hsu, Steve Heeringa, David Rivera, and Aseema
Sinha.
1. For insights on conducting surveys of the mass public in the post-communist
region, see Gibson 1994; Swafford 1992.
2. For details on the interviews, sample, and methodology, see Rivera
1998, 2000.
3. As Judd, Smith, and Kidder (1991, 133) succinctly state, “Probability
sampling is the only approach that makes possible representative sampling
plans. It makes it possible for the investigators to estimate the extent to
which the findings based on their sample are likely to differ from what
they would have found by studying the population.”
4. We recognize that nonprobability sampling may be the most appropriate
vehicle for certain projects where accessibility is much more problematic
(e.g., interviewing economic or business elites, as in McDowell
1998) or where the sample size is very small. Nonprobability sampling
also has the advantage of convenience and cost effectiveness, which may
outweigh the researcher’s desire to be able to “specify the chances that
the sample findings do not differ by more than a certain amount from the
true population values”—a feature of probability sampling (Judd, Smith,
and Kidder 1991, 134–136). See also Kalton 1983, 90–93.
5. Following Aberbach, Putnam, and Rockman (1981, 27), we excluded
(1) the Ministries of Defense and Internal Affairs (though we included the
Ministry of Foreign Affairs) and (2) departments that “performed obvious
staff functions.” In the Russian case, first deputy ministers and deputy
ministers were considered to constitute one level.
6. We are grateful to Yurii Gapeenkov and his team, Liliya Sagitova,
and Guzel Stolarova for their help in the regions.
7. Robert D. Putnam (1973, 15) reports response rates of 85% for
British MPs and 78% for Italian parliamentarians. See also Aberbach,
Putnam, and Rockman 1981, 26; Hoffmann-Lange 1987, 36; McDonough
1981, 253; Verba et al. 1987, 280–81.
8. In both the Yugoslav and American contexts (Denitch 1972, 146;
Ostrander 1993, 9; Zuckerman 1972, 161), researchers imply that the difficulties
of gaining access to certain elites have been overstated.
9. These numbers identify the interviewees in the study. “D” denotes
deputies and “G” stands for government bureaucrats.
10. Interviews conducted in non-election years should encounter less politically
charged suspicion. Moreover, if Michael McFaul (1997) is correct
that the 1996 presidential race was the last “revolutionary,” highly ideological,
and polarized election in which the principal divide was between
pro-reform and anti-reform groups, even interviews conducted during election
campaigns in Russia should be less problematic in the future. On the
other hand, a certain measure of secrecy on the part of the CPRF has extended
beyond the 1996 elections. Deputies will not say in advance where
plenary sessions of the party’s Central Committee will be held. As one
deputy, Yurii Chunkov, explains: “We want to keep the location secret as
long as possible so they won’t tape us. They listen to everything. One
hour after a conversation, the transcript is on the desk of whoever needs
to see it” (Bohlen 1998, 1).
11. In the context of this single study, it is difficult to measure precisely
what difference such an affiliation made in terms of access and
information supplied to the interviewers. For reflections on the impact of
sponsorship by an elite interviewer working in London, see McDowell
(1998, 2136).
12. This was also an important factor in Zuckerman’s access to Nobel
laureates in science (Zuckerman 1972, 162–63). For more on sponsorship,
see Dexter 1970, 50–55, and Javeline 1996.
13. In a series of interviews with high-status employees of merchant
banks in London, McDowell (1998, 2140–41) expresses a similar viewpoint.
To her surprise, most of her male interviewees “seemed to feel surprisingly
free to be open with [her],” even when discussing gender relations and respondents’
attitudes toward their women colleagues. Similarly, in a study of
local elites in Scotland and France, Sabot (1999, 334) concludes that gender
“becomes secondary to other positional factors,” such as nationality. However,
in other contexts (e.g., rural areas in India), interviews conducted by a
person of another gender can be problematic in many respects.
14. However, one study of elites in Russia sent prospective respondents
an interview schedule and accompanying letter that described the goals
and character of the research, achieving a response rate of 70%
(Mikul’skii et al. 1995, 35–36).
15. We should note one drawback to this approach. Some elites, especially
civil servants, found the lack of specificity inherent in the opening
battery of questions disconcerting. They felt that the questions were too
general and wide ranging.
References
Aberbach, Joel D., James D. Chesney, and Bert A. Rockman. 1975.
“Exploring Elite Political Attitudes: Some Methodological Lessons.”
Political Methodology 2:1–27.
Aberbach, Joel D., Robert D. Putnam, and Bert A. Rockman. 1981.
Bureaucrats and Politicians in Western Democracies. Cambridge:
Harvard University Press.
Aberbach, Joel D., and Bert A. Rockman. 2002. “Conducting and Coding
Elite Interviews.” PS: Political Science and Politics 35:673–76.
Aldridge, Alan. 1993. “Negotiating Status: Social Scientists and Anglican
Clergy.” Journal of Contemporary Ethnography 22:97–112.
Bohlen, Celestine. 1998. “Communists Risking Perks and Power in Yeltsin
Battle.” New York Times, 24 April, sec. A.
Bunce, Valerie. 1998. “Regional Differences in Democratization: The East
Versus the South.” Post-Soviet Affairs 14:187–211.
Denitch, Bogdan. 1972. “Elite Interviewing and Social Structure: An Example
from Yugoslavia.” Public Opinion Quarterly 36:143–58.
Dexter, Lewis Anthony. 1970. Elite and Specialized Interviewing.
Evanston, IL: Northwestern University Press.
Fish, M. Steven. 1995. Democracy from Scratch: Opposition and
Regime in the New Russian Revolution. Princeton: Princeton
University Press.
Fontana, Andrea, and James H. Frey. 2000. “The Interview: From Structured
Questions to Negotiated Text.” In Handbook of Qualitative Research,
2nd ed., eds. Norman K. Denzin and Yvonna S. Lincoln. Thousand
Oaks, CA: Sage Publications, 645–72.
Gibson, James L. 1994. “Survey Research in the Past and Future USSR:
Reflections on the Methodology of Mass Opinion Surveys.” In Research
in Micropolitics: New Directions in Political Psychology, Vol. 4, eds.
Michael X. Delli Carpini, Leonie Huddy, and Robert Y. Shapiro.
Greenwich, CT: JAI Press.
Hahn, Jeffrey. 1993. “Attitudes Toward Reform Among Provincial Russian
Politicians.” Post-Soviet Affairs 9:66–85.
Higley, John, and György Lengyel, eds. 2000. Elites after State Socialism:
Theories and Analysis. New York: Rowman and Littlefield.
Hoffmann-Lange, Ursula. 1987. “Surveying National Elites in the Federal
Republic of Germany.” In Research Methods for Elite Studies, eds.
George Moyser and Margaret Wagstaffe. Boston: Allen & Unwin.
Hsu, Carolyn. 2000. “Creating Market Socialism: Narratives and Emerging
Economic Institutions in the People’s Republic of China.” Ph.D. diss.
University of California, San Diego.
Hunt, William H., Wilder W. Crane, and John C. Wahlke. 1964. “Interviewing
Political Elites in Cross-cultural Comparative Research.”
American Journal of Sociology 70:59–68.
Jacob, Betty M., Krzysztof Ostrowski, and Henry Teune, eds. 1993.
Democracy and Local Governance: Ten Empirical Studies. Honolulu,
HI: Matsunaga Institute for Peace.
Javeline, Debra. 1996. “Effects of the American Sponsor on Survey Responses
in Russia, Ukraine, and Central Asia.” Presented at the Annual Meeting of
the American Association for the Advancement of Slavic Studies.
Judd, Charles M., Eliot R. Smith, and Louise H. Kidder. 1991. Research
Methods in Social Relations. 6th ed. Philadelphia: Harcourt Brace
Jovanovich College Publishers.
Kalton, Graham. 1983. Introduction to Survey Sampling. Newbury Park,
CA: Sage Publications.
Kullberg, Judith. 1994. “The Ideological Roots of Elite Political Conflict
in Post-Soviet Russia.” Europe-Asia Studies 46:929–53.
Lane, David. 1995. “Political Elites Under Gorbachev and Yeltsin in the
Early Period of Transition: A Reputational and Analytical Study.” In
Patterns in Post-Soviet Leadership, eds. Timothy J. Colton and Robert
C. Tucker. Boulder, CO: Westview Press, 29–47.
Lukin, Alexander. 2000. The Political Culture of the Russian ‘Democrats.’
New York: Oxford University Press.
McDonough, Peter. 1981. Power and Ideology in Brazil. Princeton:
Princeton University Press.
McDowell, L. 1998. “Elites in the City of London: Some Methodological
Considerations.” Environment and Planning A 30:2133–46.
McFaul, Michael. 1997. Russia’s 1996 Presidential Election: The End of
Polarized Politics. Stanford: Hoover Institution Press.
McFaul, Michael. 2001. Russia’s Unfinished Revolution: Political Change
from Gorbachev to Putin. Ithaca: Cornell University Press.
Mikul’skii, K. I., et al. 1995. Rossiiskaya elita: opyt sotsiologicheskogo
analiza—Chast’ 1. Kontseptsiya i metody issledovaniya [The Russian
elite: an attempt at a sociological analysis—part one. Conceptualization
and research methods]. Moscow: Nauka.
Miller, Arthur H., Vicki L. Hesli, and William M. Reisinger. 1997.
“Conceptions of Democracy Among Mass and Elite in Post-Soviet Societies.”
British Journal of Political Science 27:157–90.
Miller, William L., Stephen White, and Paul Heywood. 1998. Values and
Political Change in Postcommunist Europe. New York: St. Martin’s
Press.
Ostrander, Susan A. 1993. “‘Surely You’re Not in This Just to Be
Helpful’: Access, Rapport, and Interviews in Three Studies of Elites.”
Journal of Contemporary Ethnography 22:7–27.
Peabody, Robert L., et al. 1990. “Interviewing Political Elites.” PS: Political
Science & Politics 23:451–55.
Putnam, Robert D. 1973. The Beliefs of Politicians: Ideology, Conflict, and
Democracy in Britain and Italy. New Haven: Yale University Press.
Remington, Thomas F. 2001. The Russian Parliament: Institutional Evolution
in a Transitional Regime, 1989–1999. New Haven: Yale University
Press.
Richards, David. 1996. “Elite Interviewing: Approaches and Pitfalls.”
Politics 16:199–204.
Rivera, Sharon Werning. 1998. “Communists as Democrats? Elite Political
Culture in Post-Communist Russia.” Ph.D. diss. University of
Michigan.
Rivera, Sharon Werning. 2000. “Elites in Post-communist Russia: A
Changing of the Guard?” Europe-Asia Studies 52:413–32.
Robinson, James A. 1960. “Survey Interviewing among Members of
Congress.” Public Opinion Quarterly 24:127–38.
Rohrschneider, Robert. 1999. Learning Democracy: Democratic and
Economic Values in Unified Germany. New York: Oxford University
Press.
Sabot, Cladie Emmanuèle. 1999. “Dr. Jekyl, Mr H(i)de: The Contrasting
Face of Elites at Interview.” Geoforum 30:329–35.
Schuman, Howard, and Stanley Presser. 1981. Questions and Answers in
Attitude Surveys: Experiments on Question Form, Wording, and
Context. New York: Academic Press.
Sperling, Valerie. 1999. Organizing Women in Contemporary Russia:
Engendering Transition. New York: Cambridge University Press.
Steen, Anton. 1997. Between Past and Future: Elites, Democracy and the
State in Post-Communist Countries—A Comparison of Estonia, Latvia,
and Lithuania. Brookfield: Ashgate.
Stoner-Weiss, Kathryn. 1997. Local Heroes: The Political Economy of
Russian Regional Governance. Princeton: Princeton University Press.
Sudman, Seymour, and Norman M. Bradburn. 1974. Response Effects in
Surveys: A Review and Synthesis. Chicago: Aldine Publishing.
Swafford, Michael. 1992. “Sociological Aspects of Survey Research in the
Commonwealth of Independent States.” International Journal of Public
Opinion Research 4:346–57.
Szelényi, Iván, and Szonja Szelényi. 1995. “Circulation or Reproduction of
Elites during the Postcommunist Transformation of Eastern Europe:
Introduction.” Theory and Society 24:615–38.
Terry, Sarah Meiklejohn. 1993. “Thinking about Post-Communist Transitions:
How Different Are They?” Slavic Review 52:333–37.
Verba, Sidney, et al. 1987. Elites and the Idea of Equality: A Comparison of
Japan, Sweden, and the United States. Cambridge: Harvard University Press.
White, Stephen, Olga Kryshtanovskaia, Igor Kukolev, Evan Mawdsley, and
Pavel Saldin. 1996. “Interviewing the Soviet Elite.” The Russian
Review 55:309–16.
Yoder, Jennifer A. 1999. From East Germans to Germans? The New Postcommunist
Elites. Durham, NC: Duke University Press.
Zimmerman, William. 2002. The Russian People and Foreign Policy: Russian
Elite and Mass Perspectives, 1993–2000. Princeton: Princeton University
Press.
Zuckerman, Harriet. 1972. “Interviewing an Ultra-Elite.” Public Opinion
Quarterly 36:159–75.