11 Asking questions

Chapter outline
Introduction
Open or closed questions?
  Open questions
  Closed questions
Types of questions
Rules for designing questions
  General rules of thumb
  Specific rules when designing questions
Vignette questions
Piloting and pre-testing questions
Using existing questions
Checklist
Key points
Questions for review

Chapter guide
This chapter is concerned with the considerations that are involved in asking questions that are used in structured interviews and questionnaires of the kinds discussed in the two previous chapters. As such, it continues the focus upon social survey research that began with Chapter 8. The chapter explores:
• the issues involved in deciding whether or when to use open or closed questions;
• the different kinds of question that can be asked in structured interviews and questionnaires;
• rules to bear in mind when designing questions;
• vignette questions in which respondents are presented with a scenario and are asked to reflect on the scenario;
• the importance of piloting questions;
• the possibility of using questions that have been used in previous survey research.

Introduction
To many people, how to ask questions represents the crux of considerations surrounding the use of social survey instruments such as the structured interview or the self-completion questionnaire. As the previous two chapters have sought to suggest, there is much more to the design and administration of such research instruments than how best to phrase questions. However, there is no doubt that the issue of how questions should be asked is a crucial concern for the survey researcher, and it is not surprising that this aspect of designing survey instruments has been a major focus of attention over the years and preoccupies many practising researchers.

Open or closed questions?
One of the most significant considerations for many researchers is whether to ask a question in an open or closed format. This distinction was first introduced in Chapter 9. The issue of whether to ask a question in an open or closed format is relevant to the design of both structured interview and self-completion questionnaire research. With an open question respondents are asked a question and can reply however they wish. With a closed question they are presented with a set of fixed alternatives from which they have to choose an appropriate answer. All the questions in Tips and skills 'Instructions for interviewers in the use of a filter question' (in Chapter 9) are of the closed kind. So too are the Likert-scale items in Research in focus 7.2 and Tips and skills 'Formatting a Likert scale' (in Chapter 10); these form a particular kind of closed question. What, then, are some of the advantages and limitations of these two types of question format?

Open questions
Open questions present both advantages and disadvantages to the survey researcher, though, as the following discussion suggests, the problems associated with the processing of answers to open questions tend to mean that closed questions are more likely to be used.

Advantages
Although survey researchers typically prefer to use closed questions, open questions do have certain advantages over closed ones, as outlined in the list below.
• Respondents can answer in their own terms. They are not forced to answer in the same terms as those foisted on them by the response choices.
• They allow unusual responses to be derived.
Replies that the survey researcher may not have contemplated (and that would therefore not form the basis for fixed-choice alternatives) are possible.
• The questions do not suggest certain kinds of answer to respondents. Therefore, respondents' levels of knowledge and understanding of issues can be tapped. The salience of issues for respondents can also be explored.
• They are useful for exploring new areas or ones in which the researcher has limited knowledge.
• They are useful for generating fixed-choice format answers. This is a point that will be returned to below.

Disadvantages
However, open questions present problems for the survey researcher, as the following list reveals.
• They are time-consuming for interviewers to administer. Interviewees are likely to talk for longer than is usually the case with a comparable closed question.
• Answers have to be 'coded', which is very time-consuming. Key concept 11.1 outlines the nature of coding and provides some considerations involved in its use. For each open question it entails reading through answers, deriving themes that can be employed to form the basis for codes, and then going through the answers again so that the answers can be coded for entry into a computer spreadsheet. The process is essentially identical to that involved in content analysis and is sometimes called post-coding to distinguish it from pre-coding, whereby the researcher designs a coding frame in advance of administering a survey instrument and often includes the pre-codes in the questionnaire (as in Tips and skills 'Processing a closed question'). However, in addition to being time-consuming, post-coding can be an unreliable process, because it can introduce the possibility of variability in the coding of answers and therefore of measurement error (and hence lack of validity). This is a form of data-processing error (see Figure 8.9). Research in focus 11.1 and 11.2 deal with aspects of the coding of open questions.
• They require greater effort from respondents. Respondents are likely to talk for longer than would be the case for a comparable closed question, or, in the case of a self-completion questionnaire, would need to write for much longer. Therefore, it is often suggested that open questions have limited utility in the context of self-completion questionnaires. Because of the greater effort involved, many prospective respondents are likely to be put off by the prospect of having to write extensively, which may exacerbate the problem of low response rates with postal questionnaires in particular (see Chapter 10).
• There is the possibility in research based on structured interviews of variability between interviewers in the recording of answers. This possibility is likely to arise as a result of the difficulty of writing down verbatim what respondents say to interviewers. The obvious solution is to employ a tape recorder. However, it is not always practicable to employ one—for example, in a noisy environment. Also, the transcription of answers to tape-recorded open questions is immensely time-consuming and adds additional costs to a survey. The problem of transcription is one continually faced by qualitative researchers using semi-structured and unstructured interviews (see Chapter 20).

Key concept 11.1
What is coding?
Coding is a key stage in quantitative research. Many forms of data that are of interest to social scientists are essentially in an unstructured form.
Examples are: answers to open questions in interviews and questionnaires; newspaper articles; television programmes; and behaviour in a school classroom. In order to quantify and analyse such materials, the social researcher has to code them. Coding entails two main stages. First, the unstructured material must be categorized. For example, with answers to an open question, this means that the researcher must examine people's answers and group them into different categories. Research in focus 11.1 provides some examples of this process. Second, the researcher must assign numbers to the categories that have been created. This step is a largely arbitrary process, in the sense that the numbers themselves are simply tags that will allow the material to be processed quantitatively. Thus, when Schuman and Presser (1981; see Research in focus 11.3) asked a question about the features of a job that people most prefer, answers were grouped into eleven categories: pay; feeling of accomplishment; control of work; pleasant work; security; opportunity for promotion; short hours; working conditions; benefits; satisfaction; other responses. Each of these eleven categories would then need to be assigned a number, such as: 1 for pay; 2 for feeling of accomplishment; 3 for control of work; 4 for pleasant work; etc.
There is an important distinction between pre-coding and post-coding. Many closed questions in survey research instruments are pre-coded (see Tips and skills 'Processing a closed question' for an example). This means that respondents are being asked to assign themselves to a category that has already had a number assigned to it. Post-coding occurs when answers to an open question are being coded or when themes in newspaper articles concerned with a certain topic are being counted, as in content analysis (see Chapter 13).
When coding, three basic principles need to be observed (Bryman and Cramer 2011).
1. The categories that are generated must not overlap. If they do, the numbers that are assigned to them cannot be applied to distinct categories.
2. The list of categories must be complete and therefore cover all possibilities. If it is not, some material will not be capable of being coded. This is why coding a certain item of information, such as answers to an open question, sometimes includes a category of 'other'.
3. There should be clear rules about how codes should be applied, so that the person conducting the coding has instructions about the kinds of answers that should be subsumed under a particular code. Such rules are meant to ensure that those who are conducting the coding are consistent over time in how they assign the material to categories and, if more than one person is coding, are consistent with each other.
The term 'coding frame' is often employed to describe the lists of codes that should be applied to unstructured data and the rules for their application. In content analysis and structured observation, the term coding manual is often preferred to describe the lists of codes for each item of information and the rules to be employed. Quantitative data are also sometimes recoded. For example, if we have data on the exact age of each person in a sample, we may want to group people into age bands. The rationale for doing this is described in Chapter 15 and the procedure of recoding with a computer program is described in Chapter 16.
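The two stages, and the first two principles, can be made concrete with a short sketch. The fragment below is purely illustrative: it uses Python rather than the SPSS procedures described in Chapters 15 and 16, and the function names and age-band boundaries are assumptions made for the example. The coding frame reuses Schuman and Presser's eleven job-feature categories listed above.

```python
# A minimal, illustrative sketch only; the book's own procedures use SPSS
# (Chapters 15 and 16). Function names and age-band boundaries are assumptions.

# Stage 1 produced these categories from reading the answers;
# stage 2 tags each with an (arbitrary) number.
coding_frame = {
    "pay": 1, "feeling of accomplishment": 2, "control of work": 3,
    "pleasant work": 4, "security": 5, "opportunity for promotion": 6,
    "short hours": 7, "working conditions": 8, "benefits": 9,
    "satisfaction": 10, "other responses": 11,
}

def code_answer(category: str) -> int:
    # Principle 2 (completeness): anything not in the frame falls into the
    # residual 'other responses' category rather than going uncoded.
    return coding_frame.get(category, coding_frame["other responses"])

def recode_age(age: int) -> str:
    # Recoding exact ages into bands; the bands do not overlap and cover
    # all possibilities (principles 1 and 2).
    if age < 18:
        return "under 18"
    if age <= 29:
        return "18-29"
    if age <= 39:
        return "30-39"
    if age <= 49:
        return "40-49"
    if age <= 59:
        return "50-59"
    return "60 and over"

print(code_answer("pay"))  # 1
print(recode_age(40))      # '40-49': a 40-year-old fits exactly one band
```

Note that the numeric tags carry no quantitative meaning of their own; as stated above, they are simply tags that allow the material to be processed.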
Coding also occurs in qualitative research, but the role it plays and its significance are somewhat different there from quantitative research. Coding in qualitative research is described in Chapter 24 and the procedure for coding with a qualitative data analysis computer program is described in Chapter 25.

Research in focus 11.1
Coding an open question
Coding an open question usually entails reading and rereading transcripts of respondents' replies and formulating distinct themes in their replies. A coding frame then needs to be designed that identifies the types of answer associated with each question and their respective codes (that is, numbers). A coding schedule may also be necessary to keep a record of rules to be followed in the identification of certain kinds of answer in terms of a theme. The numbers allocated to each answer can then be used in the computer processing of the data.
Charles and Kerr (1988) conducted interviews concerning the consumption of food in the home with 200 British women. Their interviews were of the semi-structured kind (see Key concept 9.2), so that the questions were open ended. Charles and Kerr were working within a qualitative research strategy, but, for several of the questions that they asked, they found it helpful to quantify respondents' answers. Thus, while the bulk of the presentation of their findings is in the form of passages from interview transcripts, which is the conventional way of presenting such findings in qualitative research (see Chapter 20), some of their findings were far more redolent of the kind typically encountered in quantitative research. As the authors say: 'The material that we have is . . . qualitative, as we included many open-ended questions which gave the women the chance to talk freely, and quantitative, as the sample was large enough to produce useful statistical data' (Charles and Kerr 1988: 7). One of their analyses is a contingency table (see Chapter 15), which shows the relationship between social class of male partner and responsibility for meal preparation. The latter variable was coded so that five categories of responsibility were generated: self prepares all meals; self mainly, partner sometimes; either or both (50/50); self mainly with help from partner and/or children sometimes; and other. Men in classes I and II were more likely to participate than those in the other social classes. Another table shows the relationship between class and the occasions that alcohol is drunk at home. The latter variable comprised the following categories: Christmas and special occasions; Christmas, special occasions and when having company; at other times but not frequently; once a week or more; never drink at home; and other. These categories were generated after the data had been collected and essentially entailed a process of discerning likely categories and then systematically coding each respondent's answer to determine how it should be coded in terms of such considerations as responsibility for preparing meals or the occasions when alcohol is consumed in the home.

Research in focus 11.2
Coding a very open question
Foddy (1993) reports the results of an exercise in which he asked a small sample of his students 'Your father's occupation is (was) . . .' and requested three details: nature of business; size of business; and whether owner or employee. In answer to the size of business issue, the replies were particularly variable in kind, including: 'big', 'small', 'very large', '3,000 acres', 'family', 'multinational', '200 people', and 'Philips'. The problem here is obvious: you simply cannot compare and therefore aggregate people's replies. In a sense, the problem is only partly to do with the difficulty of coding an open question. It is also due to a lack of specificity in the question. If, instead, Foddy had asked 'How many employees are (were) there in your father's organization?', a more comparable set of answers should have been forthcoming. Whether his students would have known this information is, of course, yet another issue. However, the exercise does illustrate the potential problems of asking an open question, particularly one like this that lacks a clear reference point for gauging size.

Closed questions
The advantages and disadvantages of closed questions are in many respects implied in some of the considerations relating to open questions.

Advantages
Closed questions offer the following advantages to researchers.
• It is easy to process answers. For example, the respondent in a self-completion questionnaire or the interviewer using a structured interview schedule will place a tick or circle an answer for the appropriate response. The appropriate code can then be almost mechanically derived from the selected answer, since the pre-codes are placed to the side of the fixed-choice answers. See Tips and skills 'Processing a closed question' for an example based on Tips and skills 'Closed question with a vertical format' (see Chapter 10).
• Closed questions enhance the comparability of answers. With post-coding there is always a problem of knowing how far respondents' answers that receive a certain code are genuinely comparable. As previously noted, the assignment of codes to people's answers may be unreliable (see the sixth point in Thinking deeply 9.1). Checks are necessary to ensure that there is a good deal of agreement between coders and that coders do not change their coding conventions over time. Closed questions essentially circumvent this problem.
• Closed questions may clarify the meaning of a question for respondents. Sometimes, respondents may not be clear about what a question is getting at, and the availability of answers may help to clarify the situation for them.
• Closed questions are easy for interviewers and/or respondents to complete. Precisely because interviewers and respondents are not expected to write extensively and instead have to place ticks or circle answers, closed questions are easier and quicker to complete.
• In interviews, closed questions reduce the possibility of variability in the recording of answers in structured interviewing. As noted in Chapter 9, if interviewers do not write down exactly what respondents say to them when answering questions, a source of bias and hence of invalidity is in prospect. Closed questions reduce this possibility, though there is still the potential problem that interviewers may have to interpret what is said to them in order to assign answers to a category.

Disadvantages
However, closed questions exhibit certain disadvantages.
• There is a loss of spontaneity in respondents' answers. There is always the possibility that they might come up with interesting replies that are not covered by the fixed answers that are provided.
One solution to this possible problem is to ensure that an open question is used to generate the categories (see Research in focus 11.3). Also, there may be a good case for including a possible response category of 'Other' and to allow respondents to indicate what they mean by using this category.

Tips and skills
Processing a closed question
What do you think of the Prime Minister's performance in his job since he took office?
(Please tick the appropriate response)
Very good ____ 5
Good      ____ 4
Fair      ____ 3
Poor      ____ 2
Very poor ____ 1

Research in focus 11.3
A comparison of results for a closed and an open question
Schuman and Presser (1981) conducted an experiment to determine how far responses to closed questions can be improved by asking the questions first as open questions and then developing categories of reply from respondents' answers. They asked a question about what people look for in work in both open and closed format. Different samples were used. They found considerable disparities between the two sets of answers (60 per cent of the open-format categories were not capable of being subsumed by the closed-format answers). They then revised the closed categories to reflect the answers they had received from people's open-ended answers. They readministered the open question and the revised closed question to two large samples of Americans. The question and the answers they received are as follows.

This next question is on the subject of work. People look for different things in a job. Which one of the following five things do you most prefer in a job? [closed question]. What would you most prefer in a job? [open question].

Closed format                                            %      Open format                       %
Work that pays well                                     13.2    Pay                             16.7
Work that gives a feeling of accomplishment             31.0    Feeling of accomplishment       14.5
Work where there is not too much supervision
  and you make most decisions yourself                  11.7    Control of work                  4.6
Work that is pleasant and people are nice
  to work with                                          19.8    Pleasant work                   14.5
Work that is steady + little chance of being laid off   20.3    Security                         7.6
  (96% of sample)                                               (57.9% of sample)
                                                                Opportunity for promotion        1.0
                                                                Short hours/lots of free time    1.6
                                                                Working conditions               3.1
                                                                Benefits                         2.3
                                                                Satisfaction/liking a job       15.6
Other/DK/NA                                              4.0    Other responses                 18.3

With the revised form for the closed question, Schuman and Presser were able to find a much higher proportion of the sample whose answers to the open question corresponded to the closed one. They argue that the new closed question was superior to its predecessor and is also superior to the open question. However, it is still disconcerting that only 58 per cent of respondents answering the open question could be subsumed under the same categories as those answering the closed one. Also, the distributions are somewhat different: for example, twice as many respondents answer in terms of a feeling of accomplishment with the closed format than with the open one. Nonetheless, the experiment demonstrates the desirability of generating forced-choice answers from open questions.

Student experience
Closed questions and quantitative data analysis
Joe Thomson and his fellow students who formed a team conducting research on students at their university favoured closed questions because of the ease with which they could be analysed using SPSS, the software that is covered in Chapter 16.
When they reviewed the interview schedule they had devised after it had been piloted, they focused on such issues as whether there were too many open or closed questions (and therefore not enough qualitative or quantitative data) and whether the questions should be on a dichotomous or ranking scale. As the results of the questionnaire were to be analysed using a data analysis computer program (SPSS), the group tended to favour closed questions to give definite answers that could be correlated to show trends. However, Sophie Mason, who was also a member of a team doing survey research at Joe's university, felt that the combination of closed and open questions did offer certain advantages: 'By using both open and closed questions it was possible to gain the necessary statistics as well as opinions and experiences unique to each student.'
To read more about Joe's and Sophie's research experiences, go to the Online Resource Centre that accompanies this book at: www.oxfordtextbooks.co.uk/orc/brymansrm4e/

• It can be difficult to make forced-choice answers mutually exclusive. The fixed answers that respondents are provided should not overlap. If they do overlap, the respondents will not know which one to choose and so will arbitrarily select one or the other or alternatively may tick both answers. If a respondent were to tick two or more answers when one is required, it would mean that you would have to treat the respondent's answer as missing data, since you would not know which of the ticked answers represented the true one. One of the most frequently encountered forms of this problem can be seen in the following age bands:
18–30
30–40
40–50
50–60
60 and over
In which band would a 40-year-old position him- or herself?
• It is difficult to make forced-choice answers exhaustive. All possible answers should really be catered for, although in practice this may be difficult to achieve, since this rule may result in excessively long lists of possible answers. Again, a category of 'Other' may be desirable to provide a wide range of answers.
• There may be variation among respondents in the interpretation of forced-choice answers. There is always a problem when asking a question that certain terms may be interpreted differently by respondents. If this is the case, then validity will be jeopardized. The presence of forced-choice answers can exacerbate this possible problem, because there may be variation in the understanding of key terms in the answers.
• Closed questions may be irritating to respondents when they are not able to find a category that they feel applies to them.
• In interviews, a large number of closed questions may make it difficult to establish rapport, because the respondent and interviewer are less likely to engage with each other in a conversation. The interview is more likely to have an impersonal feel to it. However, given the fact that the extent to which rapport is a desirable attribute of structured interviewing is somewhat difficult to determine (see Chapter 9), this is not necessarily too much of a problem.

Student experience
The dilemmas of open and closed questions
Joe Thomson encountered the classic dilemmas with the use of open and closed questions in the course of his research on students at the University of East Anglia. He writes:
As the results were analysed using SPSS, more closed questions were asked, which I feel restrains scope and didn't give the interviewee a chance for personal expression through providing a specific range of answers.
This was an issue that was decided would be overlooked, as the most important thing was that the results could be analysed and patterns drawn. Although open questions provide more qualitative data, they are difficult to apply to any kind of scale and therefore are not easy to compare.
As he notes, closed questions do not readily give respondents 'a chance for personal expression', but the data deriving from them are easier to analyse. On the other hand, open questions may give richer qualitative data but are not easy to analyse for quantitative analysis.
To read more about Joe's research experiences, go to the Online Resource Centre that accompanies this book at: www.oxfordtextbooks.co.uk/orc/brymansrm4e/

Types of questions
It is worth bearing in mind that, when you are employing a structured interview or self-completion questionnaire, you will probably be asking several different types of question. There are various ways of classifying these, but here are some prominent types of question:
• Personal factual questions. These are questions that ask the respondent to provide personal information, such as age, education, occupation, marital status, income, and so on. This kind of question also includes questions about behaviour. Such factual questions may have to rely on the respondents' memories, as when they are asked about such things as frequency of church attendance, how often they visit the cinema, or when they last ate out in a restaurant.
• Factual questions about others. Like the previous type, this one asks for personal information about others, sometimes in combination with the respondent. An example of such a question would be one about household income, which would require respondents to consider their own incomes in conjunction with those of their partners. Charles and Kerr's (1988) question about who is involved in meal preparation (see Research in focus 11.1) is one that asked wives what they and their husbands do when preparing meals. Indeed, a criticism of such research is precisely that it relies on the possibly distorted views of respondents concerning their own and others' behaviour (Beardsworth and Keil 1997). Like personal factual questions, an element of reliance on memory recall is likely to be present.
• Informant factual questions. Sometimes, we place people who are interviewed or who complete a questionnaire in the position of informants rather than of respondents answering questions about themselves. This kind of question can also be found in certain contexts, as when people are asked about such things as the size of the firm for which they work, who owns it, whether it employs certain technologies, and whether it has certain specialist functions. Such questions are essentially about characteristics of an entity of which they have knowledge, in this case, a firm. However, informant factual questions may also be concerned with behaviour: the Charles and Kerr (1988) questions are examples of this kind, in that the person being asked the question is being asked to reply about behaviour in terms of the household or family.
• Questions about attitudes. Questions about attitudes are very common in both structured interview and self-completion questionnaire research. The Likert scale is one of the most frequently encountered formats for measuring attitudes.
• Questions about beliefs. Respondents are frequently asked about their beliefs, possibly religious and political beliefs.
Another form of asking questions about beliefs is when respondents are asked whether they believe that certain matters are true or false—for example, a question asking whether the respondent believes the UK is better off as a result of being a member of the European Union. Alternatively, in a survey about crime, respondents might be asked to indicate whether they believe that the incidence of certain crimes is increasing.
• Questions about normative standards and values. Respondents may be asked to indicate what principles of behaviour influence them or they hold dear. The elicitation of such norms of behaviour is likely to have considerable overlap with questions about attitudes and beliefs, since norms and values can be construed as having elements of both.
• Questions about knowledge. Questions can sometimes be employed to 'test' respondents' knowledge in an area. For example, as part of their study of the role of the mass media in the public understanding of science, Hargreaves et al. (n.d.) asked survey respondents to answer a large number of knowledge questions relating to scientific issues. The questions were asked in 2002 of 1,000 respondents on two separate occasions to establish whether there had been any change in knowledge levels. One question asked: 'Some recent research has suggested that there might be a link between the MMR [measles, mumps, and rubella] vaccine and which medical disorder?' Respondents were given five response alternatives: blindness; dyslexia; Down's syndrome; autism; and don't know. More or less equal proportions of the sample (two-thirds) gave the correct answer of autism on each occasion.
Most structured interview schedules and self-completion questionnaires will comprise more than one, and often several, of these types of question. It is important to bear in mind the distinction between different types of question. There are a number of reasons for this.
• It is useful to keep the distinctions in mind because they force you to clarify in your own mind what you are asking about, albeit in rather general terms.
• It will help to guard against asking questions in an inappropriate format. For example, a Likert scale is entirely unsuitable for asking factual questions about behaviour.
• When building scales like a Likert scale, it is best not to mix different types of question. For example, attitudes and beliefs sound similar, and you may be tempted to use the same format for mixing questions about them. However, it is best not to do this and instead to have separate scales for attitudes and beliefs. If you mix them, the questions cannot really be measuring the same thing, so that measurement validity is threatened.

Rules for designing questions
Over the years, numerous rules (and rules of thumb) have been devised in connection with the dos and don'ts of asking questions. In spite of this, it is one of the easiest areas for making mistakes. There are three simple rules of thumb as a starting point; beyond that the rules specified below act as a means of avoiding further pitfalls.

General rules of thumb
Always bear in mind your research questions
The questions that you will ask in your self-completion questionnaire or structured interview should always be geared to answering your research questions. This first rule of thumb has at least two implications. First, it means that you should make sure that you ask questions that relate to your research questions. Ensure, in other words, that the questionnaire questions you ask will allow your research questions to be addressed. You will definitely not want to find out at a late stage that you forgot to include some crucial questions. Second, it means that there is little point in asking questions that do not relate to your research questions. It is also not fair to waste your respondents' time answering questions that are of little value.

What do you want to know?
Rule of thumb number two is to decide exactly what it is you want to know.
Consider the seemingly harmless question:
Do you have a car?
What is it that the question is seeking to tap? Is it car ownership? If it is car ownership, the question is inadequate, largely because of the ambiguity of the word 'have'. The question can be interpreted as: personally owning a car; having access to a car in a household; and 'having' a company car or a car for business use. Thus, an answer of 'yes' may or may not be indicative of car ownership. If you want to know whether your respondent owns a car, ask him or her directly about this matter. Similarly, there is nothing wrong with the question:
How many children do you have?
However, if what you are trying to address is the standard of living of a person or household, the crucial issue is how many are living at home.

How would you answer it?
Rule of thumb number three is to put yourself in the position of the respondent. Ask yourself the question and try to work out how you would reply. If you do this, there is at least the possibility that the ambiguity that is inherent in the 'Do you have a car?' question will manifest itself, and its inability to tap car ownership would become apparent. Let us say as well that there is a follow-up question to the previous one:
Have you driven the car this week?
Again, this looks harmless, but if you put yourself in the role of a respondent, it will be apparent that the phrase 'this week' is vague. Does it mean the last seven days or does it mean the week in which the questioning takes place, which will, of course, be affected by such things as whether the question is being asked on a Monday or a Friday? In part, this issue arises because the question designer has not decided what the question is about. Equally, however, a moment's reflection in which you put yourself in the position of the respondent might reveal the difficulty of answering this question.
Taking account of these rules of thumb and the following rules about asking questions may help you to avoid the more obvious pitfalls.

Specific rules when designing questions
Avoid ambiguous terms in questions
Avoid terms such as 'often' and 'regularly' as measures of frequency. They are very ambiguous, because respondents will operate with different frames of reference when employing them. Sometimes, their use is unavoidable, but when there is an alternative that allows actual frequency to be measured, this will nearly always be preferable. So, a question like:
How often do you usually visit the cinema?
Very often ____
Quite often ____
Not very often ____
Not at all ____
suffers from the problem that, with the exception of 'not at all', the terms in the response categories are ambiguous. Instead, try to ask about actual frequency, such as:
How frequently do you usually visit the cinema?
(Please tick whichever category comes closest to the number of times you visit the cinema)
More than once a week ____
Once a week ____
2 or 3 times a month ____
Once a month ____
A few times a year ____
Once a year ____
Less than once a year ____
Alternatively, you might simply ask respondents about the number of times they have visited the cinema in the previous four weeks.
Words like 'family' are also ambiguous, because people will have different notions of who makes up their family. As previously noted, words like 'have' can also be sources of ambiguity. It is also important to bear in mind that certain common words, such as 'dinner' and 'book', mean different things to different people. For some, dinner is a midday snack, whereas for others it is a substantial evening meal. Similarly, some people refer to magazines or to catalogues and brochures as books, whereas others work with a more restricted definition. In such cases, it will be necessary to define what you mean by such terms.

Avoid long questions
It is commonly believed that long questions are undesirable. In a structured interview the interviewee can lose the thread of the question, and in a self-completion questionnaire the respondent may be tempted to omit such questions or to skim them and therefore not give them sufficient attention. However, Sudman and Bradburn (1982) have suggested that this advice applies better to attitude questions than to ones that ask about behaviour. They argue that, when the focus is on behaviour, longer questions have certain positive features in interviews—for example, they are more likely to provide memory cues and they facilitate recall because of the time taken to complete the question. However, by and large, the general advice is to keep questions short.

Avoid double-barrelled questions
Double-barrelled questions are ones that in fact ask about two things. The problem with this kind of question is that it leaves respondents unsure about how best to respond. Take the question:
How satisfied are you with pay and conditions in your job?
The problem here is obvious: the respondent may be satisfied with one but not the other. Not only will the respondent be unclear about how to reply, but any answer that is given is unlikely to be a good reflection of the level of satisfaction with pay and conditions. Similarly,
How frequently does your husband help with cooking and cleaning?
suffers from the same problem. A husband may provide extensive help with cooking but be totally uninvolved in cleaning, so that any stipulation of frequency of help is going to be ambiguous and to create uncertainty for respondents.
The same rule applies to fixed-choice answers. In Research in focus 11.3, one of Schuman and Presser's (1981) answers is:
Work that is pleasant and people are nice to work with.
While there is likely to be a symmetry between the two ideas in this answer—pleasant work and nice people—there is no necessary correspondence between them. Pleasant work may be important for someone, but he or she may be relatively indifferent to the issue of how pleasant their co-workers are. A further instance of a double-barrelled question is provided in Thinking deeply 11.1.
Double-barrelled questions are quite a common feature of even quite well-known surveys. Timming (2009) has pointed out that there are several such questions in the Workplace Employment Relations Survey (WERS) of 2004.
The questionnaire can be found at: www.wers2004.info/pdf/Vol%201%20(part%202)%20-%20Technical%20Report.pdf (accessed 1 February 2011). This survey is referred to in Chapter 14. For example, one of the questions asks employees:
Overall, how good would you say managers at this workplace are at . . .
It then lists three areas and the respondent has to reply on a scale: very good, good, neither good nor poor, poor, very poor (there is also a 'Don't know' option). The three areas are:
Seeking the views of employees or employee representatives
Responding to suggestions from employees or employee representatives
Allowing employees or employee representatives to influence final decisions
In the case of each of these questions, the WERS researchers use the phrase 'employees or employee representatives'. Timming argues that respondents could hold quite different views for employees as against employee representatives regarding how good managers are in these three respects. Strictly speaking the researchers should ask separate questions with respect to both employees and employee representatives. Further, he identifies several other double-barrelled questions in the WERS questionnaire. Regarding one of the other double-barrelled questions, Forth et al. (2010: 58) in a reply to Timming's article argue that asking separate questions 'would arguably add little to the overall stock of knowledge emerging from WERS, yet would inevitably lengthen the questionnaire'. This is a reasonable point to make, and the point has been made several times in this book that all researchers have to wrestle with such practical considerations. However, the problem remains: respondents will be unsure how to reply to most double-barrelled questions.

Thinking deeply 11.1
Matching question and answers in closed questions (and some double-barrelled questions too)
While the first edition of this book was being prepared, I was reading a novel whose publisher had inserted a feedback questionnaire within its pages. At one point in the questionnaire there is a series of Likert-style items regarding the book's quality. In each case, the respondent is asked to indicate whether the attribute being asked about is: poor; acceptable; average; good; or excellent. However, in each case, the items are presented as questions, for example:
Was the writing elegant, seamless, imaginative?
The problem here is that an answer to this question is 'yes' or 'no'. At most, we might have gradations of yes and no, such as: definitely; to a large extent; to some extent; not at all. However, 'poor' or 'excellent' cannot be answers to this question. The problem is that the questions should have been presented as statements, such as:
Please indicate the quality of the book in terms of each of the following criteria:
The elegance of the writing:
Poor ____ Acceptable ____ Average ____ Good ____ Excellent ____
Of course, I have changed the sense slightly here, because, as it is stated, a further problem with the question is that it is a double-barrelled question. In fact, it is 'treble-barrelled', because it actually asks about three attributes of the writing in one. The reader's views about the three qualities may vary. A similar question asks:
Did the plot offer conflict, twists, and a resolution?
Again, not only does the question imply a 'yes' or 'no', it actually asks about three attributes. How would you answer if you had different views about each of the three criteria? It might be argued that the issue is a nit-picking one: someone reading the question obviously knows that he or she is being asked to rate the quality of the book in terms of each attribute. The problem is that we simply do not know what the impact might be of a disjunction between question and answer, so you may as well get the connection between question and answers right (and do not ask double- or treble-barrelled questions either!).

Avoid very general questions
It is easy to ask a very general question when in fact what is wanted is a response to a specific issue. The problem with questions that are very general is that they lack a frame of reference. Thus,
How satisfied are you with your job?
seems harmless but it lacks specificity. Does it refer to pay, conditions, the nature of the work, or all of these? If there is the possibility of such diverse interpretations, respondents are likely to vary in their interpretations too, and this will be a source of error.
My favourite general question comes from Karl Marx's Enquête ouvrière, a questionnaire that was sent to 25,000 French socialists and others (though there is apparently no record of any being returned). The final (one-hundredth) question reads:
What is the general, physical, intellectual, and moral condition of men and women employed in your trade? (Bottomore and Rubel 1963: 218)

Avoid leading questions
Leading or loaded questions are ones that appear to lead the respondent in a particular direction. Questions of the kind 'Do you agree with the view that . . . ?' fall into this class of question. The obvious problem with such a question is that it is suggesting a particular reply to respondents, although invariably they do have the ability to rebut any implied answer. However, it is the fact that they might feel pushed in a certain direction that is undesirable. Such a question as:
Would you agree to cutting taxes further even though welfare provision for the most needy sections of the population might be reduced?
is likely to make it difficult for some people to answer in terms of fiscal probity. But, once again, Marx is the source of a favourite leading question:
If you are paid piece rates, is the quality of the article made a pretext for fraudulent deductions from wages? (Bottomore and Rubel 1963: 215)

Avoid questions that are actually asking two questions
The double-barrelled question is a clear instance of the transgression of this rule, but in addition there is the case of a question like:
Which political party did you vote for at the last general election?
What if the respondent did not vote? It is better to ask two separate questions:
Did you vote at the last general election?
Yes ____
No ____
If YES, which political party did you vote for?
Another way in which more than one question can be asked is with a question like:
How effective have your different job search strategies been?
Very effective ____
Fairly effective ____
Not very effective ____
Not at all effective ____
The obvious difficulty is that, if the respondent has used more than one job search strategy, his or her estimation of effectiveness will vary for each strategy. A mechanism is needed for assessing the success of each strategy rather than forcing respondents to average out their sense of how successful the various strategies were.
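The filter-plus-follow-up structure can be made concrete with a short sketch. The fragment below is a minimal illustration in Python, not a recommendation of any survey package: the helper function, the list of parties, and the 'not applicable' convention are all assumptions made for the example.

```python
# Illustrative routing for a filter question and its follow-up.
# The helper, the response options, and the 'not applicable' convention
# are assumptions for this sketch, not part of any real survey instrument.

def ask(question: str, options: list[str]) -> str:
    """Stand-in for however an interview script collects one reply."""
    reply = input(f"{question} ({'/'.join(options)}): ")
    while reply not in options:
        reply = input(f"Please choose one of {'/'.join(options)}: ")
    return reply

voted = ask("Did you vote at the last general election?", ["Yes", "No"])

if voted == "Yes":
    # Only respondents let through by the filter see the follow-up,
    # so non-voters are never forced into naming a party.
    party = ask("Which political party did you vote for?",
                ["Conservative", "Labour", "Liberal Democrat", "Other"])
else:
    party = "Not applicable"  # recorded as such, rather than as missing data
```

The same routing logic applies to the job search example: a separate effectiveness rating would be collected for each strategy the respondent reports having used, rather than a single averaged judgement.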
Avoid questions that include negatives
The problem with questions with 'not' or similar formulations in them is that it is easy for the respondent to miss the word out when completing a self-completion questionnaire or to miss it when being interviewed. If this occurs, a respondent is likely to answer in the opposite way from the one intended. There are occasions when it is impossible to avoid negatives, but a question like the following should be avoided as far as possible:
Do you agree with the view that students should not have to take out loans to finance higher education?
Instead, the question should be asked in a positive format. Questions with double negatives should be totally avoided, because it is difficult to know how to respond to them. Oppenheim (1966) gives the following as an example of this kind of question:
Would you rather not use a non-medicated shampoo?
It is quite difficult to establish what an answer of 'yes' or 'no' would actually mean in response to this question. One context in which it is difficult to avoid using questions with negatives is when designing Likert-scale items. Since you are likely to want to identify respondents who exhibit response sets and will therefore want to reverse the direction of your question asking (see Chapter 9), the use of negatives will be difficult to avoid.

Avoid technical terms
Use simple, plain language and avoid jargon. Do not ask a question like:
Do you sometimes feel alienated from work?
The problem here is that many respondents will not know what is meant by 'alienated', and furthermore are likely to have different views of what it means, even if it is a remotely meaningful term to them. Consider the following question:
The influence of the TUC on national politics has declined in recent years.
Strongly agree ____
Agree ____
Undecided ____
Disagree ____
Strongly disagree ____
The use of acronyms like TUC can be a problem, because some people may be unfamiliar with what they stand for.

Does the respondent have the requisite knowledge?
There is little point in asking respondents lots of questions about matters of which they have no knowledge. It is very doubtful whether meaningful data about computer use could be extracted from respondents who have never used or come into direct contact with one.

Make sure that there is a symmetry between a closed question and its answers
A common mistake is for a question and its answers to be out of phase with each other. Thinking deeply 11.1 describes such an instance.

Make sure that the answers provided for a closed question are balanced
A fairly common error when asking closed questions is for the answers that are provided to be unbalanced. For example, imagine that a respondent is given a series of options such as:
Excellent ____
Good ____
Acceptable ____
Poor ____
In this case, the response choices are weighted towards a favourable response. Excellent and Good are both positive; Acceptable is a neutral or middle position; and Poor is a negative response. In other words, the answers are loaded in favour of a positive rather than a negative reply, so that a further negative response choice (perhaps Very poor) is required.

Memory problems
Do not rely too much on stretching people's memories to the extent that the answers for many of them are likely to be inaccurate.
It would be nice to have accurate replies to a question about the number of times respondents have visited the cinema in the previous twelve months, but it is highly unlikely that most will in fact recall events accurately over such a long space of time (other perhaps than those who have not gone at all or who have gone only once or twice in the preceding twelve months). It was for this reason that, in the similar question referred to above, the time frame was predominantly just one month.

Forced-choice rather than tick all that apply
Sometimes, when asking a question that allows the respondent to select more than one answer, there is an instruction that says something like 'Please tick all that apply'. An example might be a question that asks which of a list of sources of regular exercise the respondent has engaged in during the previous six months. The question might look something like this:
Which of the following sources of exercise have you engaged in during the last six months? (Please tick all that apply)
Going to a gym ☐
Sport ☐
Cycling on the road ☐
Jogging ☐
Long walks ☐
Other (please specify) ☐
An alternative way of asking a question like this is to use a conventional forced-choice format, such as:
Have you engaged in the following sources of exercise during the last six months?
                         Yes  No
Going to a gym            ☐   ☐
Sport                     ☐   ☐
Cycling on the road       ☐   ☐
Jogging                   ☐   ☐
Long walks                ☐   ☐
Other (please specify)    ☐   ☐
It is easy to presume that these two ways of asking questions like this where there is the potential for more than one answer are equivalent. However, there is compelling evidence that the second of these two formats (the forced-choice one) is superior. Smyth et al. (2006) have shown that the forced-choice format results in more options being selected. As a result, Dillman et al. (2009) advocate the use of the forced-choice format for this kind of question situation.

Don't know
One area of controversy when asking closed questions is whether to offer a 'don't know' or 'no opinion' option. The issue chiefly relates to questions concerning attitudes. The chief argument for including the 'don't know' option is that not to include one risks forcing people to express views that they do not really hold. Converse and Presser (1986: 35–6) strongly advocate that survey respondents should be offered a 'don't know' option but argue that it should be implemented by using a filter question to filter out those who do not hold an opinion on a topic. This means that the interviewer needs to ask two questions, with the second question just relating to those respondents who do hold an opinion. The alternative argument in connection with 'don't know' is that presenting it as an option allows respondents to select it when they cannot be bothered to think about the issue. In other words, presenting the option may prevent some respondents from doing the required thinking on an issue. A series of experiments conducted in the USA suggests that many respondents who express a lack of opinion on a topic do in fact hold an opinion (Krosnick et al. 2002). It was found that respondents with lower levels of education were especially prone to selecting the 'don't know' option and that questions that are later on in a questionnaire are more likely to suffer from a tendency for 'don't know' to be selected. The latter finding implies a kind of question order effect, a topic that was addressed in Chapter 8.
It implies that respondents become increasingly tired or bored as the questioning proceeds and therefore become prone to laziness in their answers. The researchers conclude that data quality is not enhanced by the inclusion of a 'don't know' option and that it may even be the case that some respondents become inhibited from expressing an opinion that they probably hold. Consequently, these researchers err on the side of not offering a 'don't know' option unless it is felt to be absolutely necessary.

Tips and skills
Common mistakes when asking questions
Over the years, I have read many projects and dissertations based on structured interviews and self-completion questionnaires. I have noticed that a small number of mistakes recur. Here is a list of some of them.
• An excessive use of open questions. Students sometimes include too many open questions. While a resistance to closed questions may be understandable, although not something I would agree with, open questions are likely to reduce your response rate and will cause you analysis problems. Keep the number to an absolute minimum.
• An excessive use of yes/no questions. Sometimes students include lots of questions that provide just a yes/no form of response. This is usually the result of lazy thinking and preparation. The world rarely fits into this kind of response. Take a question like:
Are you satisfied with opportunities for promotion in the firm?
Yes ____
No ____
This does not provide for the possibility that respondents' feelings will not be a simple case of being satisfied or not. People invariably vary in the intensity of their feelings about such things. So why not rephrase it as:
How satisfied are you with opportunities for promotion in the firm?
Very satisfied ____
Satisfied ____
Neither satisfied nor dissatisfied ____
Dissatisfied ____
Very dissatisfied ____
• Students often fail to give clear instructions on self-completion questionnaires about how the questions should be answered. Specify whether you want a tick, something to be circled or deleted, or whatever. If only one response is required, make sure you say so—for example, 'tick the answer that comes closest to your view'.
• Be careful about letting respondents choose more than one answer. Sometimes it is unavoidable, but questions that allow more than one reply are often a pain to analyse. If you do want to ask a question for more than one answer, note the previous advice suggesting that a forced-choice format (which is less of a pain to analyse) tends to be superior to a 'tick all that apply' one.
• In spite of the fact that I always warn about the problems of overlapping categories, students still formulate closed answers that are not mutually exclusive. In addition, some categories may be omitted. For example:
How many times per week do you use public transport?
1–3 times ____
3–6 times ____
6–9 times ____
More than 10 times ____
Not only does the respondent not know where to answer if his or her answer might be 3 or 6 times; there is no answer for someone who would want to answer 10.
• Students sometimes do not ensure the answers correspond to the question. For example:
Do you regularly go to your gym?
More than once a week ____
Once a week ____
2 or 3 times a month ____
Once a month ____
The problem here is that the answer to the question is logically either 'yes' or 'no'. However, the student quite sensibly wants to gain some idea of frequency (something that I would agree with in the light of my second point in this list!).
The problem is that the question and the response categories are out of kilter. The student Asking questions 261 A form of asking mainly closed questions that has been used in connection with the examination of people’s normative standards is the vignette technique. The technique essentially comprises presenting respondents with one or more scenarios and then asking them how they would respond when confronted with the circumstances of that scenario. Research in focus 11.4 describes a vignette that was employed in the context of a study of family obligations in Britain. The aim was to elicit respondents’ normative judgements about how family members should respond to relatives who are in need and indeed who should do the responding. first needs to ascertain whether the respondent goes to a gym and then should ask a question about frequency, like: How frequently do you go to your gym in any month? More than once a week ____ Once a week ____ 2 or 3 times a month ____ Once a month ____ • Students sometimes fail to provide a time frame (and one that is appropriate) with their questions. Thus, the question ‘How much do you earn?’ is hopeless because it fails to provide the respondent with a time frame. Is it per week, per month, or per annum? A further though separate problem is that respondents need to be told whether the figure required should be gross (i.e. before deductions for tax, national insurance, etc.) or net (i.e. after deductions). In view of the sensitivities surrounding a person’s salary, it is often best not to ask the question this way but to provide instead a set of income groupings on a show card (for example, below £10,000; £10,000–£19,999; £20,000–£29,999, etc.). • Do remember the advice given in the text about the importance of formatting that makes it easy for respondents to answer and that also reduces the likelihood of them making mistakes in answering. While I was writing this revision, I was given a card by someone who had carried out some work on my house that had to be sent to my local trading standards office. It contained a number of questions about my satisfaction with aspects of his work. At the end of the short questionnaire, the following question (or is it two questions?) was presented: Please tick your age category Under 50 60–64 65–74 75+ Male Female It is difficult to know where to start with this question. One obvious problem is that it seems to assume that nobody will be aged in the 50–59 age range. The second problem is that the answer categories for someone’s age are wrapped around onto a second line. This is really not desirable. If your answer categories are to have a horizontal format, keep them on one line. If you cannot do that because of space problems, make the answers vertical. However, the most bizarre aspect is the way the categories Male and Female appear apparently on the same line as an age band. Also, they appear without a question! Do try to bear in mind the importance of good formatting and do remember that people can be aged between 50 and 59! If you never committed any of these ‘sins’, you would be well on the way to producing a questionnaire that would stand out from the rest, provided you took into account the other advice I give in this chapter as well! 
Vignette questions

A form of asking mainly closed questions that has been used in connection with the examination of people's normative standards is the vignette technique. The technique essentially comprises presenting respondents with one or more scenarios and then asking them how they would respond when confronted with the circumstances of each scenario. Research in focus 11.4 describes a vignette that was employed in the context of a study of family obligations in Britain. The aim was to elicit respondents' normative judgements about how family members should respond to relatives who are in need and indeed about who should do the responding.

Research in focus 11.4
A vignette to establish family obligations

Jim and Margaret Robinson are a married couple in their early forties. Jim's parents, who live several hundred miles away, have had a serious car accident and they need long-term daily care and help. Jim is their only son. He and his wife both work for the Electricity Board and they could both get transfers so they could work near his parents.

Card E
(a) From the card, what should Jim and Margaret do?
Move to live near Jim's parents
Have Jim's parents move to live with them
Give Jim's parents money to help them pay for daily care
Let Jim's parents make their own arrangements
Do something else (SPECIFY)
Don't know

(b) In fact, Jim and Margaret are prepared to move and live near Jim's parents, but teachers at their children's school say that moving might have a bad effect on their children's education. Both children will soon be taking O-levels [predecessors to the current GCSE examinations]. What should Jim and Margaret do? Should they move or should they stay?
Move
Stay

(c) Why do you think they should move/stay?
Probe fully verbatim

(d) Jim and Margaret do decide to go and live near Jim's parents. A year later Jim's mother dies and his father's condition gets worse, so that he needs full-time care. Should Jim or Margaret give up their jobs to take care of Jim's father? IF YES: Who should give up their job, Jim or Margaret?
Yes, Jim should give up his job
Yes, Margaret should give up her job
No, neither should give up their jobs
Don't know/Depends

Source: Finch (1987: 108).

The vignette is designed to tease out respondents' norms concerning family obligations in respect of several factors: the nature of the care (whether long or short term and whether it should entail direct involvement or just the provision of resources); the significance of geographical proximity; the dilemma of paid work and care; and the gender component of who should give up a job if that was deemed the appropriate course of action. There is a gradual increase in the specificity of the situation facing Jim and Margaret and therefore the respondent. Initially, we do not know whether Jim and Margaret are prepared to move; then we learn that they are; and then we learn that they do in fact decide to move, which leads to the question of whether one of them should become a full-time carer.

Many aspects of the issues being tapped by the series of questions could be accessed through attitude items, such as:

When a working couple decides that one of them should care for parents, the wife should be the one to give up her job.
Strongly agree ____
Agree ____
Undecided ____
Disagree ____
Strongly disagree ____

The advantage of the vignette over such an attitude question is that it anchors the choice in a situation and as such reduces the possibility of an unreflective reply. Finch (1987) also argues that, when the subject matter is a sensitive area (in this case, dealing with family relationships), there is the possibility that the questions may be seen as threatening by respondents. Respondents may feel that they are being judged by their replies. Finch argues that the fact that the questions are about other people (and imaginary ones at that) permits a certain amount of distance between the questioning and the respondent and results in a less threatening context.
However, it is hard to believe that respondents will not feel that their replies will at least in part be seen as reflecting on them, even if the questions are not about them as such. One obvious requirement of the vignette technique is that the scenarios must be believable, so that considerable effort needs to go into the construction of credible situations. Finch points to some further considerations in relation to this style of questioning. It is more or less impossible to establish how far assumptions are being made about the characters in the scenario (such as their ethnicity) and what the significance of those assumptions might be for the validity and comparability of people's replies. It is also difficult to establish how far people's answers reflect their own normative views or indeed how they themselves would act when confronted with the kinds of choices presented in the scenarios. However, in spite of these reservations, the vignette technique warrants serious consideration when the research focus is concerned with an area that lends itself to this style of questioning.

Piloting and pre-testing questions

It is always desirable, if at all possible, to conduct a pilot study before administering a self-completion questionnaire or structured interview schedule to your sample. In fact, the desirability of piloting such instruments is not solely to do with trying to ensure that survey questions operate well; piloting also has a role in ensuring that the research instrument as a whole functions well. Pilot studies may be particularly crucial in relation to research based on the self-completion questionnaire, since there will not be an interviewer present to clear up any confusion. With interviews, persistent problems can be addressed once they have emerged after a few interviews have been carried out; with self-completion questionnaires, which are sent or handed out in large numbers, considerable wastage may occur before any problems become apparent. Here are some uses of pilot studies in survey research.

• If the main study is going to employ mainly closed questions, open questions can be asked in the pilot to generate the fixed-choice answers. Glock (1988), for example, extols the virtues of conducting qualitative interviews in preparation for a survey for precisely this kind of reason.

• Piloting an interview schedule can provide interviewers with some experience of using it and can infuse them with a greater sense of confidence.

• If everyone (or virtually everyone) who answers a question replies in the same way, the resulting data are unlikely to be of interest because they do not form a variable. A pilot study allows such a question to be identified (a simple check of this kind is sketched at the end of this section).

• In interview surveys, it may be possible to identify questions that make respondents feel uncomfortable and to detect any tendency for respondents' interest to be lost at certain junctures.

• Questions that seem not to be understood (more likely to be noticed in an interview than in a self-completion questionnaire context) or questions that are often not answered should become apparent. The latter problem of questions being skipped may be due to confusing or threatening phrasing, poorly worded instructions, or confusing positioning in the interview schedule or questionnaire. Whatever the cause might be, such missing data are undesirable, and a pilot study may be instrumental in identifying the problem.

• Pilot studies allow the researcher to determine the adequacy of instructions to interviewers, or to respondents completing a self-completion questionnaire.

• It may be possible to consider how well the questions flow and whether it is necessary to move some of them around to improve this feature.

The pilot should not be carried out on people who might have been members of the sample that would be employed in the full study. One reason for this is that, if you are seeking to employ probability sampling, the selecting-out of a number of members of the population or sample may affect the representativeness of any subsequent sample. If possible, it is best to find a small set of respondents who are comparable to members of the population from which the sample for the full study will be taken.
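The check for questions that fail to form a variable lends itself to a quick automated pass over the pilot data. The sketch below is again illustrative only: the layout of the pilot data and the 90 per cent threshold are my own assumptions rather than any standard, but the idea of flagging any closed question whose modal answer accounts for nearly all replies carries over to however your own data are stored.

# A minimal sketch (assumed data layout, illustrative threshold) for
# flagging pilot questions to which (virtually) everyone gives the
# same answer and which therefore will not form a variable.
from collections import Counter

def flag_low_variance(pilot_answers, threshold=0.9):
    """Return questions whose modal answer exceeds the given share of replies."""
    flagged = []
    for question, answers in pilot_answers.items():
        counts = Counter(answer for answer in answers if answer is not None)
        if not counts:
            continue  # everyone skipped the question; a different problem
        modal_answer, n = counts.most_common(1)[0]
        share = n / sum(counts.values())
        if share >= threshold:
            flagged.append((question, modal_answer, share))
    return flagged

# Hypothetical pilot replies to two closed questions:
pilot = {
    "satisfied_with_promotion": ["yes"] * 19 + ["no"],
    "uses_public_transport": ["often"] * 8 + ["sometimes"] * 7 + ["never"] * 5,
}
for question, answer, share in flag_low_variance(pilot):
    print(f"{question}: {share:.0%} answered '{answer}' - reconsider this item")
# Only the first question is flagged: 95% of pilot respondents said 'yes'.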
Using existing questions

One final observation regarding the asking of questions is that you should also consider using questions that have been employed by other researchers for at least part of your questionnaire or interview schedule. This may seem like stealing, and you would be advised to contact the researchers concerned regarding the use of questions they have devised. However, employing existing questions allows you to use questions that have in a sense been piloted for you. If any reliability and validity testing has taken place, you will know about the measurement qualities of the existing questions you use. A further advantage of using existing questions is that they allow you to draw comparisons with other research. This might allow you to indicate whether change has occurred or whether the findings of other studies apply to your sample. For example, if you are researching job satisfaction, using one of the standard job satisfaction scales would allow you to compare your findings with another researcher's. Alternatively, using the same questions as another researcher may allow you to explore whether the location of your sample appears to make a difference to the findings. While you need to be cautious about inferring too much from such comparisons between your own and other researchers' data, the findings can nonetheless be illuminating. At the very least, examining questions used by others might give you some ideas about how best to approach your own questions, even if you decide not to make use of them as they stand.

The use of existing questions is a common practice among researchers. For example, the researchers who developed the scale designed to measure attitudes to vegetarians (Research in focus 7.5) used several existing questions devised for measuring other concepts in which they were interested, such as measures of authoritarianism and political conservatism. These other measures had known properties in terms of their reliability and validity. Similarly, Walklate (2000: 194) describes how, in developing a survey instrument to be administered to possible victims of crime, she and her colleagues used 'tried and tested questions taken from pre-existing criminal victimization surveys amended to take account of our own more localized concerns'.

The UK Data Archive (UKDA), which aims to improve standards in UK survey research, has a very good question bank providing access to questionnaires from major surveys (including the census) and associated commentary to assist survey design. It is freely available and can be found at the following site: http://surveynet.ac.uk/sqb (accessed 28 September 2010).
Questions from major surveys are presented in the context of the questionnaire in which they appeared and are accompanied by technical details. The search mechanism allows you to search for a particular questionnaire or to enter keywords in order to find instances of particular topics being asked about in questions.

Checklist

Issues to consider for your structured interview schedule or self-completion questionnaire

○ Have you devised a clear and comprehensive way of introducing the research to interviewees or questionnaire respondents?
○ Have you considered whether there are any existing questions used by other researchers to investigate this topic that could meet your needs?
○ Do the questions allow you to answer all your research questions?
○ Could any questions that are not strictly relevant to your research questions be dropped?
○ Have you tried to put yourself in the position of answering as many of the questions as possible?
○ Have you piloted the questionnaire with some appropriate respondents?
○ If it is a structured interview schedule, have you made sure that the instructions to yourself and to anyone else involved in interviewing are clear (for example, with filter questions, is it clear which questions should be missed out)?
○ If it is a self-completion questionnaire, have you made sure that the instructions to respondents are clear (for example, with filter questions, is it clear which questions should be missed out)?
○ Are instructions about how to record responses clear (for example, whether to tick or circle or delete; whether more than one response is allowable)?
○ Have you included as few open questions as possible?
○ Have you allowed respondents to indicate levels of intensity in their replies, so that they are not forced into 'yes' or 'no' answers where intensity of feeling may be more appropriate?
○ Have you ensured that questions and their answers do not span more than one page?
○ Have socio-demographic questions been left until the end of the questionnaire?
○ Are questions relating to the research topic at or very close to the beginning?
○ Have you taken steps to ensure that the questions you are asking really do supply you with the information you need?
○ Have you taken steps to ensure that there are no:
  ○ Ambiguous terms in questions or response choices?
  ○ Long questions?
  ○ Double-barrelled questions?
  ○ Very general questions?
  ○ Leading questions?
  ○ Questions that are asking about two or more things?
  ○ Questions that include negatives?
  ○ Questions using technical terms?
○ Have you made sure that your respondents will have the requisite knowledge to answer your questions?
○ Is there an appropriate match between your questions and your response choices?
○ Have you made sure that your response choices are properly balanced?
○ Have you ensured that your response choices are exhaustive?
○ Have you ensured that your response choices do not overlap?
○ Do any of your questions rely too much on your respondents' memory?
○ Have you ensured that there is a category of 'other' (or a similar category such as 'unsure' or 'neither agree nor disagree'), so that respondents are not forced to answer in a way that is not indicative of what they think or do?

If you are using a Likert-scale approach:
○ Have you included some items that can be reverse scored in order to minimize response sets? (A short sketch of reverse scoring follows this checklist.)
○ Have you made sure that the items really do relate to the same underlying cluster of attitudes so that they can be aggregated?
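On the reverse-scoring point in the checklist above: reverse-keyed items are easy to mis-score by hand, and the recoding is mechanical. The following sketch is purely illustrative; the item names and the 5-point coding (1 = strongly disagree, 5 = strongly agree) are my own assumptions rather than those of any particular scale, but it shows the standard recoding of a reverse-keyed item as 6 minus the raw score before items are aggregated.

# A minimal sketch (hypothetical item names, assumed 5-point coding) of
# reverse scoring on a Likert scale. Reverse-keyed items are recoded as
# 6 - raw score before the items are summed into a scale total.

REVERSE_KEYED = {"item2", "item4"}  # hypothetical reverse-worded items

def scale_score(responses):
    """Sum a respondent's Likert items, reversing the reverse-keyed ones."""
    total = 0
    for item, raw in responses.items():
        total += (6 - raw) if item in REVERSE_KEYED else raw
    return total

# A respondent who ticks 'strongly agree' (5) on every item, i.e. a
# possible response set:
respondent = {"item1": 5, "item2": 5, "item3": 5, "item4": 5}
print(scale_score(respondent))
# Prints 12 rather than 20: the reversed items pull an undiscriminating
# respondent towards the scale midpoint, which is the point of including them.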
Tips and skills
Getting help in designing questions

When designing questions, as I suggested above, try to put yourself in the position of someone who has been asked to answer the questions. This can be difficult, because some (if not all!) of the questions may not apply to you—for example, if you are a young student doing a survey of retired people. However, try to think about how you would reply. This means concentrating not just on the questions themselves but also on the links between the questions. For example, do filter questions work in the way you expect them to? Then try the questions out on some people you know, as in a pilot study. Ask them to be critical and to consider how well the questions connect to each other. Also, do look at the questionnaires and structured interview schedules that other experienced researchers have devised. They may not have asked questions on your topic, but the way they have asked their questions should give you an idea of what to do and what to avoid when designing such instruments.

Key points

● While open questions undoubtedly have certain advantages, closed questions are typically preferable for a survey, because of the ease of asking questions and of recording and processing answers.
● This point applies particularly to the self-completion questionnaire.
● Open questions of the kind used in qualitative interviewing have a useful role in relation to the formulation of fixed-choice answers and piloting.
● It is crucial to learn the rules of question-asking to avoid some of the more obvious pitfalls.
● Remember always to put yourself in the position of the respondent when asking questions and to make sure you will generate data appropriate to your research questions.
● Piloting or pre-testing may clear up problems in question formulation.

Questions for review

Open or closed questions?
● What difficulties do open questions present in survey research?
● Why are closed questions frequently preferred to open questions in survey research?
● What are the limitations of closed questions?
● How can closed questions be improved?

Types of question
● What are the main types of question that are likely to be used in a structured interview or self-completion questionnaire?

Rules for designing questions
● What is wrong with each of the following questions?

What is your annual salary?
Below £10,000 ____
£10,000–15,000 ____
£15,000–20,000 ____
£20,000–25,000 ____
£25,000–30,000 ____
£30,000–35,000 ____
£35,000 and over ____

Do you ever feel alienated from your work?
All the time ____
Often ____
Occasionally ____
Never ____

How satisfied are you with the provision of educational services and social services in your area?
Very satisfied ____
Fairly satisfied ____
Neither satisfied nor dissatisfied ____
Fairly dissatisfied ____
Very dissatisfied ____

What is your marital status?
Single ____
Married ____
Divorced ____

Vignette questions
● In what circumstances are vignette questions appropriate?

Piloting and pre-testing questions
● Why is it important to pilot questions?

Using existing questions
● Why might it be useful to use questions devised by others?

Online Resource Centre
www.oxfordtextbooks.co.uk/orc/brymansrm4e/
Visit the Online Resource Centre that accompanies this book to enrich your understanding of asking questions. Consult web links, test yourself using multiple choice questions, and gain further guidance and inspiration from the Student Researcher's Toolkit.