9 PROGRAMMING THE SURVEY

You are going to choose a type of answer format: A. Radio buttons, B. Checkboxes, C. Dropdown menu, D. Text box.

9.1 Introduction
9.2 Survey layout
9.3 Access control
9.4 Scrolling versus paging design
9.5 Welcome screen and thank you message
9.6 Answer formats
9.6.1 Radio buttons
9.6.2 Checkboxes
9.6.3 Dropdown menus
9.6.4 Slider bars and visual analogue scales
9.6.5 Text fields
9.6.6 Grids or matrixes
9.6.7 Labelling of scalar questions
9.7 Visual design things you need to know
9.8 Question and instruction text
9.9 Routing
9.10 Empty answers
9.11 Going back in the survey
9.12 Error messages
9.13 Don't know and won't tell
9.14 Interactive features
9.15 Progress indicator
9.16 Randomizations
9.17 Fills
9.18 Numbering
9.19 Dependent interviewing
9.20 Paradata
9.21 Pretesting
Summary

9.1 Introduction

Designing a survey is not as easy as it may seem, especially when a researcher has many tools available, as is the case with online surveys. A researcher has to make all kinds of design choices, such as using a scrolling or paging design, answer options, routing, empty answers, error messages, don't know options, etc. This chapter discusses all these design choices as well as pretesting, which is important for determining the quality of the survey instrument.

9.2 Survey layout

Before going into detail about programming the online survey, I want to make a note regarding the general survey layout. Many people think it is important that the survey 'looks good'. They want their survey to have a certain appearance, often linked to the company's house style. I think it is good for a survey to have a similar appearance to the company's house style, since that can help in linking the survey to the organization. This can have a positive effect on response rates, especially when the company or organization has a good reputation and some kind of authority. On the other hand, research shows little evidence of a (positive) effect of the layout of the survey (in colour, font, placement of logos, etc.). It is therefore more important to devote time and energy to the survey instrument (questions, answer categories, checks, routing, etc.) than to the appearance of the survey.

The intentions of the researcher for the online survey are mediated through the hardware, software and user preferences of the respondent. Make sure you are attentive to differences in the visual appearance of your survey that can result from different personal settings, browser systems, etc. It is very likely that the questionnaire seen by the respondent will not be exactly the same as that intended by the researcher, because of different operating systems, Web browsers and screen configurations. The fact that one respondent might see something quite different from another respondent forms a significant methodological challenge. It is important to test the survey using different devices, browsers and screen resolutions.

9.3 Access control

Use a personal identification in order to allow people to complete the survey only once and to (try to) make sure that the right people are answering your survey. You can provide potential respondents with a personal link in an email message or provide a general link with a username and password. The password should be kept simple and should be adjustable for respondents' convenience.

A personal link should launch the survey directly from the email client. The URL should be short, understandable and easy to retype. This is because long URLs may wrap over two or more lines, causing the link not to work or to be difficult to copy and paste. Good survey software allows users to log in multiple times (so that they can break off the survey and log in again in their own time), but once the survey has been finished it should be shut down. Research suggests that automatic logins result in higher response compared to manual logins, possibly explained by the fact that less effort is needed from the respondent in the first case.
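To make this concrete, here is a minimal sketch in Python of how personal links and re-entry might be handled. It is not tied to any survey package; the base URL, the token length and the function names are hypothetical.

    import secrets

    BASE_URL = "https://survey.example.org/s/"  # hypothetical survey URL

    def make_personal_links(respondent_ids):
        """Create a short, hard-to-guess personal link per respondent."""
        links = {}
        for rid in respondent_ids:
            token = secrets.token_urlsafe(6)  # ~8 characters: easy to retype
            links[rid] = BASE_URL + token
        return links

    completed = set()  # tokens of finished interviews

    def may_enter(token):
        """Allow multiple logins until the survey has been finished."""
        return token not in completed

The token is kept short so that the URL remains easy to retype if the email client breaks the link over two lines.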
9.4 Scrolling versus paging design

The first decision you will have to make when programming the survey is whether to use a scrolling or paging design. There is a continuum ranging from scrolling designs, where the entire survey is a single page (single-page survey), to paging designs, where each question is presented on a single page (single-question-per-page survey). There are many design alternatives along this continuum, where survey pages contain several (often related) items. Note that the scrolling (single-page) survey has only one submit button for the entire survey; the paging (single-question-per-page) survey has a submit button for every page (question) in the survey.

The scrolling design is most akin to a paper-based self-administered survey, where respondents can browse back and forth through the survey before submitting the response (Couper, 2008). An advantage of the scrolling design is the fact that the respondent has a (complete) overview of the entire questionnaire. He or she can see the length of the questionnaire and review forthcoming questions by scrolling down to the end of the questionnaire. The similarity to paper-based surveys makes the scrolling design preferable for a multimode study where one wants to minimize the possibility of differential measurement errors arising through mode differences. An important drawback of scrolling designs is the fact that if a respondent closes the browser before pressing the submit button, no information is transmitted to the server and all answers are lost. In addition, checks (range, consistency, calculation, etc.) and routing are more difficult to add and, if they are added, may confuse respondents (e.g. because they pop up once the entire survey is submitted via the submit button).

When to use a scrolling design:
• in short questionnaires;
• when context is important;
• when there is little routing (skip questions);
• when item non-response is not a problem or answers are mandatory.
In other cases, a paging design is preferable.

In a paging design, the respondent answers one or more questions on a page and then presses a 'submit' or 'further' button. The answers are then transmitted to the server, which processes the information and transmits the next page. The paging design easily permits customization of question text, randomization, automated skips, feedback, etc. Data can be retained, so respondents can stop and complete the survey in multiple sessions. A disadvantage of the paging design is the lack of context for the respondent, who is unable to see the entire survey and estimate the survey length.
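The submit-and-next-page cycle of a paging design can be sketched in a few lines of Python. This is a simplified illustration, not a real server: the page names and the in-memory storage are assumptions.

    # Hypothetical three-page survey: each page is submitted separately,
    # so answers survive even if the respondent breaks off halfway.
    PAGES = ["q_gender", "q_sport", "q_satisfaction"]

    answers = {}   # stored on the server after every submit
    progress = {}  # respondent id -> index of the next page

    def submit(rid, page, answer):
        """Process one 'submit' click: store the answer, return the next page."""
        answers.setdefault(rid, {})[page] = answer
        progress[rid] = PAGES.index(page) + 1
        if progress[rid] >= len(PAGES):
            return None  # survey finished -> show the thank you message
        return PAGES[progress[rid]]

Because every click stores the answers server-side, a respondent who closes the browser can later resume where he or she left off.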
Figure 9.1 shows a scrolling design, Figure 9.2 a mixture (several items per screen) and Figure 9.3 a true paging design (one item per page). The examples are in Dutch to emphasize the visual layout.

Figure 9.1 Scrolling design (scroll bar at right hand side)

Figure 9.2 Several items per page (e.g. grid)

Figure 9.3 Paging design: one item per screen

Presenting questions in a matrix or grid (several questions per page) may be a good compromise: it reduces the number of screens without the need for scrolling. Many people do not like to scroll, and that can have an impact on the evaluation of the questionnaire (as demonstrated by Toepoel et al., 2009) and hence on response rates. Scrolling designs often have higher item non-response (Lozar Manfreda et al., 2002; Toepoel et al., 2009) and shorter completion times (mainly caused by the need to press the submit button only once; see Couper et al., 2001; Lozar Manfreda et al., 2002; Toepoel et al., 2009).

A negative side-effect of placing several items per screen is the (sometimes) higher inter-item correlations between items. Items are more likely to be seen as related if grouped on one screen, reflecting a natural assumption that blocks of questions bear on related issues, much as they would during ordinary conversations (Schwarz and Sudman, 1996; Sudman et al., 1996). Research has shown that correlations are higher among items appearing together on a screen than among items separated across several screens, but the effect is small, and differences between pairs of correlations are mostly insignificant (Couper et al., 2001; Toepoel et al., 2009).

9.5 Welcome screen and thank you message

The survey always starts with some kind of welcome screen. This is the first page the respondent sees when he or she opens the survey. Note that the majority of break-offs occur on this initial page. Unfortunately, little research exists on what should be on the welcome screen. The research that exists shows little evidence of text or layout that increases response rates or reduces drop-out. In general, the welcome screen should assure the respondents that they have arrived at the right place and should encourage them to proceed to fill out the questionnaire. The welcome screen should contain some identifying information (where the survey is from and what the survey is about), some lines about privacy and confidentiality, and the estimated time needed to complete the survey. Some additional information can be given, but it is best to keep the text on the welcome screen as brief as possible to prevent respondents from breaking off before they actually start filling out a question. Remember that the welcome screen serves as a 'business card' and should encourage people to proceed to fill out the survey.

Social Integration and Leisure
This survey contains questions about social integration and leisure. The questionnaire is developed by researchers from Tilburg University. Completing the questionnaire takes about ten minutes. There are no right or wrong answers. The researchers are interested in your behavior and opinions. Data will be treated confidentially. Results will be reported on an aggregated level without any identification possible. You can click 'start' to begin.

Figure 9.4 Welcome screen

The survey should always end with some kind of thank you message.

Thank you for filling out our survey. If you want to get updates about the results, please send an email to V.Toepoel@uu.nl

Figure 9.5 Thank you message
9.6 Answer formats

There are several elements available to form questions and answer formats in Web surveys. The most important ones are radio buttons, checkboxes, dropdown menus, slider bars and text fields. Many Web surveys (researchers/programmers) use different formats for the same type of questions. For example, standard questions such as gender, date of birth and education are often presented in different ways: radio buttons (horizontally or vertically aligned), text fields and drop boxes can all be used. Decisions about which element to use are often based on random decision making on the part of the researcher or programmer. Choosing the right element for the right task is the most crucial issue. Answer categories send signals to respondents about what is or is not expected. Answer spaces, appropriate labels on answer categories and the use of visual signals can dramatically improve the number of people who provide what the researchers want. For example, Christian et al. (2007) found that a successive series of visual language manipulations improved respondents' use of the desired format (two digits for the month and four digits for the year) for reporting dates from 45 per cent to 96 per cent. Fewer digits for the month than for the year, the use of symbols (MM/YYYY) instead of words (month year) and the placement of instructions within the foveal view improved the use of the desired format by respondents.

Survey fact: Attentive processing of survey questions refers to giving attention to a small visual field, usually within a visual span of about eight characters (also known as the 'foveal view'), bringing those few elements into the visual working memory, where they are better recalled later (Dillman, 2007).

9.6.1 Radio buttons

Radio buttons are round buttons that can be clicked on to provide an answer. They should be used when a respondent has to select only one response from a range of answer categories. Answer categories should be mutually exclusive. Once a radio button is selected it is impossible to deselect it, unless another alternative is chosen.

On a scale of 1 to 5, where 1 means very dissatisfied and 5 means very satisfied, how satisfied are you with the Dutch education system?
O 1 very dissatisfied
O 2 somewhat dissatisfied
O 3 neutral
O 4 somewhat satisfied
O 5 very satisfied
O don't know

Figure 9.6 Example of radio buttons (vertically aligned)

Radio buttons can be vertically aligned (as in the example in Figure 9.6), but they can also be horizontally aligned.¹ Researchers disagree on what is the best alignment, and you see both versions commonly implemented.

¹ In grid questions (where several questions are placed on a single screen) answer options are almost always horizontally aligned. One-item-per-screen formats often use a vertical alignment of response options. This is more often caused by survey software settings than by deliberation on the part of the researcher/programmer.

Radio buttons can also be multiple banked (presented in several rows/columns, see Figure 9.7). This is often done to avoid the need to scroll. Multiple banking has severe ordering effects, though. If you presented your response options multiply banked as in Figure 9.7, you would probably see that more people would select the options 'skating' and 'swimming' compared to one single column of response options. It is best to avoid multiple banking.

What type of sport do you practice? Choose your main sport.
O soccer      O skating
O hockey      O swimming
O handball    O cycling
O volleyball  O walking
O badminton   O golf
O tennis      O fitness
O gymnastics  O other
O running

Figure 9.7 Example of radio buttons in multiple columns (multiple banked)

If you have such a long list of response options that you really think you need to use multiple columns, it is best to randomize response options if this is possible in your survey software.² In questions where a 'don't know' option is added, make sure that the 'don't know' option is visually different from the substantive options and is not randomized but placed at the end of the list.³ Visual separation can be made by adding extra space between the substantive and non-substantive options (see Figure 9.7), or by using a separate 'don't know' button.

² Randomizing response options is not logical with ordinal questions (the order of response options would be lost) or with nominal questions that have some kind of logical order.
³ The same goes for response options such as 'other, namely'.
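A minimal sketch of this advice in Python (randomize the substantive options, keep 'don't know' fixed at the end), assuming a hypothetical sports question like the one in Figure 9.7:

    import random

    substantive = ["soccer", "hockey", "tennis", "swimming", "skating"]

    def presented_options(options, dont_know="don't know"):
        """Randomize substantive options; keep 'don't know' fixed at the end."""
        shuffled = random.sample(options, k=len(options))
        return shuffled + [dont_know]  # non-substantive option never randomized

    order = presented_options(substantive)
    # Record the presented order next to the answer for later analysis.
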
9.6.2 Checkboxes

Checkboxes are squares that can be ticked to provide an answer. They are used when more than one answer is possible (check-all-that-apply questions). One can check none, one, several or all options, and options can be turned off by clicking on the checkbox again. Good software allows you to program soft or hard checks in a checkbox item, for example, that the option 'none of the above' should not be selected in combination with other selections, or to restrict the number of options selected (e.g. a maximum of three).

Please tell me which of the following foods you use on a daily basis?
□ Milk products
□ Meat
□ Vegetables
□ Bread
□ Fruit
□ Butter or oil

Figure 9.8 Example of checkboxes

Many researchers use checkboxes, but in some cases methodologists argue against their use. Smyth et al. (2006), for example, found that a forced-choice version (yes/no for every item) produced a greater number of endorsed items than the check-all-that-apply version (checkboxes).

One commonly made mistake is to use checkboxes when radio buttons should be used (single response). One could argue, however, that in the case of mixed-mode surveys, the use of checkboxes for both single-choice responses and check-all-that-apply responses in a Web survey most closely parallels a paper version (Couper, 2008) and should therefore be used in a mixed-mode survey with a paper and an electronic version of the survey.
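The soft and hard checks mentioned above could look as follows in Python. The function name and the maximum of three choices are illustrative assumptions.

    def check_checkbox_answer(selected, max_choices=3):
        """Return (ok, message) for a set of ticked checkbox options.
        Hard check: 'none of the above' must not be combined with other options.
        Soft check: warn when more than max_choices options are ticked."""
        if "none of the above" in selected and len(selected) > 1:
            return False, "You selected 'none of the above' together with other options."
        if len(selected) > max_choices:
            return True, f"Please check: you selected more than {max_choices} options."
        return True, ""
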
9.6.3 Dropdown menus

In dropdown menus answer options are presented in a list that only becomes visible when the respondent presses the arrow on the right hand side (and the list drops down). The researcher can choose to make none, one or several items visible before clicking on the arrow. In the example in Figure 9.9, one item in the list is visible ('French'); in Figure 9.10 the first three options are visible.

Which wine do you prefer?
French
Italian

Figure 9.9 Dropdown menu with one item initially displayed

Figure 9.10 Dropdown menu with three out of ten options initially displayed

Respondents are more likely to notice and use information that is visible than information that is hidden or outside the foveal view. Couper et al. (2004) find evidence that visible response options in a dropdown menu are endorsed more frequently. In addition, dropdown boxes increase a primacy effect (more use of one of the first response options), since they require more action on the part of the respondent before a response option at the bottom of the list can be selected. One commonly made mistake is to use dropdown menus for birth year. It often takes a lot of time to find the correct number, especially the further one has to scroll to select the appropriate year. In all, a researcher should be careful when using a dropdown menu and consider whether it is the best response option.

9.6.4 Slider bars and visual analogue scales

One of the most widely used types of questions is the scalar question, in which a respondent has to provide an answer on a scale. Slider bars or visual analogue scales are often used for these types of questions, especially in market research. Although this type of element is widely used, there is not much research supporting its superiority over other elements such as text boxes and radio buttons.

Figure 9.11 Example of visual analogue scale (0 to 100, from 'worst imaginable health state' to 'best imaginable health state')

Slider bars make use of the 'drag and drop' principle. They are widely used in market research, but research shows that they are very sensitive to measurement error. The initial position of the handle is a matter of ongoing debate, and their usability is also questionable. In visual analogue scales, the respondent has to point and click to provide an answer. The main advantage of visual analogue scales is their extensive range, so that data can be treated as continuous. Funke (2013) compared a radio button format to a visual analogue scale and a slider bar, and found that the simple radio button format performed better with regard to break-offs, item non-response, use of the middle category and response times.

Figure 9.12 Example of slider bar (in Dutch)

9.6.5 Text fields

Text fields can be divided into text boxes and text areas. Text boxes are small and should be used for relatively short input such as one word or a few numbers. Text areas allow lengthy responses and should be used for open-ended questions. In both cases, text and numeric input are allowed. Good software allows you to build soft checks (e.g. an '@' is necessary for an email address). The size of the box or area should match the desired length of the answer. Research has shown that respondents provide the wrong answer if the text field is too long and that longer text fields produce longer responses (Christian and Dillman, 2004; Couper et al., 2001; Smith, 1995).

How old are you?

Figure 9.13 Small text box

What is your address?
street
zip code
city
country
email address

Figure 9.14 Multiple text boxes with size adapted to the required response
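A sketch of such a soft check in Python, assuming a simple (deliberately loose) email pattern; real survey software will have its own validation rules:

    import re

    def soft_check_email(value):
        """Soft check: warn (but do not block) when the input does not look
        like an email address, e.g. when the '@' is missing."""
        if re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", value):
            return None
        return "This does not look like an email address. Continue anyway?"

Because it is a soft check, the respondent can dismiss the warning and proceed with the answer as typed.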
9.6.6 Grids or matrixes

Grids or matrixes are a widely used tool in Web surveys. Grids or matrixes are a series of items where the rows are a set of items and the columns the response options. These response options are commonly the same for all items. Figure 9.15 shows an example of a grid or matrix question.

Figure 9.15 Matrix or grid question with endpoint labels and numbers

The major advantage of matrix or grid questions is the efficient use of space: many questions can be presented on a single screen, speeding up the response process. On the other hand, these types of questions are relatively difficult for respondents, since so much text is presented on a single screen, and response quality tends to be lower compared to single questions (higher item non-response, higher inter-item correlations).
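Because the rows share one set of response options, a grid is naturally represented as a small data structure. A sketch in Python, using two items in the spirit of Figure 9.15; the field names are assumptions:

    # A grid as a data structure: rows are items, columns are the shared scale.
    grid = {
        "scale": ["strongly disagree", "disagree", "neither", "agree",
                  "strongly agree"],
        "items": [
            "Other people see me as a hockey player",
            "Playing hockey has made me more knowledgeable",
        ],
    }

    def grid_cells(grid):
        """Yield one (item, option) radio button per cell of the matrix."""
        for item in grid["items"]:
            for option in grid["scale"]:
                yield item, option
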
9.6.7 Labelling of scalar questions

The researcher should think about how to label the response categories: only the endpoints (as in Figure 9.15) or fully labelled, with or without numbers, and with or without headers per item. Response quality tends to be better with fully labelled scales. In the case of many response options, labelling the endpoints and providing numbers for every option is the best alternative. In Figure 9.16 an example is given of a matrix or grid question with a header for each item. Respondents sometimes complain that if they scroll to fill out the remainder of a grid or matrix question (in long grids or matrixes with questions that 'fall' off the screen) they no longer see the header. Adding headers per item helps to improve the clarity of the answer options, but on the other hand makes the layout a little crowded due to the increase in text on a single screen. One could also choose to place a header every five or ten items (so that there is always a header on the screen when the page needs scrolling). Research shows no effect of headers on response quality (Toepoel et al., 2009).

Figure 9.16 Matrix or grid question with headers for every item (in Dutch)

9.7 Visual design things you need to know

Toepoel and Dillman (2012) have written an extensive review of how visual design affects respondents' answers. I advise every researcher using online surveys to read this book chapter to understand how the programming of a survey can affect the answers obtained. Toepoel and Dillman propose the following guidelines for effective programming of survey questions:⁴

1. The size of the answer box should match the size of the answer desired.
2. Use visuals to help respondents in interpreting a question.
3. Make sure every (substantive) answer option gets the same visual emphasis.
4. Place ordinal scales consistently in a decremental or incremental order.
5. Present ordinal scales with radio buttons, in a linear format, with even spacing of response options, so that the graphical language conveying the scale is clear to respondents.
6. Make sure that the visual midpoint of a scale coincides with the conceptual midpoint.
7. If you present multiple items per screen, be aware that correlations might be higher between items, especially in polar point scales.
8. Use fully labelled scales. If this is not desirable, for example, in the case of a mixed-mode survey involving the telephone, add numbers starting with 1 to the polar point scale.
9. Use a logical order of response options (e.g. a progression) and be aware that respondents extract meaning from that order.
10. Preferably, present answer options randomly, to avoid order effects.
11. Place instructions right in front of the answer options (within the 'foveal view') and make sure respondents do not have to put effort into reading them.
12. Avoid using gratuitous visual language like pictures, numbers and colours unnecessary for the correct interpretation of questions.
13. When you want to compare results from different studies, make sure respondents get the same (visual) stimulus.

⁴ Read their book chapter for the argument why.

Researchers using surveys should keep in mind that the following question elements account for variance in survey responding:

1. (Size of) answer space
2. Spacing
3. Ordering
4. Colour
5. Numbers
6. Pictures, sounds and videos
7. Labels
8. Symbols/signals
9. Instructions
10. Visibility

Use these elements with care!

9.8 Question and instruction text

The question text should be programmed before the answer options, as in normal reading order. It goes without saying that the question text should be clear and unambiguous about what is asked of the respondent. Sometimes it can be helpful to put some emphasis on certain words, for example by using bold or italics. Never underline words to emphasize them - the respondent might think it is a hyperlink that is not working. Sometimes you want to explain what a certain word means, for example, what you consider to be a vacation (minimum number of nights, does a stay at a friend's house count as a vacation?). You can place a hyperlink on the word vacation with a pop-up screen explaining the concept. Note, however, that not every respondent will click on the hyperlink and therefore not every respondent will treat the concept in the same manner. In addition, the usability of pop-up screens depends on personal settings and devices used.

Some people place instruction text near the question text, others place it near the answer categories. If you place the instruction text beneath the answer categories (as some people do) there is a fairly high chance that respondents will forget to read it; it is therefore always better to place it between the question text and the answer categories. Make sure you place the same emphasis on the instruction text throughout the survey. For example, always use a blank line between the question text and the instruction text and put the instruction text in italics (and not between brackets).
9.9 Routing

The rules that control the flow of the survey go by various names: skips, branching, filters, routing, etc. One of the major advantages of online surveys is that they can lead the respondent through the questionnaire. Decisions on which question should be answered (often based on prior responses) can be made by the researcher or programmer, and can be taken out of the hands of the respondent. There are many different types of routing and there are many different ways to program them. The options for and ease of adding routing to the survey are often among the most important attributes of survey software: the more complex (and often expensive) survey software allows you to program the most complex algorithms, for example, based on a series of prior responses.

There are basically two approaches to routing, and you really need to evaluate your survey software to find out which one the package uses and whether this is the most convenient for you. The first approach is linear programming ('go to'): the selection of a response option triggers the system to display the next applicable question. The second approach is more object-related (if this, then that, otherwise that). The latter approach offers more flexibility and is, in my humble view, always preferable to linear programming. Although in 99 per cent of cases the linear way of programming is sufficient, you will always find that there is one question that cannot be programmed because it is too complex for linear programming. A minimal sketch of the object-related approach follows the list of routing types below.

Different types of routing:

Simple: if answer A to Q1, go to Q2; if answer B to Q1, go to Q3.
Disjunctive: if answer A OR B to Q1, go to Q2, otherwise go to Q3.
Conjunctive: if answer A AND B to Q1, go to Q2, otherwise go to Q3.
Boolean: if answer A AND NOT B to Q1, go to Q2, otherwise go to Q3.
Set: if at least three answers to Q1, go to Q2.
Inter-question: if answer A to Q1 and answer A to Q2, go to Q3, otherwise go to Q4.
Inter-survey: if the answer in a previous wave was A, go to Q1.
Range: if age > 18, go to Q1, else go to Q2.

Couper (2008)
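As announced above, here is a minimal sketch in Python of the object-related approach, covering a few of the routing types listed; the question ids and predicates are illustrative:

    def route(answers):
        """Object-related routing ('if this, then that, otherwise that').
        'answers' maps question ids to the answers given so far."""
        if answers.get("Q1") in ("A", "B"):          # disjunctive: A OR B to Q1
            return "Q2"
        if answers.get("Q1") == "A" and answers.get("Q2") == "A":  # inter-question
            return "Q3"
        if answers.get("age", 0) > 18:               # range: if age > 18
            return "Q5"
        return "Q4"                                  # the 'otherwise' branch

    print(route({"Q1": "C", "age": 34}))  # -> Q5
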
Always test all paths of the survey! A simple programming mistake can cause the respondent to get stuck in a maze without end or to jump immediately to the end of the survey!

Routing can be implemented in scrolling surveys, but it needs to be very simple in order not to confuse respondents. While respondents are not aware of routing in page-by-page designs (it is done in the 'back office' and not visible to respondents), in scrolling designs new questions can pop up or disappear because of a previous response, and this might confuse respondents. Note that in both designs, routing affects elements such as question numbering and progress indicators, and researchers should think carefully about how to present these elements to respondents if there are major skips in the questionnaire.

9.10 Empty answers

Should the respondent be allowed to leave questions empty? An attribute of Web surveys (relative to, for example, paper surveys) is the ability to make it necessary to provide an answer before going forward in the survey (mandatory fields). But should you make use of this possibility? Or is it more ethical always to make it possible for a respondent to decline to give a response? I leave that to the knowledge and intuition of the researcher.

9.11 Going back in the survey

In addition to leaving answers blank, should you allow respondents to go back in the survey (provide a back button)? For some questions it is important that this option is not provided, since knowledge gained later in the survey should not affect the answer to a prior question. On the other hand, follow-up questions may bring to mind new insights that the respondent did not initially take into account. Again, I leave this decision to the knowledge and intuition of the researcher. Decisions can be made on a question-by-question basis (in good survey software), but respondents can get confused if the option does not appear consistently throughout the survey.

9.12 Error messages

Error messages can be divided into hard and soft checks. Hard checks make it impossible for the respondent to proceed without submitting or changing a response. Soft checks are warnings that can be ignored by the respondent. For example: 'You said you watched TV for 20 hours a day. Is that correct? Please go back and change your answer.' Personally, I always like to give the respondent the opportunity to go further without having to give an explicit answer (what would that answer mean if they were forced to give it?). But in the case of edit checks it can be helpful to program a hard check, for example, if people say they are 0 or 777 years old. If you program an error message, make sure it is polite and friendly, because people can get frustrated by error messages and abandon the survey.
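The distinction could be programmed as follows; a minimal sketch using the TV example above, with the 20-hour threshold as an illustrative assumption:

    def check_tv_hours(hours):
        """Edit checks for 'How many hours a day do you watch TV?'.
        Returns (kind, message) for a failed check, or None when the value passes."""
        if not 0 <= hours <= 24:
            # Hard check: impossible value; the respondent must change the answer.
            return "hard", "A day has at most 24 hours. Please adjust your answer."
        if hours >= 20:
            # Soft check: unlikely but possible; the respondent may ignore it.
            return "soft", ("You said you watched TV for %d hours a day. "
                            "Is that correct?" % hours)
        return None

    print(check_tv_hours(20))  # -> soft warning; the respondent may proceed
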
9.13 Don't know and won't tell

'Don't know' and 'won't tell' are non-substantive answers that many researchers want to avoid. For some questions it is good to add these answer types, since they may prevent respondents from abandoning the survey. Questions in which you should include a 'don't know' and 'won't tell' option are those relating to political preference, income and ethnicity. Some people simply do not like to give this information, and therefore they should be given the opportunity to proceed in the survey without giving a substantive answer. Other questions that are sensitive or may lead to socially desirable as opposed to true answers should also come with 'don't know' and/or 'won't tell' options. Of course you could also give the respondent the opportunity to leave the answer empty, but a 'don't know' or 'won't tell' option gives you a little more information (the respondent did not forget to give an answer but did not know the answer or did not want to give it). It is best to visually separate the substantive answer options from the non-substantive answer options ('don't know' and 'won't tell') by adding extra space between the response options or by using separate buttons for 'don't know' and 'won't tell'.

9.14 Interactive features

One of the main advantages of Web surveys over other modes of administration is the use of interactive features, visual communication and multimedia. One could easily add animation, a video, sound, etc. This holds much appeal for researchers. Unfortunately, we still have to deal with bandwidth, personal settings and plugin requirements. Therefore, the interactive features will not work for every respondent, nor will they be seen by every respondent in the same way (due to personal settings). One should note that with the addition of new (interactive) elements, the meaning of a survey question to respondents may change. Therefore, one should use these features only when they add value to the survey, not simply for the sake of doing so. For example, Toepoel and Couper (2011) have demonstrated that the use of pictures can severely change the meaning of a question. The placement of a high frequency picture resulted in higher frequencies in answer distributions than showing no picture or a low frequency picture. The effect was also apparent in follow-up questions.

9.15 Progress indicator

The placement of a progress indicator is an important dilemma I want to discuss here. Many respondents want to know how far along they are in the survey. Unfortunately, the use of progress indicators may sometimes increase the number of break-offs, because people want to abandon the survey if they think the remainder is too long. In addition, routing makes it difficult to show the right progress in the survey. Sometimes people can skip from 5 per cent to 40 per cent after a single question, because the answer to that question means they do not have to answer some follow-up questions. That means that people are actually further along in the survey than the progress bar can indicate. Some people might therefore unnecessarily abandon the survey. Research on progress indicators shows mixed results. It might be better to indicate the survey length at the beginning of the survey with a message saying: 'This survey will take about 10 minutes to complete.'

Figure 9.17 Progress indicator

9.16 Randomizations

Randomization is another major advantage of online surveys. One important reason to randomize is to control for measurement error, for example, context or order effects. Answer options can be randomized, but questions and entire sections can also be randomly offered to respondents. In addition, separate modules can be randomly assigned to different respondents to reduce the burden of the response task. The ease with which you can implement randomization again depends on the survey software and the complexity it can handle. It is important to record not only the answer provided by the respondent, but also the order in which the elements were presented, for analytical purposes. Note that non-substantive answers (such as 'don't know' or 'none of the above') should not be randomized. They should be placed at the end of the list.

9.17 Fills

Fills are variable question texts that are often based on prior responses. For example, if the answer to Q1 (What is the name of your oldest child?) is 'Aitor', then Q2 can be adapted to: 'How old is Aitor?' Fills are a way to personalize and customize the survey. It is very important to test all the fills to see that they work (e.g. that they do not result in 'How old is ?' or 'How old is @child1?').
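A minimal sketch of a fill mechanism in Python. The '@child1' placeholder syntax is an assumption for illustration; survey packages each have their own fill syntax:

    def fill(template, answers):
        """Substitute @-placeholders in a question text with earlier answers."""
        for key, value in answers.items():
            template = template.replace("@" + key, str(value))
        return template

    print(fill("How old is @child1?", {"child1": "Aitor"}))  # -> How old is Aitor?
    # Test every fill: with a missing prior answer the respondent would
    # literally see 'How old is @child1?' (or 'How old is ?') on screen.
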
9.18 Numbering

Numbering of questions is an important dilemma when designing a survey. Numbering helps to distinguish one question from the next and helps in determining the length of the survey. At the same time, numbering may be unwise in the case of routing (a respondent goes from Q1 to Q10 and does not know why). The decision to add numbers depends on the specific survey.

9.19 Dependent interviewing

Dependent interviewing is the practice of using the results from previous surveys in subsequent waves or questions. This can be done to improve accuracy and reduce respondent burden. Information about each respondent known prior to the survey can be used to determine question routing and wording. In this way, the survey can be tailored to the respondent's situation. Prior information can be used for checks ('In the previous question you said you had two children, but you only report on one child. Please go back and verify your answer'). Routing can be adapted (you know that the respondent does not have children, so he or she does not have to be bothered with questions about children) and text fills can personalize question wording. You can also use dependent interviewing to make sure that the respondent does not forget any information. Some people expect dependent interviewing to be used ('I reported my income in a previous questionnaire!'), but others may be surprised when confronted with a previous answer and might have some confidentiality issues. You need to make sure that you present respondents with untouched (raw) data, otherwise they might be confronted with an answer they had not given (e.g. when you have capped outliers). Note that using dependent interviewing can vastly reduce survey length and has some methodological advantages (e.g. improved accuracy), but it can result in under-reporting as well.

Questions for which dependent interviewing can be used:

1. Items that are unlikely to change (gender, birth country).
2. Items where changes in response are possible but unlikely or infrequent. Respondents can be asked to verify that the situation is as it was in the previous survey.
3. Items not applicable to specific subgroups (questions about work for retired people, questions about children for people without children, etc.).
4. Items that need accuracy and contain attributes that are easy to forget (e.g. income questions).
5. Items that can benefit from personalization (e.g. children's names).

9.20 Paradata

The paradata of a survey are data about the process by which the survey data were collected. Examples of paradata are IP addresses, browsers, the device the survey was completed on, the day the survey was done, the duration of the survey, mouse clicks, keystroke files, etc. Respondents might sometimes be unaware of the existence or use of paradata (they did not provide that information actively; paradata are captured from their participation without their knowing it). Paradata can be really useful in estimating survey quality, though. For example, the duration of the questionnaire can be used to see how seriously the respondent filled out the survey. Respondents who spend very little time on a long grid can be deleted when it is impossible to read the question and answer options and provide an answer in the time recorded. Note, however, that response times can become very lengthy if the respondent is doing something else while completing the survey (e.g. watching TV), or got a cup of coffee halfway through. Response times tend to be skewed, and you should use common sense if you want to delete people who have extremely low or high response times. Paradata can also be used to identify difficult questions, questions that lead to partial dropout (break-offs), or browsers or devices in which the survey had problems. Note that in most survey software, information about response times is only submitted when the 'submit' or 'further' button is pressed; hence response times per question are difficult to get when using grids or a scrollable design.
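A sketch in Python of the common-sense rule above: flag impossibly short response times, but do not automatically delete long ones. The times and the 30-second minimum are hypothetical:

    times = [12, 95, 110, 130, 140, 150, 160, 175, 1900]  # seconds per respondent

    MIN_PLAUSIBLE = 30  # assumption: reading plus answering cannot be faster

    def flag_speeders(times, minimum=MIN_PLAUSIBLE):
        """Flag impossibly fast completion times for review, but leave long
        times alone: a 1900-second outlier may just be a coffee break."""
        return [t for t in times if t < minimum]

    print(flag_speeders(times))  # -> [12]
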
9.21 Pretesting

Always pretest your survey instrument! Make sure every routing path is tested to ensure it works properly. In addition, test with different browser settings and on mixed devices, and let other people test the survey instrument to see if it is understandable. You need to plan sufficient time in the survey project for testing. Some researchers devote a lot of time and energy to the development of the questionnaire, but do not allow enough time for testing. This can result in a non-workable survey and ruined data. Testing is therefore as important as designing the survey!

Summary

Survey design choices are often based on random decision making on the part of the researcher or programmer. These design choices can significantly affect respondents' answers, however. You need to think really carefully about the use of a scrolling or paging design. In addition, you should choose the right answer format for the question. Visual design issues should be taken into account while programming a survey. In addition, you should deliberate on empty answers, back buttons, error messages, don't know options, interactive features, progress indicators, randomizations, dependent interviewing (fills), numbering, etc. There is little evidence that the general survey layout (font, colour, logos, etc.) affects respondents' answers. Make sure you pretest your survey instrument carefully before going online. Paradata can give you information about the quality of survey responses. Note that many respondents are not aware of the use or registration of paradata, however, and may not want you to register that type of information.

Key terms

Branching, Check-all-that-apply, Checkboxes, Dependent interviewing, Don't know, Drop boxes, Dropdown menu, Edit check, Error messages, Fills, Filters, Foveal view, Grid, Hard checks, Hyperlink, Linear programming, Matrix question, Object-related programming, Paging design, Paradata, Primacy effect, Progress indicator, Radio buttons, Routing, Scrolling design, Skips, Slider bar, Soft checks, Text areas, Text boxes, Visual analogue scale, Won't tell

Exercises

1. What is the difference between hard checks and soft checks?
2. Why is it difficult to predict how a questionnaire will be seen on the respondent's computer?
3. What is the difference between a paging and a scrolling design?
4. What is the difference between linear programming and object-related programming?
5. Elaborate on the use of mandatory questions.
6. Discuss a question where fills can be appropriate.
7. What is the difficulty with the use of a progress indicator?
8. Discuss the relation between substantive and non-substantive answers and the visual and conceptual midpoint of a scale. Draw an example. Draw a vertically aligned scale, a horizontally aligned scale and a multiple banked scale.

References

Christian, L.M. and Dillman, D.A. (2004) 'The influence of graphical and symbolic language manipulations to self-administered questions', Public Opinion Quarterly, 68: 57-80.

Christian, L.M., Dillman, D.A. and Smyth, J.D. (2007) 'Helping respondents get it right the first time: The influence of words, symbols, and graphics in web surveys', Public Opinion Quarterly, 71: 113-25.

Couper, M.P. (2008) Designing Effective Web Surveys. Cambridge: Cambridge University Press. (For more information about designing online surveys.)

Couper, M.P., Tourangeau, R. and Conrad, F.G. (2004) 'What they see is what we get: Response options for web surveys', Social Science Computer Review, 22: 111-27.

Couper, M.P., Traugott, M.W. and Lamias, M.J. (2001) 'Web survey design and administration', Public Opinion Quarterly, 65: 230-53.

Dillman, D.A. (2007) Mail and Internet Surveys: The Tailored Design Method. Hoboken, NJ: John Wiley and Sons, Inc.

Funke, F. (2013) 'Slide to ruin data: How slider scales may negatively affect data quality and what to do about it'. Retrieved from: http://research.frederikfunke.net

Lozar Manfreda, K., Batagelj, Z. and Vehovar, V. (2002) 'Design of web survey questionnaires: Three basic experiments', Journal of Computer-Mediated Communication, 7(3).
Schwarz, N. and Sudman, S. (1996) Answering Questions. San Francisco, CA: Jossey-Bass Publishers.

Smith, T.W. (1995) 'Little things matter: A sampler of how differences in questionnaire format can affect survey responses', Proceedings of the American Statistical Association, Survey Research Methods Section, 1046-51. Retrieved 10 August 2015, from www.amstat.org/sections/srms/Proceedings/papers/1995_182.pdf

Smyth, J.D., Dillman, D.A., Christian, L.M. and Stern, M.J. (2006) 'Comparing check-all and forced-choice formats in web surveys', Public Opinion Quarterly, 70: 66-77.

Sudman, S., Bradburn, N.M. and Schwarz, N. (1996) Thinking About Answers. San Francisco, CA: Jossey-Bass Publishers.

Toepoel, V. and Couper, M.P. (2011) 'Can verbal instructions counteract visual context effects in web surveys?', Public Opinion Quarterly, 75: 1-18.

Toepoel, V. and Dillman, D.A. (2012) 'How visual design affects the interpretability of survey questions', in Das, M., Ester, P. and Kaczmirek, L. (eds), Social and Behavioral Research and the Internet: Advances in Applied Methods and Research Strategies. New York: Routledge. (For more information about visual design.)

Toepoel, V., Das, M. and van Soest, A. (2009) 'Design of web questionnaires: The effect of number of items per screen', Field Methods, 21(2): 200-13.