Questions for review

• What is mixed methods research?

The argument against mixed methods research
• What are the main elements of the embedded methods and paradigm arguments in terms of their implications for the possibility of mixed methods research?

Two versions of the debate about quantitative and qualitative research
• What are the main elements of the technical and epistemological versions of the debate about quantitative and qualitative research?
• What are the implications of these two versions of the debate for mixed methods research?

Approaches to mixed methods research
• What are the main differences between Hammersley's and Morgan's classifications of mixed methods research?
• What are the chief ways in which quantitative and qualitative research have been combined?
• Why might it be useful to distinguish between them?
• What is the logic of triangulation?
• Traditionally, qualitative research has been depicted as having a preparatory role in relation to quantitative research. To what extent do the different forms of mixed methods research reflect this view?

Reflections on mixed methods research
• Why has mixed methods research become more prominent?
• Is mixed methods research necessarily superior to single strategy research?
• To what extent does the rise of mixed methods research suggest that the paradigm wars are over?

Online Resource Centre
http://www.oxfordtextbooks.co.uk/orc/brymansrm3e/
Visit the Online Resource Centre that accompanies this book to enrich your understanding of mixed methods research. Consult web links, test yourself using multiple choice questions, and gain further guidance and inspiration from the Student Researcher's Toolkit.

The Internet as object and method of data collection

Chapter outline
Introduction
The Internet as object of analysis
Using the Internet to collect data from individuals
An ethnography of the Internet?
Qualitative research using online focus groups
Qualitative research using online personal interviews
Online social surveys
Email surveys
Web surveys
Mixing modes of survey administration
Sampling issues
Overview
Ethical considerations in Internet research
Key points
Questions for review

Chapter guide
This chapter is concerned with the ways in which the Internet can be used in research. Most readers will be familiar with the Internet as a means of searching for material on topics for essays, for further reading, and for various other uses. It can be very valuable for such purposes, but this kind of activity is not the focus of this chapter. Instead, I will be concerned with the ways in which Internet websites can be used as objects of analysis in their own right and with the ways the Internet can be used as a means of collecting data, much like the post and the telephone. The main topics covered in this chapter are:
• World Wide Web sites or pages as objects of analysis;
• ethnographic study of the Internet;
• qualitative research using online focus groups;
• qualitative research using online personal interviews;
• online social surveys.
In the process of presenting these topics, I will draw comparisons with traditional methods wherever appropriate.

Introduction

There can be little doubt that the Internet and online communication have proliferated since the early 1990s, and it would be surprising if this boom did not have implications for social research methods. Between January and April 2006, 13.9 million British households could access the Internet from home, representing 57 per cent of all households.
This represented an increase of 2.9 million households since 2002 (a 26 per cent increase) and an increase of 0.6 million since the previous year (a 5 per cent increase) (http://www.statistics.gov.uk/pdfdir/inta0806.pdf (accessed on 12 June 2007)). Use of the Internet is particularly high among university students, many of whom have been brought up with the technology and for whom it is a very natural medium. A survey of Internet use among American college students found particularly high rates of use in comparison with the population at large, with 86 per cent of students being regular users compared with 59 per cent of the population (Jones 2002). Nearly 80 per cent of university students felt that the Internet had had a positive impact on their college experience. The situation in the UK is likely to be similar to that found in the USA. Given this background, it is plausible that many students will be drawn to the Internet as an environment within which to conduct social research.

The ongoing and burgeoning nature of the Internet and online communication makes it difficult to characterize this field, and its impact on social research, in any straightforward way. In this chapter, I will be concerned with the following areas of e-research:
• World Wide Web sites or pages as objects of analysis;
• using the World Wide Web or online communications as a means of collecting data from individuals and organizations.
In addition, I will address some of the broader implications and ramifications of the Internet for conducting social research. While the choice of these two areas of online research and their classification is somewhat arbitrary, and they tend to shade into each other, they provide the basis for a reasonably comprehensive overview of a highly fluid field.

This chapter does not consider the use of the Internet as an information resource or as a means of finding references. Some suggestions about the latter can be found in Chapter 4. The Internet is a vast information resource and has too many possible forms to be covered in a single chapter. Moreover, I advise caution about the use of such materials; while the Internet is a cornucopia of data and advice, it also contains a great deal of misleading and downright incorrect information. Healthy scepticism should guide your searches.

The Internet as object of analysis

Websites and webpages are potential sources of data in their own right and can be regarded as potential fodder for both quantitative and qualitative content analysis of the kind discussed in Chapters 12 and 21. Indeed, in the latter chapter, there is a section on 'virtual documents' that draws attention to websites as a form of document. Aldridge (1998), for example, examined both written consumer guides to personal finance published in the UK and several Internet websites. Through an analysis of such documents, Aldridge extracted a number of themes that he saw as part of the 'promotional culture' in which we live. Examples of such themes are:
1. the necessity for members of the public to assume responsibility for their personal finances, particularly in the light of the reductions in welfare provision; and
2. the depiction of the professional-client relationship as unthreatening and, relatedly, as one in which the client may legitimately ask informed questions.
However, there are clearly difficulties with using websites as sources of data in this way.
Four issues to consider when assessing the quality of documents were mentioned in Chapter 21. The following additional observations are also worth considering.
1. You will need to find the websites relating to your research questions. This is likely to mean trawling the web using a search engine such as Google. However, any search engine provides access to only a portion of the web. This is why Dorsey et al. (2004) used several search engines to find websites that promote ecologically sensitive tourism (see Research in focus 26.1), and even then there is evidence that the combined use of several search engines will allow access to only just under half of the total population of websites (see Research in focus 26.2). While this means that the use of several search engines is highly desirable when seeking out appropriate websites, it has to be recognized not only that they will allow access to just a portion of the available websites but also that the websites they return may be a biased sample.
2. Related to this point, seeking out websites on a topic can only be as good as the keywords that are employed in the search process. The researcher has to be patient enough to try as many relevant keywords as possible (and combinations of them linked by operators such as AND and OR—known as Boolean searches) and may be well advised to ask other people (librarians, supervisors, etc.) whether the most appropriate ones are being used.
3. New websites are continually appearing and others disappearing. Researchers basing their investigations on websites need to recognize that their analyses may be based on websites that no longer exist and that new ones may have appeared since data collection was terminated.
4. Websites are also continually changing, so that an analysis may be based upon at least some websites that have been quite considerably updated.
5. The analysis of websites and webpages is a new field that is very much in flux. New approaches are being developed at a rapid rate. Some draw on ways of interpreting documents that were covered in Chapters 20 and 21, such as discourse analysis and qualitative content analysis. Others have been developed specifically in relation to the web, such as the examination of hyperlinks between websites and their significance (Schneider and Foot 2004).

Research in focus 26.1
Dorsey et al. (2004) note that a growing interest in environmental issues in the West has been associated with a growth in 'ecotourism' and in cultural tourism. The former term denotes tourism to remote areas that is ecologically sensitive; cultural tourism reflects an interest in people and their cultures in relation to their ecosystems. They argue that the Internet has played a significant role in promoting these newer forms of tourism and were interested in the ways in which they are promoted on websites. Using several search engines, they searched for such websites. They emerged with seven websites that met their criteria for inclusion: well-established companies; a claim to offer 'sustainable' tours; culture emphasized for at least some of the tours; developing societies as the main destinations; plenty and a variety of information, including photographs; and a variety of destinations advertised. Dorsey et al. submitted the websites to a detailed narrative analysis (see Key concept 22.4) guided by two research questions. First, do the companies advertising these tours on their websites do so in a manner consistent with the discourse of ecotourism and sustainable development?
Second, does the online strategy of advertising these tours differ from traditional forms? Regarding the first research question, the researchers found that only two of the seven websites represented their tours in a manner consistent with the discourse of ecotourism and sustainable development. The others missed key elements of the discourse. Moreover, the authors found that most websites 'do not discuss the principles of sustainability and only rarely refer to the environmental dynamics of the communities and landscapes that are visited' (Dorsey et al. 2004: 774). As regards the second research question, the researchers found that there was little difference between online advertising of such tourism and what had been found by previous researchers who had examined the advertising of ecotourism in traditional media. This is a study that has produced some interesting findings from an examination of websites. It provides useful findings for tourism researchers and for those with an interest in advertising and in the marketing of tourist destinations. It also provides an interesting use of previous findings, as reported in books and journals: the researchers did not need to conduct their own investigation of the advertising of ecotourism in traditional media in order to forge a comparison with their own findings regarding online advertising, because they were able to employ previous findings generated by other researchers to do so.

Research in focus 26.2
Ho et al. (2002) were interested in 'alternative websites' in Singapore against the backcloth of a country whose government has been concerned to facilitate the spread of online access as a means of building a knowledge-based economy capable of participating fully in a global economic order. The Singapore government is depicted as concerned to create a knowledge-based workforce with the skills and mindset capable of responding to and capitalizing upon the diffusion of new technologies. Alternative websites are in a sense a paradox, because they have been facilitated by the spread of Internet access but at the same time challenge the status quo. Ho et al. chose to emphasize websites concerned with politics, religion, and alternative sexuality. The research was conducted over a four-month period and employed five search engines (Yahoo, AltaVista, Google, Infoseek, and WebCrawler) to trawl for suitable sites. The team of researchers used for each website a two-page report sheet to record information about:

(a) web-content in the form of 'alternative culture,' via recourse to complaints, criticisms, promotion of alternative values, practices and lifestyles; (b) the target audience: whether the site targets its own community or 'outsiders'; and (c) website characteristics which included guest book entries, links, sponsors, interactive functions, and evidence of Web-rings (clusters of inter-linked sites representing a general theme). (Ho et al. 2002: 135)

On the basis of their analyses, the authors proceeded to classify the different types of alternative website. For example, some were concerned with civil rights and used the Singapore constitution or general civil rights issues as the basis for advancing their position. Another type was made up of websites that sought to reduce or even purge censorship. Thus, while the Internet is often associated with technologies of surveillance and control, it also contains spaces in which surveillance and control can be resisted.
Most researchers who use documents as the basis for their work have to confront the issue that it is difficult to determine the universe or population from which they are sampling. The problems identified here and in Chapter 21 are therefore not unique to websites. However, the rapid growth and speed of change of the web accentuate these kinds of problems for social researchers, who are likely to feel that the experience is like trying to hit a target that not only continually moves but is in a constant state of metamorphosis. The crucial issue is to be sensitive to the limitations of the use of websites as material that can be content analysed, as well as to the opportunities they offer. Employing both printed and website materials, as Aldridge (1998) did, has the potential to bring out interesting contrasts in such sources and can also provide the basis for cross-validating sources.

It was noted in Chapters 17 and 21 that there has been a growing interest in visual materials. The use of images in websites is also potentially interesting. Crook and Light (2002) analysed the photographs in ten university prospectuses. The authors note that the images that accompany departmental entries often include photographs of students apparently studying, but that the students are rarely shown in the archetypal formal contexts of university learning, such as lectures or private study. Instead, they are usually shown in 'social' forms of learning where they are also active, engaged, and frequently out of doors. The authors argue that these less typical learning contexts are chosen because of their capacity to connect with familiar routines for many applicants.

In addition, emails and other forms of Internet-based communications (such as listservs, discussion groups, and chat rooms) have been used as objects of analysis. For their study of online self-help and social support groups in the UK, Nettleton et al. (2002) examined online interactions in newsgroups, discussion lists, chat rooms, and so on. In a study of the use of email in two organizations, Brown and Lightfoot (2002) found that online communication was used in a manner that was strongly influenced by pre-existing, non-online forms of communication (such as memos), so that previous ways of working were not displaced. In addition, they note that email was often employed as a means of establishing who is responsible (and not responsible) for certain actions. As such, the internal politics of organizations, in terms of establishing personal responsibility, loomed large in the use of email.

It is also important to bear in mind the four quality criteria recommended by Scott (1990) in connection with documents (see Chapter 21). Scott's suggestions invite us to consider quite why a website is constructed. Why is it there at all? Is it there for commercial reasons? Does it have an axe to grind? In other words, we should be no less sceptical about websites than about any other kind of document.

One final point to register is that, just like most documents, websites can be subjected to both qualitative and quantitative forms of analysis. The examples in Research in focus 26.1 and 26.2 involve a qualitative approach to the analysis, but it is worth looking back to Research in focus 12.3, which outlines a study that conducted a quantitative content analysis of postings to cancer support websites.

Tips and skills
There is a growing practice in academic work of including, when you refer to a website, the date on which you consulted it.
This convention is very much associated with the fact that websites often disappear and frequently change, so that, if subsequent researchers wanted to follow up your findings, or even to check on them, they may find that the websites are no longer there or that they have changed. Citing the date you accessed or consulted a website may help to relieve any anxieties about someone not finding a website you have referred to, or finding that it has changed (e.g. http://www.oxfordtextbooks.co.uk/orc/brymansrm3e/ (accessed on 3 July 2007)). This does mean, however, that you will have to keep a running record of the dates on which you consulted the websites to which you refer.

Using the Internet to collect data from individuals

In this section, I examine research methods that entail the use of either the web or online communications, such as email, as a platform for collecting data from individuals. At the time of writing, the bulk of the discussion concerned with this issue has emphasized four main areas:
1. an ethnography of the Internet;
2. qualitative research using online focus groups;
3. qualitative research using online personal interviews;
4. online social surveys.
These types of Internet-based research method do not exhaust the full range of possibilities, but they do represent recurring emphases in the emerging literature on this subject. All of them offer certain advantages over their traditional counterparts because:
• they are usually more economical in terms of time and money;
• they can reach large numbers of people very easily;
• distance is no problem, since the research participant only has to be accessible by computer—it does not matter whether he or she is in the same building or across the world;
• data can be collected and collated very quickly.
The chief general disadvantages tend to revolve around the following issues:
• access to the Internet is still nowhere near universal, so that certain people are likely to be inaccessible;
• invitations to take part in research may be viewed as just another nuisance email;
• loss of the personal touch—lack of rapport between interviewer and interviewee, and an inability to pick up visual or auditory cues;
• concerns among research participants about the confidentiality of replies at a time of widespread anxiety about fraud and hackers.
More specific balance sheets of advantages and disadvantages relating to some of the individual e-research methods will be covered below. There are two crucial distinctions that should be borne in mind when examining Internet-based research methods:
1. There is a distinction between web-based and communication-based methods. The former is a research method whereby data are collected via the web, for example, a questionnaire that forms a webpage and that the respondent then completes. A communication-based research method is one where email or a similar communication medium is the platform from which the data-collection instrument is launched.
2. There is a distinction between synchronous and asynchronous methods of data collection. The former occur in real time. An example would be an interview in which an online interviewer asks a question and the respondent, who is also online, replies immediately, as in a chat room. An asynchronous method is not in real time, so that there is no immediate response from the respondent, who is unlikely to be online at the same time as the interviewer (or, if the respondent is online, he or she is extremely unlikely to be in a position to reply immediately).
An example would be an interview question posed by the interviewer in an email that is opened and answered by the respondent some time later, perhaps days or weeks later. With these distinctions in mind, I can now move on to examine the four main forms of online research method previously identified.

An ethnography of the Internet?

Ethnography may not seem an obvious method for collecting data on Internet use. The image of the ethnographer is of someone who visits places or locations such as communities and organizations. The Internet seems to go against the grain of ethnography, in that it seems a decidedly placeless space. In fact, as Hine (2000) has observed, conceiving of the Internet as a place—a cyberspace—has been one strategy for an ethnographic study of the Internet, and from this it is just a short journey to the examination of communities in the form of online communities or virtual communities. In this way, our concepts of place and space that are constitutive of the way in which we operate in the real world are grafted onto the Internet and its use. A further issue is that, as noted in Chapter 17, ethnography entails participant observation; but in cyberspace, what is the ethnographer observing, and in what is he or she participating?

Markham's (1998) approach to an ethnography of life on the Internet involved interviews. The interviews followed a period of 'lurking' (reading but not participating) in computer-mediated communication forums like chat rooms and MUDs (multi-user domains). The interviews allowed synchronous questioning and answering; in other words, the asking and answering of questions were in real time, rather than the kind of questioning and answering that might occur via email, where a question might be answered several hours or days later. She used an interview guide, and the interviews lasted from one hour to over four hours. Such interviews are a very real challenge for both interviewer and interviewee, because neither party can pick up on visual cues (for example, puzzlement, anxiety) or auditory cues (sighs, groans). One of Markham's interests lay in the reality or otherwise of online experiences. This can be seen in the following brief online interview sequence (Markham is Annette):

Annette: 'How real are your experiences in the Internet?'
Skene: 'How real are experiences off the internet?'
(Markham 1998: 115)

In fact, Markham notes how her notion of 'real' was different from that of her interviewees. For Markham, 'real' or 'in real life' carried a connotation of genuineness or authenticity, but for her interviewees it was more to do with distinguishing experiences that occur offline. Indeed, Markham increasingly felt that her interviewees were questioning the validity of the dichotomous distinction between the real and the non-real, so far as online interaction was concerned. However, it is likely that these distinctions between life online and life offline will become less significant as younger people who are growing up with the Internet conduct large portions of their lives online. This development would have considerable implications for social researchers, since, for many research participants, the online world may become very naturalistic. An interesting question about this research is: in what sense is it an ethnography? At one level, Markham was simply an interviewer who used a semi-structured interview guide to elicit information about the world view of her correspondents.
At another level, she was indeed a participant in and observer of life online, although the life that she was participating in and observing was very much a product of her promptings, no matter how open the questions she asked and no matter how willing she was to allow her interviewees leeway in what they wanted to discuss. In much the same way that her interviewees were questioning the nature of reality, Markham's investigation invites us to question the nature of ethnography so far as research on the Internet is concerned.

Kendall (1999) was probably closer to the traditional concept of the ethnographer, in that she describes her research as comprising three years of online participant observation in a MUD, as well as face-to-face interviews and attendance at face-to-face gatherings. Such research approximates the conventional notion of ethnographic research in its use of several methods of data collection and its sense of participation in the lives of those being studied, as well as interviewing them. She was interested in issues to do with identity and the presentation of self online and, interestingly, found (among other things) that the participants 'privilege embodied experience over mediated experience' (Kendall 1999: 62). A further example of the use of ethnography in relation to the study of online worlds can be found in Research in focus 26.3, which shows how the study of online discussion groups can be revealing about patterns of social relationships online.

Miller and Slater's (2000) ethnography of Internet use takes an approach that is rather more redolent of traditional ethnography than Markham's (see Research in focus 26.4). Its location in a particular place (Trinidad), its use of several methods of gathering data, and its commitment to observation (e.g. in cybercafés) are probably for many people closer to the traditional meanings of ethnography. As the authors put it: 'For us an ethnography does include participating, which may mean going on a chat line for the eight hours that informants will remain online, or participating in a room full of people playing networked Quake . . .' (Miller and Slater 2000: 22). One of Miller and Slater's chief interests lay in the state of e-commerce in Trinidad. They found a situation in which there was considerable uncertainty at the time about how far to get into this business medium. There was recognition that the Internet should provide more than a means of advertising wares, but little agreement beyond that. Thus, the clothing firms they studied recognized that the Internet could provide more than just a catalogue on a monitor screen, but the nature of any kind of alternative or additional interface represented an area of some uncertainty. Generally, however, consumers did little of their purchasing of goods on the Internet. Indeed, a Trinidadian cultural trait of seeking out things that are free would seem to militate against the use of the Internet for online purchasing.

A feature that is striking about all these studies, from the point of view of anyone familiar with organizational behaviour research on technology, is the way in which they treat the Internet as a 'given' technology. Recent thinking on technology and work has preferred to view technologies as texts that have 'interpretive flexibility' (Grint and Woolgar 1997). This notion means that the researcher needs to approach any technology through an examination of both the principles inscribed into it and how it is interpreted by users.
Such a position is very much associated with the notion, previously encountered in Chapter 21, that the audiences of texts need to be the focus of attention as much as the texts themselves. Taking this position, Hine (2000: 9) locates the Internet 'as a product of culture: a technology that was produced by particular people with contextually situated goals and priorities. It is also a technology which is shaped by the ways in which it is marketed, taught and used'.

Research in focus 26.3
Kanayama (2003) describes her research on an online senior community in Japan as ethnographic and her own role as that of a 'participant observer'. She was interested in the nature of the virtual community that was formed out of an Internet group of senior citizens that had been founded in 1997. In the middle of 1999, there were 120 participants in the group, which is referred to as 'senior-ml'. Kanayama (2003: 274) describes her research approach as follows:

This study used participant observation to study all messages, which were posted on senior-ml from September 1999 to July 2000. I had participated in this group as a technical volunteer supporter for two years to help senior members use PCs and the Internet, and subscribed to senior-ml, as did most technical volunteers of this group... I took a position as a non-active participant in the participant observation of this study. Many members, including representatives of the group, knew of the researcher's personal background. I also conducted in-depth interviews with six female and seven male members with their agreement by telephone...

Kanayama was interested in the experiences of the participants and particularly in patterns of interaction and the construction of online social relationships. She found that members of this virtual community were able to operate well with the medium and that they were able to construct real social relationships in cyberspace. As such they were able to provide a supportive environment for each other.

Research in focus 26.4
Miller and Slater (2000) conducted ethnographic research on the use of the Internet in Trinidad. The first author had previously conducted research on the unfolding of modernity in Trinidad and drew on his prior research to help appreciate the context of the reception of the Internet in the country; Slater had previously conducted research on computer-mediated communication in the UK. This preamble is significant because it forms part of the authors' justification for describing their investigation as ethnographic. For Miller and Slater, an important component of ethnography is that it entails protracted involvement with the people one studies. In fact, the authors spent just five weeks in Trinidad for this study, so their prior experiences are invoked as a way of legitimizing the label 'ethnography'. Further, the authors consulted Internet websites on their return and interviewed Trinidadians in London and New York. They also maintained contact with informants after their return through email and 'chat'. Overall, the following were the main methods:
1. Interviews 'largely devoted to the study of the political economy of the Internet, including businesses, the ISPs [Internet service providers] and government officers' (2000: 22).
2. Hanging around 'in cybercafes watching people go online and chatting with them. We also interviewed them more formally' (2000: 22).
3. Exploring with friends how the Internet had become intertwined with their lives.
4. A house-to-house survey in the same four areas in which Miller had previously conducted a similar investigation (see Chapter 25) to ascertain levels of Internet usage.
5. In-depth interviews with some of those contacted through the survey.
Like Markham, Miller and Slater found that the worlds of the Internet and of everyday life beyond the Internet are highly intertwined.

Hine describes her approach as one of 'virtual ethnography' (see Research in focus 26.5), which has a deliberate dual meaning: it is both an ethnography of being online—that is, of the virtual—and a virtual ethnography—that is, not quite an ethnography in any of the term's conventional settings. In particular, a virtual ethnography requires getting away from the idea that an ethnography is in or of a place in any traditional sense. It is also an ethnography of a domain that infiltrates other spaces and times of its participants, so that its boundedness is problematic to participants and analysts alike. As regards the issue of the interpretative flexibility of the Internet, Hine shows that there was some general agreement about its purposes as a technology. However, these purposes were not inscribed in the technology in the way that a technological determinist position might imply, but had arisen in the course of the use of the Internet. Moreover, Hine argues that the nature and capacities of the Internet have not become totally stabilized in the minds of participants and that gradual increments of change are likely in response to particular needs and purposes of participants. Studies like these clearly invite us to consider the nature of the Internet as a domain for investigation, but they also invite us to consider the nature and the adaptability of our research methods. In the examples discussed in this section, the question of what is and is not ethnography is given a layer of complexity that adds to the considerations about this issue that were referred to in Chapter 20. But these studies are also cases of using Internet-based research methods to investigate Internet use. Future online ethnographic investigations of issues unrelated to the Internet will give a clearer indication of the possibilities that the method offers.

Research in focus 26.5
Hine's (2000) research was concerned with the trial in 1997 in Boston of a British nanny (Louise Woodward) for the murder of the child in her charge, as well as the aftermath of the trial. Hine's data-collection strategy included: searching out websites concerned with the case, which attracted a great deal of Internet interest; contacting web authors by email and asking a series of questions about their intentions, familiarity with the web, experiences, and so on; examining communication in newsgroups in which ten or more postings about the case had been made and posting a message in those groups; and contact with the official site that campaigned for Louise Woodward. In contacting web developers and newsgroup participants, Hine (2000: 78) writes:

I introduced myself as a researcher in communications who was looking at the specific case of Louise Woodward on the Internet. I explained that I was concerned with how people got interested in the case, where they got their information from, and what they thought of the quality of information on newsgroups and web pages... I offered people a promise of confidentiality and the chance to check my own credentials through my website.
Hine did not receive a very good response to the newsgroup postings, which may reflect a tendency noticed by other researchers for newsgroup, MUD, listserv, and other participants to be sceptical about the use of their cyberdomains for research and suspicious about researchers who contact them. In her examination of newsgroup communication, Hine employed an approach that was heavily influenced by discourse analysis (see Chapter 20)—for example, by showing the discursive moves through which participants sought to construe the authenticity or factual nature of their information.

Research in focus 26.6
Williams (2007) has conducted a great deal of research on cybercrime. His central research question is: 'what is the prevalence and nature of online deviant activity and how is it best regulated within online graphical communities?' (Williams 2007: 457). He describes his research into Cyberworlds, an online social space that incorporates graphical representations and is an outgrowth of MUDs, which are text-based. Over a period of six months, Williams carried out more or less daily participant observation of this online community. He alerted community members through posted messages that he was a researcher studying them, an act that provoked a variety of responses. On the basis of their expressed concerns, Williams then decided in each case whether or not to continue his observations. As he notes, the ethical aspects of this process are not straightforward, as people are joining and leaving these online environments all the time. Once he had satisfied himself regarding the ethical issues, he joined Cyberworlds by becoming a citizen and using an online persona. He notes that one aspect of this virtual participant observation was easier than in conventional ethnography: he could take notes and use other data-collection devices without affecting the ongoing interaction in any way, since members were not able to see his offline actions. One practical difficulty that was encountered related to the fact that Cyberworlds members are from a variety of countries, which, because of time differences, caused Williams problems relating to timing. In particular, the site is dominated by American members, who were coming online quite late in the day in terms of UK time. As a result, he says he was plagued with a nagging worry that he was missing something significant when he was not observing online. Eventually, he focused his observational activities on two 'worlds' that were well established. He describes his participant observation as entailing 'an overt but distanced approach ... allowing for minimal interference and wide visual scope for observation' (Williams 2007: 463). His interests began to focus more and more on the language used (including the use of emoticons) and on features such as turn-taking conventions. In addition, Williams carried out online focus groups with community members to gain an understanding of issues not amenable to observation. To recruit focus group participants, Williams sent out emails to over 1,000 individuals who were involved in relevant newsgroups. This procedure produced some concerned responses from newsgroup members who felt that their privacy had been violated. The focus groups were conducted asynchronously using a discussion list program. One interesting decision that Williams had to make was to choose what kind of virtual environment should be created as the 'venue' for the focus group (Stewart and Williams 2005).

Qualitative research using online focus groups
There is a crucial distinction between synchronous and asynchronous online focus groups. With the former, the focus group is conducted in real time, so that contributions are made more or less immediately after previous contributions (whether from the moderator or other participants), among a group of participants all of whom are simultaneously online. Contributions can be responded to as soon as they are typed (and, with some forms of software, contributions can be seen as they are being typed). As Mann and Stewart (2000) observe, because several participants can type in a response to a contribution at the same time, the conventions of normal turn-taking in conversations are largely sidelined. If participants are in considerably different time zones, organizing such groups can be rather more difficult. With asynchronous groups, focus group exchanges are not in real time. Email is one form of asynchronous communication that is sometimes used (see Research in focus 26.7 for an example). For example, the moderator might ask a question and then send the email containing it to focus group participants. The latter will be able to reply to the moderator and to other group members at some time in the future. Such groups get around the time zone problem and are probably easier than synchronous groups for participants who are not skilled at using the keyboard. Conferencing software is used for synchronous groups and is often used for asynchronous groups as well. This may mean that focus group participants will require access to the software, which can be undesirable if the software needs to be loaded onto their computers. Participants may not feel confident about loading the software, and there may be compatibility problems with particular machines and operating systems.

Research in focus 26.7
An asynchronous focus group study
Adrianssens and Cadman (1999) report their experiences of conducting a market research exercise to explore the launch of an online share-trading platform in the UK. Participants were in two groups: one group of active shareholders (20 participants) and a second group of passive shareholders (10). They were identified through the MORI Financial Services database as 'upmarket shareholders who were also Internet users' (Adrianssens and Cadman 1999: 418-19). The participants who were identified were very geographically spread, so online focus groups were ideal. Questions were emailed to participants in five phases with a deadline for returning replies, which were then copied anonymously to the rest of the participants. The questions were sent in the body of the email, rather than as attachments, to avoid problems of software incompatibility. After each phase, a summary document was produced and circulated to participants for comment, thus injecting a form of respondent validation into the project. The researchers found it difficult to ensure that participants kept to the deadlines, which in fact were rather tight, although it was felt that having a schedule of deadlines that was kept to as far as possible was helpful in preventing dropouts. The researchers felt that the group of active shareholders was too large to manage and suggest groups of no more than ten participants.

Selecting participants for online focus groups is potentially difficult, not least because they must normally have access to the necessary hardware and software. One possibility is to use questionnaires as a springboard for identifying possible participants.
For their study of virtual communities concerned with consumption issues, Evans et al. (2001) used a combination of questionnaires (both paper and online) and focus groups made up of respondents to the questionnaires who had indicated a willingness to take further part in the research. The British focus groups were of the face-to-face kind, but, in addition, international respondents to the questionnaire who were prepared to be further involved in the research participated in an online focus group. Other sources of participants for online focus groups might involve postings on appropriate special interest websites or on such outlets as special interest bulletin boards or chat rooms.

The requisite number of participants is affected by the question of whether the online focus group is being conducted synchronously or asynchronously. Mann and Stewart (2000) advocate that, with the former type, the group should not be too large, because a large group can make it difficult for some people to participate, possibly because of limited keyboard skills; they recommend groups of between six and eight participants. Also, moderating the session can be more difficult with a large number. In asynchronous mode, such problems do not exist, and very large groups can be accommodated—certainly much larger ones than could be envisaged in a face-to-face context, although Adrianssens and Cadman (1999) suggest that large groups can present research management problems.

Before starting the focus group, moderators are advised to send out a welcome message introducing the research and laying out some of the ground rules for the ongoing discussion. There is evidence that participants respond more positively if the researchers reveal something about themselves (Curasi 2001). This can be done in the opening message or by creating links to personal websites. One problem with the asynchronous focus group is that moderators cannot be available online twenty-four hours a day, although it is not inconceivable that moderators could operate a shift system to deal with this limitation. This lack of continuous availability means that emails or postings may be sent and responded to without the moderator being able to intervene or participate. This may not be a problem, but it could become one if offensive messages were being sent or if the discussion went off at a complete tangent from which it would be difficult to redeem the situation. Further, because focus group sessions in asynchronous mode may go on for a long time, perhaps several days or even weeks, there is a greater likelihood of participants dropping out of the study.

Online focus groups are unlikely to replace their face-to-face counterparts. Instead, they are likely to be employed in connection with certain kinds of research topic and/or sample. As regards the latter, dispersed or inaccessible people are especially relevant to online focus group research. As Sweet (2001) points out, relevant topics are likely to be ones involving sensitive issues and ones concerned with Internet use (see, for example, Research in focus 26.7 and 26.8).

Research in focus 26.8
O'Connor and Madge (2001, 2003; see also Madge and O'Connor 2002) employed conferencing software in connection with a virtual focus group study of the use of online information for parents. The research was specifically concerned with one UK parenting website (http://www.babyworld.co.uk (accessed 1 October 2007)).
Initially, the researchers set up a web survey (see later in the chapter for information about this technique) on the use of this website. When respondents sent in their questionnaire, they were thanked for their participation and asked to email the researchers if they were prepared to be interviewed in depth. Of the 155 respondents who returned questionnaires, 16 agreed to be interviewed. Interviewees were sent the ... software to install on their own machines. The researchers tried to ensure that each group was asked more or less the same questions, so the researchers worked in pairs, whereby one cut and pasted questions into the discussion (or otherwise typed questions) and the other acted as focus group moderator, thinking about the evolution of the discussion and about when and how to intervene. For each session, the researchers introduced themselves and asked participants to do likewise. In addition, they had placed descriptions and photographs of themselves on a website to which participants were directed. An important part of the process of building rapport was the fact that both of the researchers were mothers. One of the findings reported is that the greater anonymity afforded by the Internet gave participants greater confidence to ask embarrassing questions, a finding that has implications for online focus groups. This can be seen in the following extract:

Amy: I feel better askign BW [Babyworld] than my health visitor as they're not goign to see how bad I am at housekeeping!!!
Kerry: I feel the same. Like the HV [health visitor] is judging even though she says she isn't
Kerry: Although my HV has been a life line as I sufferfrom PND [postnatal depression]
Amy: Also, there are some things that are so little that you don't want to feel like you're wasting anyone's time. Askign the HV or GP might get in the way of something mreo important; whereas sending an email, the person can answer it when convenient
Amy: My HV is very good, but her voice does sound patronising. Imsure she doesn't mean it, but it does get to me...
Kerry: Being anon means that you don't get embarassed asking about a little point or something personal.
(O'Connor and Madge 2001: 10.4)

It is striking that this brief extract reveals a good flow without intervention by the researchers. It contains several misspellings and mistakes (e.g. 'Imsure'), but these are retained to preserve the reality of the interaction. The researchers did not have to transcribe the material because it was already in textual form. Also, the fact that participants appear to relish the anonymity of the Internet as a source of information has implications for online focus groups, because it may be that participants find it easier to ask naive questions or to make potentially embarrassing comments than in face-to-face focus groups.

Tips and skills 'Advantages and disadvantages of online focus groups and personal interviews compared to face-to-face interviews in qualitative research' summarizes the chief advantages and disadvantages of online relative to face-to-face focus groups. The discussion is combined with online personal interviews, which are the subject of the next section, since most of the elements in the balance sheet of advantages and disadvantages are the same.

Tips and skills
Advantages and disadvantages of online focus groups and personal interviews compared to face-to-face interviews in qualitative research
This box summarizes the main advantages and disadvantages of online focus groups and personal interviews compared to their face-to-face counterparts.
The two methods are combined because the tally of advantages and disadvantages applies more or less equally well to both of them.

Advantages
• Online interviews and focus groups are extremely cheap to conduct compared to comparable face-to-face equivalents. They are likely to take longer, however, especially when conducted asynchronously.
• Interviewees or focus group participants who would otherwise normally be inaccessible (e.g. because they are located in another country) or hard to involve in research (e.g. very senior executives, people with almost no time for participation) can more easily be involved.
• Interviewees and focus group participants are able to reread what they (and, in the case of focus groups, others) have previously written in their replies.
• People participating in the research may be better able to fit the interviews into their own time.
• People participating in the research do not have to make additional allowances for the time spent travelling to a focus group session.
• The interviews do not have to be audio-recorded, thus eliminating interviewee apprehension about speaking and being recorded.
• There is no need for transcription. This represents an enormous advantage because of the time and cost involved in getting recorded interview sessions transcribed.
• Because of the previous point, the interview transcripts can be entered more or less immediately into a CAQDAS program of the kind introduced in Chapter 23.
• The transcripts of the interviews are more likely to be accurate because the problems that may arise from mishearing or not hearing at all what is said do not arise. This is a particular advantage with focus group discussions, because it can be difficult to establish who is speaking and impossible to distinguish what is said when participants speak at the same time.
• Focus group participants can employ pseudonyms so that their identity can be more easily concealed from others in the group. This can make it easier for participants to discuss potentially embarrassing issues or to divulge potentially unpopular views. It has also been suggested that electronic interviews may make the discussion of sensitive issues easier than in face-to-face interviews.
• In focus groups, shy or quiet participants may find it easier to come to the fore.
• Equally, in focus groups overbearing participants are less likely to predominate, but in synchronous groups variations in keyboard skills may militate slightly against equal participation.
• Participants are less likely to be influenced by characteristics such as the age, ethnicity, or appearance (and possibly even gender if pseudonyms are used) of other participants in a focus group.
• Similarly, interviewees and focus group participants are much less likely to be affected by characteristics of interviewers or moderators respectively, so that interviewer bias is less likely.
• When interviewees and participants are online at home, they are essentially being provided with an 'anonymous, safe and non-threatening environment' (O'Connor and Madge 2001: 11.2) that may be especially helpful to vulnerable groups.
• Similarly, researchers are not confronted with the potentially discomfiting experience of having to invade other people's homes or workplaces, which can themselves sometimes be unsafe environments.

Disadvantages
• Only people with access to online facilities and/or who find them relatively straightforward are likely to be in a position to participate.
• It can be more difficult for the interviewer to establish rapport and to engage with interviewees. However, when the topic is of interest to participants, this may not be a great problem.
• It can be difficult in asynchronous interviews to retain over a longer term any rapport that has been built up.
• Probing is more difficult, though not impossible. Curasi (2001) reports some success in eliciting further information from respondents, but it is easier for interviewees to ignore or forget about the requests for further information or for expansion on answers given.
• Asynchronous interviews may take a very long time to complete, depending on cooperativeness.
• With asynchronous interviews, there may be a greater tendency for interviewees to discontinue their participation than is likely to be the case with face-to-face interviews.
• There is less spontaneity of response, since interviewees can reflect on their answers to a much greater extent than is possible in a face-to-face situation. However, this can be construed as an advantage in some respects, since interviewees are likely to give more considered replies.
• There may be a tendency for non-response to be higher in online personal interviews.
• The researcher cannot be certain that the people who are interviewed are who they say they are (though this issue may apply on occasion to face-to-face interviews as well).
• In synchronous focus groups, variations in keyboard skills may make equal levels of participation difficult.
• Online interviews and focus groups from home require considerable commitment from interviewees and participants if they have to install software onto their computers and remain online for extended periods of time, thereby incurring expense (though it is possible to offer remuneration for such costs) and blocking their telephone lines.
• The interviewer/moderator may not be aware that the interviewee/participant is distracted by something and in such circumstances will continue to ask questions as if he or she had the person's full attention.
• Online connections may be lost, so research participants need to know what to do in the case of such an eventuality. Relatedly, longer-term breakdowns can undermine confidence and any momentum in interaction that has been built up.
• Interviewers cannot capitalize upon body language that might suggest puzzlement or, in the case of focus groups, a thwarted desire to contribute to the discussion.

Sources: based on Clapper and Massey (1996); Adrianssens and Cadman (1999); Tse (1999); Mann and Stewart (2000); Curasi (2001); O'Connor and Madge (2001); Sweet (2001); Bampton and Cowton (2002); http://www.geog.le.ac.uk/orm/site/home.htm (accessed on 12 June 2007).

Qualitative research using online personal interviews

The issues involved in conducting online personal interviews for qualitative research are essentially the same as those to do with conducting online focus groups. In particular, the researcher must decide whether the interviews should take place in synchronous or asynchronous mode. The factors involved in deciding which to use are also largely the same, although issues to do with variable typing speed or computer-related knowledge among focus group participants will not apply.
Although online interviews run the risk, relative to face-to-face interviews, that the respondent is somewhat more likely to drop out of the exchange (especially in asynchronous mode, since the interviews can sometimes be very protracted), Mann and Stewart (2000: 138-9) suggest that in fact a relationship of mutual trust can be built up. This kind of relationship can make it easier for a longer-term commitment to the interview to be maintained, but also makes it easier for the researchers to go back to their interviewees for further information or reflections, something that is difficult to do with the face-to-face personal interview. The authors also suggest that it is important for interviewers to keep sending messages to respondents to reassure them that their written utterances are helpful and significant, especially since interviewing through the Internet is still an unfamiliar experience for most people.

A further issue for the online personal interviewer to consider is whether to send all the questions at once or to interview on the basis of a question followed by a reply. The problem with the former tactic is that respondents may read all the questions and reply only to those that interest them or to which they feel they can make a genuine contribution, so asking one question at a time is likely to be more reliable. Bampton and Cowton (2002) report their experiences of conducting email interviews by sending questions in small batches. They argue that this approach took pressure off interviewees to reply quickly, gave them the opportunity to provide considered replies (although the authors recognize that there may be a loss of spontaneity), and gave the interviewers greater opportunity to respond to interviewees' answers.

There is evidence that prospective interviewees are more likely to agree to participate if their agreement is solicited prior to sending them questions and if the researcher uses some form of self-disclosure, such as directing the person being contacted to the researcher's website, which contains personal information, particularly information that might be relevant to the research issue (Curasi 2001; O'Connor and Madge 2001, 2003). The argument for obtaining prior agreement from interviewees before sending them questions is that unsolicited emails, often referred to as 'spamming', are regarded as a nuisance among online users and can result in an immediate refusal to take the message seriously.

Curasi (2001) conducted a comparison in which twenty-four online interviews carried out through email correspondence (and therefore asynchronous) were contrasted with twenty-four parallel face-to-face interviews. The interviews were concerned with shopping on the Internet. She found that:
• face-to-face interviewers were better able than online interviewers to maintain rapport with respondents;
• greater commitment and motivation are required for completing an online interview, but, because of this, replies were often more detailed and considered than with face-to-face interviews;
• online interviewers are less able to have an impact on whether the interview is successful or not, because they are more remote;
• online interviewees' answers tend to be more considered and grammatically correct, because they have more time to ponder their answers and because they can tidy them up before sending them.
Whether this is a positive feature is debatable: there is the obvious advantage of a 'clean' transcript, but there may be some loss of spontaneity;
• follow-up probes could be carried out in online interviews, as well as in face-to-face ones.

On the other hand, Curasi also found that the worst interviews in terms of the amount of detail forthcoming were from online interviews. It may be that this and the other differences are to do with the fact that, whereas a qualitative face-to-face interview is spoken, the parallel online interview is typed. The full significance of this difference in the respondent's mode of answering has not been fully appreciated. Some researchers have combined different types of interview in a single investigation. In addition to examining email and other forms of Internet-based communications for their study of online social support in the UK, Nettleton et al. (2002) interviewed fifty-one people involved in these communications. The interviewees were all approached by email after they had submitted relevant postings in the various lists that were being studied. In addition, some interviewees had responded to postings submitted by the research group. Some of these interviews were conducted face to face, some on the telephone, and still others online. One of the online interviews was with a woman in her 60s with ME. She conveys the importance of online social support for someone with this condition:

The mailing list MECHAT ... in particular has been a real lifeline. I check mail several times a day. I have been able to discuss things with people who understand ... important as ME is an especially misunderstood illness ... make new friends and share experiences and laughter ... It is a real comfort if any trauma or upset occurs—death or illness of a loved one, relapse, relationship problems, or even just thoughtless remarks from folks who do not understand ME, which we would otherwise have to bear alone. (Nettleton et al. 2002: 183)

Thus far, most of the discussion of online personal interviewing has assumed that the exchange is conducted entirely in a textual context (particularly by email). However, the webcam may offer further possibilities for synchronous online personal interviews should the technology become widespread. Such a development would make the online interview similar to a telephone interview, in that it is mediated by a technology, but also similar to an in-person interview, since those involved in the exchange would be able to see each other. However, one of the main advantages of the online interview would be lost, in that the respondent's answers would need to be transcribed, as in traditional qualitative interviewing. At the time of writing, the possibilities associated with conducting online focus groups have probably attracted greater attention than online personal interviews, perhaps because the potential advantages are greater with the former. For example, a great deal of time and administration can be saved by conducting focus groups online, whereas there is less comparable saving with online personal interviews unless a great deal of travel is involved.

Student experience

Isabella Robbins wanted to interview mothers whose children had been vaccinated and those whose children had not been vaccinated. However, she found it difficult to find mothers in the latter group.
This passage shows how she enlisted the Internet to help her to find supplementary data on mothers' decision-making in relation to the decision not to vaccinate their children, but it is also interesting and significant for her reliance on theoretical saturation (see the Key concept on theoretical saturation in Chapter 17).

Recruitment of mothers was fairly straightforward in terms of the mothers who said they had vaccinated. However, recruiting mothers who had not vaccinated proved to be problematic. Essentially, because childhood vaccination is a moral issue, these mothers were careful about who they talked to about their resistance. They were a hard to get at community. With time running out I decided to use Internet message boards—from women/mothers forums—in order to supplement my data. This data helped to confirm that I had reached saturation. No new themes came out from it, but it provided some additional rich data.

To read more about Isabella's research experiences, go to the Online Resource Centre that accompanies this book at http://www.oxfordtextbooks.co.uk/orc/brymansrm3e/.

Online social surveys

There has been a considerable growth in the number of surveys being administered online. It is questionable whether the research instruments should be regarded as structured interviews (see Chapter 8) or as self-completion questionnaires (see Chapter 9)—in a sense they are both. So far as online social surveys are concerned, there is a crucial distinction between surveys administered by email (email surveys) and surveys administered via the web (web surveys). In the case of the former, the questionnaire is sent via email to a respondent, whereas, with a web survey, the respondent is directed to a website in order to answer a questionnaire. Sheehan and Hoy (1999) suggest that there has been a tendency for email surveys to be employed in relation to 'smaller, more homogeneous on-line user groups', whereas web surveys have been used to study 'large groups of online users'.

Email surveys

With email surveys it is important to distinguish between embedded and attached questionnaires sent by email. In the case of the embedded questionnaire, the questions are to be found in the body of the email. There may be an introduction to the questionnaire followed by some marking that partitions the introduction from the questionnaire itself. Respondents have to indicate their replies using simple notations, such as an 'x', or they may be asked to delete alternatives that do not apply. If questions are open, they are asked to type in their answers. They then simply need to select the reply button to return their completed questionnaires to the researcher. With an attached questionnaire, the questionnaire arrives as an attachment to an email that introduces it. As with the embedded questionnaire, respondents must select and/or type their answers. To return the questionnaire, it must be attached to a reply email, although respondents may also be given the opportunity to fax or send the completed questionnaire by postal mail to the researcher (Sheehan and Hoy 1999). The chief advantage of the embedded questionnaire is that it is easier for the respondent to return to the researcher and requires less computer expertise. Knowing how to read and then return an attachment requires a certain facility with handling online communication that is by no means universal.
Also, the recipients' operating systems or software may present problems with reading attachments, while many respondents may refuse to open the attachment because of concerns about a virus. On the other hand, the limited formatting that is possible with most email software (features such as bold, variations in font size, and indenting are often unavailable) makes the appearance of embedded questionnaires rather dull and featureless, although this limitation is rapidly changing. Furthermore, it is slightly easier for the respondent to type material into an attachment that uses well-known software like Microsoft Word, since, if the questionnaire is embedded in an email, the alignment of questions and answers may be lost. Dommeyer and Moriarty (2000) compared the two forms of email survey in connection with an attitude study. The attached questionnaire was given a much wider range of embellishments in terms of appearance than was possible with the embedded one. Before conducting the survey, undergraduate students were asked about the relative appearance of the two formats. The attached questionnaire was deemed to be better looking, easier to complete, clearer in appearance, and better organized. The two formats were then administered to two random samples of students, all of whom were active email users. The researchers found a much higher response rate with the embedded than with the attached questionnaire (37 per cent versus 8 per cent), but there was little difference in terms of speed of response or whether questions were more likely to be omitted with one format rather than the other. Although Dommeyer and Moriarty (2000: 48) conclude that 'the attached e-mail survey presents too many obstacles to the potential respondent', it is important to appreciate that this study was conducted during what were still early days in the life of online surveys. It may be that, as prospective respondents become more adept at using online communication methods and as viruses become less of a threat (for example, as virus-checking software improves in terms of accessibility and cost), the concerns that led to the lower response rate for the attached questionnaire will be less pronounced. Also, the researchers do not appear to have established prior contact with the students before sending out the questionnaires; it may be that the reaction to such an approach, which is frowned upon in the online community, was more negative in the case of the attached questionnaire format.

Web surveys

Web surveys operate by inviting prospective respondents to visit a website at which the questionnaire can be found and completed online. The web survey has an important advantage over the email survey in that it can use a much wider variety of embellishments in terms of appearance. Plate 26.1 presents part of the questionnaire from the gym study from Chapter 14 in a web survey format, answered in the same way as in Tips and skills 'A completed and processed questionnaire'. Common features include 'radio buttons' (whereby the respondent makes a choice between closed question answers by clicking on a circle in which a dot appears—see question 1 in Plate 26.1) and pull-down menus of possible answers (see Plate 26.2). There are also greater possibilities in terms of the use of colour. With open questions, the respondent is invited to type directly into a boxed area (e.g. question 2 in Plate 26.1).

[Plate 26.1 Gym survey in web survey format. Plate 26.2 A pull-down menu.]

It is worth noting at this juncture that there is evidence that there is a problem with pull-down menus as a mechanism for presenting respondents with potential answers: when the list of possible answers is quite long, there is a tendency for respondents not to read the entire list and to choose one of the earlier answers (Couper et al. 2004). This implies that, in most instances, radio buttons will be preferable to pull-down menus when there are many possible answers. However, there is a trade-off here, in that pull-down menus have the advantage of taking up less room on the screen than radio buttons. The advantages of the web survey are not just to do with appearance. The questionnaire can be designed so that, when there is a filter question (e.g. 'if yes, go to question 12; if no, go to question 14'), it skips automatically to the next appropriate question. The questionnaire can also be programmed so that only one question ever appears on the screen or so that the respondent can scroll down and look at all questions in advance. Finally, respondents' answers can be automatically programmed to download into a database, thus eliminating the daunting coding of a large number of questionnaires. One of the chief problems with the web survey is that, in order to produce the attractive text and all the other features, the researcher will either have to be highly sophisticated in the use of HTML or will need to use one of a growing number of software packages that are designed to produce questionnaires with all the features that have been described. Plates 26.1 and 26.2 were created using Survey Galaxy (http://www.surveygalaxy.com (accessed on 16 July 2007)), which my colleagues and I employed in a study of social policy researchers (Bryman et al. in press; Sempik et al. 2007). With commercial websites such as these, you can design your questionnaire online and then create a web address to which respondents can be directed in order to complete it. There is a fee for using this software. The fee will be affected by the number of respondents who complete the questionnaire and the length of time that the questionnaire is active. Each respondent's replies are logged, and the entire data set can be retrieved once you decide the data-collection phase is complete. This means that there is no coding of replies and entering of data into your software. Not only does this save time; it also reduces the likelihood of errors in the processing of data. Potential respondents need to be directed to the website containing the questionnaire. Research in focus 26.10 provides an example of the kind of approach that might be used.
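To make the logic of automatic filtering and data capture more concrete, the sketch below shows, in Python, roughly the kind of thing a web survey package does behind the scenes when it routes respondents past inapplicable questions and logs each reply straight to a data file. It is a minimal illustration only: the question identifiers, wording, and routing rules are invented for the example, and this is not a description of how Survey Galaxy or any other particular package is implemented.

import csv

# Hypothetical questions and routing rules, invented for illustration.
# 'route' maps an answer to the id of the next question to display.
QUESTIONS = {
    "q11": {"text": "Do you ever use the gym's weights machines?",
            "route": {"yes": "q12", "no": "q14"}},
    "q12": {"text": "Roughly how many times a week do you use them?", "route": {}},
    "q14": {"text": "How frequently do you usually go to the gym?", "route": {}},
}

def next_question(current_id, answer):
    """Apply any filter routing: return the id of the next question to display."""
    return QUESTIONS[current_id]["route"].get(answer.lower())

def save_reply(respondent_id, question_id, answer, path="replies.csv"):
    """Append one reply to the data file, so no separate coding or data entry is needed."""
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow([respondent_id, question_id, answer])

# Example: a respondent who answers 'no' to q11 is routed straight to q14.
save_reply("r001", "q11", "no")
print(next_question("q11", "no"))  # -> q14

The point of the sketch is simply that the filtering and the accumulation of a machine-readable data set are handled automatically, which is what removes the coding and data-entry burden described above.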
Where there are possible problems to do with restricting who may answer the questionnaire, it may be necessary to set up a password system to filter out people for whom the questionnaire is not appropriate.

Mixing modes of survey administration

The example in Research in focus 26.10 relates to a case in which a web survey is combined with a conventional self-administered questionnaire. When this occurs, there are two different modes of administration of the research instrument in operation. Mixed modes of administering a survey raise the question of whether the mode of administration matters; in other words, do you get different results when you administer a questionnaire online from when you administer it offline (for example, by handing a questionnaire to respondents or mailing it to them)? Obviously, it would not be desirable to aggregate data from two different modes of administration if part of the variation in respondents' replies could be attributed to the way they received and completed the questionnaire. Equally, researchers using solely a web-based questionnaire need to know how far their findings differ from those produced by conventional modes of administration. Experiments with different modes of administration are quite reassuring on this point. A study of self-reported illicit drug use among a large sample of US university students found that there were similar findings when the results from web- and paper-based questionnaire surveys were compared (McCabe 2004). The sample had been randomly assigned to either of the two modes of administration. Denscombe (2006) compared paper and web-based modes of administration of nearly identical questionnaires administered to young people at an East Midlands school. The questionnaire was concerned with perceptions of social issues. One batch of questions, which is explored in Denscombe's article, dealt with views about smoking. The findings confirmed McCabe's study in suggesting that there is little evidence that the mode of administration makes a significant difference to the findings. One exception was a lower incidence of self-reported smoking among those using the web-based questionnaire than among those using the paper one. However, given the large number of items compared for a mode effect in Denscombe's study, it was likely that a small number would be found to exhibit a mode effect, so it would be unwise to read too much into this particular finding. Given that the differences between modes of administration in surveys that combine a web-based mode with a conventional mode (such as a paper-based self-completion questionnaire) do not appear to be great, there is often a good case to be made for offering respondents an online option. A covering letter might draw prospective respondents' attention to a web-based option along with the necessary instructions for accessing it, so that those who prefer to work online are not put off responding to the questionnaire.

Sampling issues

Anyone who has read Chapter 7 must be wondering how the sampling principles described there might apply to online surveys. A major issue and limitation is that not everyone in any nation is online or has the technical ability to handle questionnaires online in either email or web formats.
Certain other features of online communications make the issue more problematic, such as:

• many people have more than one email address;
• many people use more than one Internet service provider (ISP);
• a household may have one computer but several users;
• Internet users are a biased sample of the population, in that they tend to be better educated, wealthier, younger, and not representative in ethnic terms (Couper 2000), although, as Internet access diffuses more and more, it is possible that some of these biases will become less prominent or even disappear;
• few sampling frames exist of the general online population, and most of these are likely to be expensive to acquire, since they are controlled by ISPs or may be confidential.

Such issues make the possibilities of conducting online surveys using probability sampling principles difficult to envisage. This is not to say that online surveys should not be considered. For example, in many organizations, most if not all non-manual workers are likely to be online and to be familiar with the details of using email and the Internet. Thus surveys of samples of such online populations can be conducted using essentially the same probability sampling procedures as conventional surveys. Similarly, surveys of members of commercially relevant online groups can be conducted using these principles. Smith (1997) conducted a survey of web presence providers (people or organizations that are involved in creating and maintaining web content). She acquired her sample from a directory of providers, which acted as her sampling frame. A further example of the use of a directory to generate a probability sample can be found in Research in focus 26.11. Research in focus 26.9 provides an interesting example of the use of a well-known website—Friends Reunited—as a means of finding people in the context of an unplanned longitudinal survey. As Couper (2000: 485) notes of surveys of populations using probability sampling procedures: 'Intra-organizational surveys and those directed at users of the Internet were among the first to adopt this new survey technology. These restricted populations typically have no coverage problems or very high rates of coverage. Student surveys are a particular example of this approach that are growing in popularity.'

Research in focus 26.9

One of the best-known websites in the UK in recent years has been Friends Reunited (http://www.friendsreunited.co.uk (accessed on 16 July 2007)). It is best known for its use as a platform for helping old school friends to meet each other once again. Power et al. (2005) report that in the mid-1990s they had received research funding to study the progress of young people who had been participants in an earlier study, when they were aged 12 or 13 years. In 2004 they received further funding to examine the same group when they would be in their early 30s. One of the main problems with following up research participants for a study such as this is finding them. Friends Reunited proved invaluable. Because Friends Reunited organizes its participants by school, it was relatively easy for the researchers to examine the Friends Reunited postings for each of the eighteen schools that had been involved in the project during the first wave of data collection. Also, because Friends Reunited organizes postings by the year the student left a school, the researchers could home in on the postings relating to particular years relevant to their sample. Further, because Friends Reunited participants tend to use their original names, tracking people who had changed their names (most obviously, because of marriage) was made even easier. As a result, the researchers were able to find over 80 per cent of the original sample and over 90 per cent of those involved in the second wave of data collection. The process of contacting people through Friends Reunited was not unproblematic, because the website does not allow direct communication with those who post messages. As the authors put it: 'This means that while you can send messages and inform people that they are part of a research cohort and you would like them to contact you, they then have to take the initiative and effort to locate you' (Power et al. 2005: 7). Further, as a result of the process of contacting participants through Friends Reunited and being contacted by them by email (once they had contacted the researchers), Power et al. point out that they became more reliant on the use of online questionnaires than they had originally anticipated.

The chief problem with sampling strategies of the kind employed by Evans et al. (2001; see Research in focus 26.10) is that we have no idea about the representativeness of those who answered the web survey questionnaire (though the same applies to the respondents to their paper-based questionnaire too). On the other hand, given that we have so little knowledge and understanding of online behaviour and attitudes relating to online issues, it could reasonably be argued that some information about these areas is a lot better than none at all, provided the limitations of the findings in terms of their generalizability are appreciated. A further issue in relation to sampling and sampling-related error is the matter of non-response (see Key concept 7.1). There is growing evidence that online surveys typically generate lower response rates than postal questionnaire surveys (Tse 1998; Sheehan 2001). In the early years, response rates for email surveys were quite encouraging (Sheehan and Hoy 1999), but more recently they have been declining and are at lower levels than those for most postal questionnaires (Sheehan 2001), though there are clear exceptions to this tendency (for example, see Research in focus 26.11). Two factors may account for this decline: the novelty of email surveys in the early years and a growing antipathy towards unsolicited emails among online communities. However, response rates can be boosted by following two simple strategies.

1. Contact prospective respondents before sending them a questionnaire. This is regarded as basic 'netiquette'.
2. As with postal questionnaire surveys, follow up non-respondents at least once.

The case for the first of these two strategies in boosting response rates is not entirely clear (Sheehan 2001), but it seems to be generally advisable. Crawford et al. (2001) report the results of a survey of students at the University of Michigan that experimented with a number of possible influences on the response rate.
Students in the sample were initially sent an email inviting them to visit the website, which allowed access, via a password, to the questionnaire. Some of those emailed were led to expect that the questionnaire would take 8-10 minutes to complete (in fact, it would take considerably longer); others were led to expect that it would take 20 minutes. As might be expected, those led to believe it would take longer were less likely to accept the invitation, resulting in a lower response rate for this group. However, Crawford et al. also found that those respondents who were led to believe that the questionnaire would take only 8-10 minutes were more likely to give up on the questionnaire part of the way through, resulting in unusable, partially completed questionnaires in most cases. Interestingly, they also found that respondents were most likely to abandon their questionnaires part of the way through when in the middle of completing a series of open questions. The implications of this finding echo the advice in Chapter 9 that it is probably best to ask as few open questions as possible in self-completion questionnaires. Further evidence regarding this survey suggests that having a progress indicator with a web survey can reduce the number of people who abandon the questionnaire part of the way through completion (Couper et al. 2001). A progress indicator is usually a diagrammatic representation of how far the respondent has progressed through the questionnaire at any particular point. Couper et al. also found that it took less time for respondents to complete related items (for example, a series of Likert items) when they appeared on a screen together than when they appeared singly. Respondents also seemed less inclined to omit related questions when they appeared together on a screen rather than singly. However, it is important not to be too sanguine about some of these findings. One difficulty with them is that the samples derive from populations whose members are not as different from one another as would almost certainly be found in samples deriving from general populations. Another is that it must not be forgotten that, as previously noted, access to the Internet is still not universal, and there is evidence that those with web access differ from those without, both in terms of personal characteristics and attitudinally. Fricker et al. (2005) compared the administration of a questionnaire by web survey and by telephone interview among a general US sample. They found that telephone interviewees were much more likely to complete the questionnaire (though it is possible, if not probable, that the same effect would have been found if they had compared the web mode with a self-completion mode). By contrast, telephone interviewees were more likely to omit questions by saying they had 'no opinion' than was the case in the web administration, probably because respondents were prompted to answer if they failed to answer a question. One difficulty noted by Fricker et al. is that web respondents were more likely than telephone interviewees to give undifferentiated answers to series of questions like Likert items. In other words, they were more prone to response sets. Some of the questions were open questions inviting respondents to display their knowledge on certain issues. The researchers found that web respondents took longer to answer these questions and were more likely to provide valid answers than the telephone interviewees.
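Where data from two modes are to be combined, a simple way of checking for a mode effect of the kind examined in these studies is to cross-tabulate the answers to a closed question by mode of administration and compute a chi-square statistic. The sketch below is a minimal illustration in Python; the counts are invented purely for the example and do not come from any of the studies cited here.

def chi_square_2x2(a, b, c, d):
    """Chi-square statistic for a 2x2 table [[a, b], [c, d]] (no continuity correction)."""
    n = a + b + c + d
    observed = [[a, b], [c, d]]
    expected = [
        [(a + b) * (a + c) / n, (a + b) * (b + d) / n],
        [(c + d) * (a + c) / n, (c + d) * (b + d) / n],
    ]
    return sum(
        (observed[i][j] - expected[i][j]) ** 2 / expected[i][j]
        for i in range(2) for j in range(2)
    )

# Invented counts: 'yes'/'no' answers to one question, broken down by
# web administration (120 yes, 180 no) and telephone administration (150 yes, 150 no).
stat = chi_square_2x2(120, 180, 150, 150)
print(f"chi-square = {stat:.2f}")  # compare with 3.84, the 5% critical value on 1 degree of freedom

A value well above the critical value would suggest that answers to that question differ by mode, and hence that aggregating the two sets of replies without further thought would be unwise.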
Taken together, these findings suggest that it is difficult, and probably impossible given their relative newness, to provide a definitive verdict on web surveys compared to traditional forms of survey administration. For one thing, it is difficult to separate out the particular formats that researchers use when experimenting with modes of administration from the modes themselves. It may be that, if they had displayed web questions in a different manner, their findings would have been different—with obvious implications for how the web survey fares when compared with any of the traditional forms. Further, web surveys seem to fare better relative to traditional survey forms in some respects than in others. On the day I was writing this (24 January 2007), I was sent my quotation of the day by http://www.quotationspage.com/qotd.html, which was written by the noted social theorist Thorstein Veblen. Apparently, he once wrote: 'The outcome of any serious research can only be to make two questions grow where only one grew before.' That is certainly true where developmental work on the web survey is concerned! Tips and skills 'Advantages and disadvantages of online surveys compared to postal questionnaire surveys' summarizes the main factors to take into account when comparing online surveys with postal questionnaire surveys, and Table 26.1 compares the different methods of administering a survey.

Overview

Online surveys are clearly in their infancy, but they have considerable potential. There is evidence that having a web survey or even an email option can boost response rates to postal questionnaires (Yun and Trumbo 2000). Several problems have been identified with web and email surveys, but it is too early to dismiss them, because methodologists are only beginning to get to grips with this approach to survey research and may gradually develop ways of overcoming the limitations that are being identified. Moreover, as I have pointed out, for certain kinds of populations and as more and more people and organizations go online, some of the sampling-related problems will diminish. As Yun and Trumbo (2000) observe: 'the electronic-only survey is advisable when resources are limited and the target population suits an electronic survey.' It is also worth making the obvious point that, when conducting an online survey, you should bear in mind the principles about sampling, interview design, and question construction that were posed in Chapters 7 through 10 in particular. While online surveys are distinctive in certain ways, they require the same rigorous considerations that go into the design of conventional surveys that are conducted by postal questionnaire or by personal or telephone interview.

[Table 26.1 Modes of survey administration compared. The table rates five modes (email survey, web survey, face-to-face interview, telephone interview, postal questionnaire) on the issues listed below, using one to three ticks: three ticks indicate that a mode does very well in relation to an issue, two that it is acceptable, and one that it does not fare well.
Resource issues: Is the cost of the mode of administration relatively low? Is the speed of the mode of administration relatively fast? Is the cost of handling a dispersed sample relatively low? Does the researcher require little technical expertise for devising a questionnaire?
Sampling-related issues: Does the mode of administration tend to produce a good response rate? Is the researcher able to control who responds (i.e. the person at whom it is targeted is the person who answers)? Is the mode of administration accessible to all sample members?
Questionnaire issues: Is the mode of administration suitable for long questionnaires, for complex questions, for open questions, and for filter questions? Does it allow control over the order in which questions are answered? Is it suitable for sensitive questions? Is it less likely to result in non-response to some questions? Does it allow the use of visual aids?
Answering context issues: Does the mode of administration give respondents the opportunity to consult others for information? Does it minimize the impact of interviewers' characteristics (gender, class, ethnicity)? Does it minimize the impact of the social desirability effect? Does it allow control over the intrusion of others in answering questions? Does it minimize the need for respondents to have certain skills to answer questions? Does it enable respondents to be probed?
Notes: This table has been influenced by the author's own experiences and by Dillman (1978) and Czaja and Blair (1996). CAPI is computer-assisted personal interviewing; CATI is computer-assisted telephone interviewing.]

Research in focus 26.10

As part of their research into virtual communities concerned with consumption issues, Evans et al. (2001) carried out a survey using two methods of administration. First, they distributed a paper-based questionnaire at various locations at the University of Bristol and the University of the West of England, Bristol, and at three cybercafes. Second, they used a web survey hosted by the Bristol Business School, which was linked via BBC Bristol Online and a cybercafe. The authors write: 'Invitations to respond to the on-line questionnaire were posted on several electronic lists within the two Bristol universities, and several international discussion lists' (Evans et al. 2001: 152). As a result of these two strategies, over 300 questionnaires were returned.

Tips and skills

There is a growing tendency for researchers who conduct postal questionnaire surveys to offer their respondents the opportunity to complete their questionnaires online (Couper 2000). This can be done by indicating in the covering letter that goes out with the postal questionnaire that they can have the questionnaire emailed to them or, if the questionnaire is accessible via the Internet, they can be directed to the web address.
The advantage of doing this is that some members of the sample may feel more comfortable completing the questionnaire online because of the long periods of time they spend online, and it removes the need to return the questionnaire by post. There is the question of whether the mode of administration (postal as against online) influences the kinds of response received, but some research suggests that the impact of mode of administration is not great.

Research in focus 26.11

Cobanoglu et al. (2001) report the results of a study in which three different modes of survey administration were used: post, fax, and online. The questionnaires were administered to 300 hospitality professors in the USA who had been randomly sampled from the online directory of the Council on Hotel, Restaurant, and Institutional Education. The sampling was carried out only from those who had an email address. The 300 professors were randomly assigned to one of the three modes of survey administration. The authors write:

For the web-based survey, an email message was sent to the professors along with a cover letter and the website address. The respondents were informed that they could request a paper copy of the survey should they have problems accessing the survey online. A unique website address was created for each respondent ... (Cobanoglu et al. 2001: 447)

Compared with the postal administration of the questionnaire, the online administration achieved a higher response rate (44 per cent as against 26 per cent), a faster response speed, and lower costs.

Tips and skills
Advantages and disadvantages of online surveys compared to postal questionnaire surveys

Here I summarize the main advantages and disadvantages of online surveys compared to postal questionnaire surveys. The tally of advantages and disadvantages in connection with online surveys relates to both email and web surveys. It should also be made clear that, by and large, online surveys and postal questionnaires suffer from one disadvantage relative to personal and telephone interviews—namely, that the researcher can never be certain that the person answering the questions is who the researcher believes him or her to be.

Advantages

• Low cost. Even though postal questionnaire surveys are cheap to administer, there is evidence that email surveys in particular are cheaper. This is in part due to the cost of postage, paper, envelopes, and the time taken to stuff covering letters and questionnaires into envelopes with postal questionnaire surveys. However, with web surveys there may be start-up costs associated with the software needed to produce the questionnaire. The low-cost advantage of Internet surveys increases as the sample size increases; when viewed in relation to the overall amount of time invested in conducting a survey, the cost advantage of an Internet survey over other modes of administration for small surveys is probably not as great as is often presumed.
• Faster response. Online surveys tend to be returned considerably more quickly than postal questionnaires.
• Attractive formats. With web surveys, there is the opportunity to use a wide variety of stylistic formats for presenting questionnaires and closed question answers. Also, automatic skipping when using filter questions and the possibility of immediate downloading of questionnaire replies into a database make this kind of survey quite attractive for researchers.
• Mixed administration.
Online surveys can be combined with postal questionnaire surveys so that respondents have the option of replying by post or online. Moreover, the mode of administration does not seem to make a significant difference to the kinds of replies generated.
• Unrestricted compass. There are no constraints in terms of geographical coverage. The same might be said of postal questionnaire surveys, but the problem of sending respondents stamped addressed envelopes that can be used in their own countries is overcome.
• Fewer unanswered questions. There is evidence that online questionnaires are completed with fewer unanswered questions than postal questionnaires, resulting in less missing data. However, there is also evidence of little difference between the two modes of administering surveys.
• Better response to open questions. To the extent that open questions are used, they tend to be more likely to be answered online and to result in more detailed replies.

Disadvantages

• Low response rate. Typically, response rates to online surveys are lower than those for comparable postal questionnaire surveys.
• Restricted to online populations. Only people who are available online can reasonably be expected to participate in an online survey. This restriction may gradually ease over time, but, since the online population differs in significant ways from the non-online population, it is likely to remain a difficulty. On the other hand, if online populations are the focus of interest, this disadvantage is unlikely to prove an obstacle.
• Requires motivation. Because online survey respondents must be online to answer the questionnaire, if they are having to pay for the connection and perhaps are tying up their telephone lines, they may need a higher level of motivation than postal questionnaire respondents. This suggests that the solicitation to participate must be especially persuasive.
• Confidentiality and anonymity issues. It is normal for survey researchers to indicate that respondents' replies will be confidential and that they will be anonymous. The same assurances can and should be given with respect to online surveys. However, with email surveys, since the recipient must return the questionnaire either embedded within the message or as an attachment, respondents may find it difficult to believe that their replies really are confidential and will be treated anonymously. In this respect, web surveys may have an advantage over email surveys.
• Multiple replies. With web surveys, there is a risk that some people may mischievously complete the questionnaire more than once. There is much less risk of this with email surveys.

Sources: Based on Schaeffer and Dillman (1998); Tse (1998); Kent and Lee (1999); Sheehan and Hoy (1999); Cobanoglu et al. (2001); Fricker and Schonlau (2002); Denscombe (2006); http://www.geog.le.ac.uk/orm/site/home.htm (accessed on 3 July 2007).

Ethical considerations in Internet research

Conducting research by using the Internet as a method of data collection raises specific ethical issues that are only now starting to be widely discussed and debated. Some of these are related to the vast array of venues or environments in which these new forms of communication and possibilities for research occur, including weblogs, listservs or discussion groups, email, chat rooms, instant messaging, and newsgroups.
The behaviour of Internet users is governed by 'netiquette', the conventions of politeness or definitions of acceptable behaviour that are recognized by online communities, as well as by service providers' acceptable use policies and by data protection legislation. Those contemplating using the Internet as a method of data collection should start by familiarizing themselves with these and by considering the general ethical principles discussed in Chapter 5. However, this section is concerned with the specific ethical issues raised by Internet research. One of the problems faced by social researchers wanting to use the Internet for data collection is that we are clearly in the middle of a huge growth in the amount of research being conducted in this way (Williams 2007). Not only is this trend creating the problem of over-researched populations who suffer from respondent fatigue; many of those involved in doing research with this new technology are not adhering to ethical principles. As a result, fatigue and suspicion are beginning to set in among prospective research participants, creating a less than ideal environment for future Internet researchers. The Association of Internet Researchers recommends that researchers start by considering the ethical expectations established by the venue (http://www.aoir.org/reports/ethics.pdf (accessed on 12 June 2007)). For instance, is there a posted site policy that notifies users that the site is public and specifies the limits to privacy? Or are there mechanisms that users can employ to indicate that their exchanges are private? The more the venue is acknowledged to be public, the less obligation there is on the researcher to protect the confidentiality and anonymity of individuals using the venue, or to seek their informed consent. However, the distinction between public and private space on the Internet is blurred and contested. Hewson et al. (2003) suggest that data that have been deliberately and voluntarily made available in the public Internet domain, such as newsgroups, can be used by researchers without the need for informed consent, provided the anonymity of individuals is protected. In the course of her research on websites for female sex workers and their male clients, Sanders (2005) acted as a 'lurker', observing the activity on message boards without revealing her identity as a researcher, so that she was essentially a covert non-participant observer. She did not reveal her identity because she did not want to have an impact on participants' behaviour and did not want to trigger hostility that might adversely affect her research. Certainly, some researchers (e.g. Hudson and Bruckman 2004) have found that, although certain Internet venues might be considered by some to be public spaces, entering online environments and recording the conversation for research purposes can provoke hostile responses from users (see Research in focus 26.12 and 26.13). In the case of the use of the Friends Reunited website (Research in focus 26.9), the researchers do not report irritation among respondents about being contacted through Friends Reunited, because many remembered the previous contact with the researchers, though they do acknowledge that 'overuse of the website for research purposes might ultimately irritate members and lead to it being prohibited' (Power et al. 2005: 8). Barnes (2004) identifies five types of Internet message, each presenting slightly different ethical concerns for anonymity, confidentiality, and informed consent.
1. Messages exchanged in online public discussion lists. A typical forum for these would be discussion groups, MUDs, or newsgroups. Although most group members see their messages as public, Barnes (2004) found that some consider them as private, despite having been sent statements upon joining the group indicating the public nature of the space. Barnes (2004) recommends as a general principle that the ideas of individuals who share their ideas on public lists should be attributed to their authors in the same way as you would attribute something they had written in a printed text under traditional copyright law. However, it is a good idea to check the welcoming messages of public discussion lists for guidance on how to cite email messages. Some discussion groups state that researchers must notify the group in advance of any research being undertaken. She advises that, when researching any Internet group, it is a good idea to contact members in advance and ask for permission to observe them.

2. Messages exchanged in private discussions between individuals and on private lists. Barnes (2004) suggests that in this situation the names of the lists and participants should never be revealed. Further, to protect individual identities, she recommends that messages are combined, all headers and signatures are removed, references to the exact type of forum being studied are not made, and behaviour is described in general terms in a composite personality type rather than by referring to specific messages that could be traced to particular individuals.

3. Personal messages sent to the researcher. In Barnes's (2004) research, messages were sent on to her by a contact who had already deleted the names and email addresses of the original senders, but in any case she suggests that headers and signatures are removed to protect the authors' anonymity.

4. Messages reposted and passed around the Internet. Such messages include those that people forward on to other people and to discussion lists because they think they are interesting. They can contain the name of the original author or be distributed as anonymous email. If they are distributed anonymously, Barnes (2004) believes it is worth trying to find the original author so that they can be properly credited in a research publication. She advises emailing the author and asking for permission to use the message.

5. Messages generated by computer programs. This refers to messages generated by natural language computer programs that form the basis for interaction with people.

There may also be specific ethical considerations associated with certain types of research, such as virtual ethnography (see Research in focus 26.13).

Research in focus 26.12

Hudson and Bruckman (2004) designed an experiment to understand how potential participants react to being invited to participate in an online study. This involved entering a number of online moderated chat rooms, informing the participants that the exchanges were being recorded, and then recording how participants responded. The researchers downloaded a list of available chat rooms on 'ICQ Chat' each evening at 9.50 p.m. Dividing the chat rooms by size from very small (2-4 participants) to large (30 or more participants), they then randomly selected sixteen chat rooms from each set and subdivided these into groups of four. Each group of four chat rooms was sent a different message as follows:

• No Message. The researchers entered the chat room using the nickname 'Chat Study' and said nothing.
• Recording Message.
The researchers entered the chat room as 'Chat Study' and announced that they were recording the chat room for a study.
• Opt Out Message. The researchers entered the chat room in the same way as above but posted a message giving the participants the option not to be recorded.
• Opt In Message. The researchers entered the chat room in the same way as before but gave participants the option to volunteer to be recorded.

Using a sample of 525 chat rooms studied over a two-week period, Hudson and Bruckman found that posting a message about the study resulted in significant hostility, greatly increasing the likelihood of the researchers being kicked out of the chat room by the moderator. Moreover, the likelihood of being kicked out of a chat room decreased as the number of participants in the chat room increased. The reasons given for being kicked out included referring to the study as 'spamming' (unwanted electronic communication often involving some form of commercial advertising), objection to being studied, general requests to leave, and insults. When given a chance to opt in, only 4 of the 766 potential respondents actually did so. Hence, even when the option of fully informed consent was given, chat room participants still objected to being studied. The researchers conclude that 'these results suggest that obtaining consent for studying online chatrooms is impracticable' (Hudson and Bruckman 2004: 135). This example highlights the potential ethical difficulties in intruding on a pre-existing Internet communication venue for research purposes, even if it is considered to be a public space.

A further ethical issue relates to the principle of protecting research participants from harm (see Chapter 5) and the related issues of individual anonymity and confidentiality. Complete protection of anonymity is suggested by Stewart and Williams (2005) to be almost impossible in Internet research, since, in computer-mediated communication, information about the origin of a computer-generated message, revealed for instance in the header, is very difficult to remove. It is also more difficult to guarantee confidentiality, because the data are often accessible to other participants. In a similar vein, DeLorme et al. (2001) suggest that the Internet raises particular ethical concerns for qualitative researchers that arise from the difficulty of knowing who has access to information. For example, a message posted on an Internet discussion group can be accessed by anyone who has a computer and an Internet connection. In addition, some Internet environments enable 'lurkers', people who follow what is going on without making themselves identifiable. This makes it difficult for researchers to protect the confidentiality of data that they collect. A further concern arises from the potential for individuals to present a 'fake' identity during online interaction. If a research participant does this, it has implications for the validity of the data, but there is also potential for the researcher to deceive participants in the expectation that this will encourage them to respond more openly, for example, by pretending to be a man when conducting a focus group with all male participants. This is thus a form of covert research that, as discussed in Chapter 5, raises particular ethical issues because of the lack of informed consent.

Research in focus 26.13

There have also been some attempts to highlight the ethical considerations associated with particular kinds of Internet research, such as virtual ethnography.
Smith (2004) was interested in organizational change and the role played by professionals in the NHS. While she was in the process of doing her research, she came across a listserv that was being used by British general practitioners (GPs) as a forum to discuss their feelings about the proposed reforms to the British health care system and their likely effects. She explains: 'essentially, I had stumbled on a "setting" in which GPs were "talking" among themselves about the significance of the proposed health care reforms for them as individuals, for the wider profession and generally about the future of general practice in Britain' (2004: 225). The geographically dispersed nature of GPs' work meant that the list provided a unique opportunity for them to interact with each other. Smith argues that one of the advantages of such virtual methods is that they provide the opportunity to conduct research with virtually no observer effects (see Research in focus 2.5). Her strategy was therefore covert because, she explains, 'I anticipated difficulties in informing participants about my research without intruding in the ongoing interaction to an unacceptable extent' (2004: 232), and she feared that this might also arouse hostility because of the way she had observed that 'spam' messages were received unfavourably. For fifteen months she 'participated' in the list by receiving and reading messages daily without explicitly stating or explaining her presence to the majority of the listserv's members. A further difficulty in seeking informed consent arose from the nature of the list as an unmoderated forum; there was no gatekeeper to whom she could address her request. Added to this, the membership of the list of around 500 members was in constant flux, so any single request for consent would have been impossible. Hence 'the only appropriate way to gain informed consent would be to repeatedly post requests to the entire list. Through my previous exposure to the list, however, I knew that such behavior was clearly out of line with accepted practice in this domain' (2004: 233). However, as Smith explains: 'I am aware that in making the decision not to expound my presence on the list, I may face considerable ethical critique. My research appears analogous with the notion of "covert" research so demonized in the usual discussions of research ethics' (2004: 225). One of the ways in which she justifies this is through discussion of the features of her study that distinguish it from other studies of virtual interaction. She notes how her study examined interaction between participants who were not engaged in the kinds of 'fantasy interaction' associated with sexual or social virtual interaction. Therefore, Smith argues, her participants were not taking the opportunity to 'engage in behavior with which they would not be comfortable engaging as part of their "real" lives' (2004: 228). A further ethical justification of her research arises from the extent to which participants saw the list as a public rather than a private space. Hence, the warning posted to each member on subscription and at monthly intervals states that 'members are advised to consider comments posted to LISTX to be in the public domain' (2004: 229; capitalization in original). In addition, list members receive guidelines on the copyright implications of email messages, which state that comments posted to public lists are comparable to sending letters to a newspaper editor.
Smith suggests that this provides justification for her 'electronic eavesdropping', since the ethical guidelines she was working to suggested that it was 'not necessary to explicitly seek permission for recording and analyzing publicly posted messages' because this is 'akin to conducting research in a marketplace, library or other public area, where observers are not necessarily expected to obtain informed consent from all present' (2004: 230). A final ethical issue arising from the study concerns the principle of anonymity. Initially, Smith assumed she should protect the identity of participants when reporting her research findings, but through her involvement in the list she became aware that 'participants might wish to be "credited" for their postings' (2004: 234) because of the reaction when journalists used list messages without crediting the authors. However, despite this she felt that, because she had not sought informed consent from all list members, it would be wrong to do this.

These concerns have led some researchers to suggest that there is a need for an ethics code for Internet research. DeLorme et al. (2001) surveyed qualitative researchers to find out whether they felt there was a need for an ethics code for qualitative researchers using the Internet and, if so, what kinds of issues it should cover. A majority of respondents thought that there should be an ethics code for qualitative Internet research. When asked what their reasons were for believing this, researchers expressed a rationale based on principles, driven by a professional view of what constitutes good research, and a practical rationale, based on the belief that dishonest practices will discourage Internet users from taking part in future online studies and undermine the reputation of legitimate researchers who use the Internet. DeLorme et al. (2001) suggest that ethics codes designed by professional associations, such as those discussed in Chapter 5, need to be revised to include an addendum that deals with these issues. However, the debates about the ethics of Internet research and the development of guidelines for researchers are ongoing, and, even though traditional ethical guidelines may need to be revised to reflect the issues raised by Internet research, researchers should continue to be guided by the ethical principles discussed in Chapter 5.

Student experience

As noted in a Student experience feature earlier in this chapter, Isabella Robbins used Internet message boards to gain additional data on mothers whose children had not been vaccinated. She was concerned about the ethics of using these media, and this is how she dealt with the issues.

In terms of the ethics of using data from the Internet, I would argue that the Internet is in the realm of the public sphere. I decided that I did not want to contact the women on the message board, because I considered this forum did provide these women with a useful forum in which to debate difficult issues. I considered it unethical to break into that forum. I don't consider that what I was doing was covert. The message board had very visual reminders that the message board is a public space, warning women not to use names, addresses, and phone numbers (although some did). I did contact the press office of the message board, and they referred me to their terms and conditions of using the message board. This acknowledged that it is a public space, and that people using it take responsibility for that.
They did not object to me using this data. I told them what I intended to do with it, and that the message board and data would be anonymised.

To read more about Isabella's research experiences, go to the Online Resource Centre that accompanies this book at http://www.oxfordtextbooks.co.uk/orc/brymansrm3e/.

Key points

• The growth in the use of the Internet offers significant opportunities for social researchers in allowing them access to a large and growing body of people.
• Many research methods covered elsewhere in this book can be adapted to online investigations.
• There is a distinction between research that uses websites as objects of analysis and research that uses the Internet to collect data from others.
• Online surveys may be of either of two major types: web surveys and email surveys.
• Most of the same considerations that go into designing research that is not online apply to e-research.
• Both quantitative and qualitative research can be adapted to e-research.
• There is growing concern about the ethical aspects of e-research.

Questions for review

The Internet as object of analysis
• In what ways might the analysis of websites pose particular difficulties that are less likely to be encountered in the analysis of non-electronic documents?

Using the Internet to collect data from individuals
• What are the chief ways of collecting data from individuals using the World Wide Web and online communications?
• What advantages do they have over traditional research methods for collecting such data?
• What disadvantages do they have in comparison with traditional research methods for collecting such data?
• What is the difference between web-based and communication-based research methods?

An ethnography of the Internet?
• How does ethnography need to be adapted in order to collect data on the use of the Internet?
• Does the study of the impact of the Internet necessarily mean that we end up as technological determinists?
• Are ethnographies of the Internet really ethnographic?

Qualitative research using online focus groups
• What is the significance of the distinction between synchronous and asynchronous focus groups?
• How different is the role of the moderator in online, as against face-to-face, focus groups?

Qualitative research using online personal interviews
• Can online personal interviews really be personal interviews? To what extent does the absence of direct contact mean that the online interview cannot be a true interview?

Online social surveys
• What is the significance of the distinction between email and web surveys?
• Are there any special circumstances in which embedded email questionnaires will be more likely to be effective than attached questionnaires?
• Do sampling problems render online social surveys too problematic to warrant serious consideration?
• Are response rates in online surveys worse or better than in traditional surveys?

Ethical considerations in Internet research
• What ethical issues are raised by using the Internet as a method of data collection?

Online Resource Centre
http://www.oxfordtextbooks.co.uk/orc/brymansrm3e/
Visit the Online Resource Centre that accompanies this book to enrich your understanding of e-research. Consult web links, test yourself using multiple choice questions, and gain further guidance and inspiration from the Student Researcher's Toolkit.