INTRODUCTION TO INFORMATION SCIENCE
INFORMATION SCIENCE RESEARCH: WHAT AND HOW?

… behaviour, informetrics, information organization and information retrieval. We give examples of early research in these topics, and the ways in which the topics and methods of information research have developed.

Styles of information research

All styles of research are based in a philosophical viewpoint of the kind discussed in Chapter 3, though this is not always clearly recognized by researchers. A 'broad brush' distinction can be made between research from a positivist viewpoint, where it is assumed that there is an objective reality which the researcher may identify and study, and that from a constructivist viewpoint, which assumes that reality has to be constructed or interpreted by the researcher. The former assumes, for example, that the value of a piece of information to a user, or the proper role of an information service, are objective facts which may be determined correctly, while the latter assumes that they are subjective, and created and modified by those involved. While information researchers need not be philosophers, it is important to recognize what assumptions are implicit in the methods used, and what assumptions or viewpoints they themselves hold which may affect their approach; see, for example, Burke (2007) for examples in the context of information management research.

A common distinction is between quantitative and qualitative research. These describe a difference in general style and ethos; in essence, whether the focus is on measuring, counting and testing formally stated hypotheses, or on an interpretation of the meaning of events, issues and opinions, in an attempt to gain understanding. In practice there is overlap. Few studies are solely one or the other.
Each has its own ways of analysing the data generated in the research process. This may be statistical in nature for quantitative studies, where the research questions may be phrased as hypotheses to be proved or disproved at a certain level of statistical significance. Many information research studies, however, can find answers to their research questions by simpler quantitative methods, using mainly descriptive statistics. It is, of course, essential to plan the research starting with the questions, working through the kind of results needed to answer them, and then to the data collection and analysis methods which will give those results. Collecting data first, and then wondering how to analyse it, is a recipe for failure, or at best wasted time.

It is sometimes believed that qualitative studies are somehow 'easier' than quantitative, but this is not so. Gorman and Clayton (2005) give a good overview of qualitative methods for information research, emphasizing that they are in no way less rigorous than quantitative methods.

General aspects of research methods

A very wide range of research methods and techniques is used in our discipline. We consider them later in this chapter in three rough categories: surveys; experimenting, evaluating and observing; and desk research. This reflects a crude categorization that, to find out about something, we can: ask someone who has some insight; observe what happens; or examine relevant documents. In practice, in any real research or evaluation, there is some overlap, and 'mixed method' studies are increasingly common. The term 'triangulation' is used for studies which mainly rely on one method, but then use a second to check the validity of the results; for example, questionnaire surveys supplemented by interviews. For more discussion and examples of triangulation in information research, see Huntington and Gunter (2006) and Fidel (2008).

Evaluation may be carried out on a micro level, evaluating, for example, the content of a small set of literature, or on a macro level, evaluating systems and services.
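The 'simpler quantitative methods' mentioned above often amount to no more than descriptive statistics. As a minimal sketch (the dataset and variable names are invented for illustration, using only Python's standard library):

```python
# Descriptive statistics for a small invented survey dataset: the number of
# database searches per day reported by 12 participants.
import statistics

searches_per_day = [0, 1, 1, 2, 2, 2, 3, 3, 4, 5, 7, 12]

summary = {
    "n": len(searches_per_day),
    "mean": statistics.mean(searches_per_day),
    "median": statistics.median(searches_per_day),
    "mode": statistics.mode(searches_per_day),
    "stdev": round(statistics.stdev(searches_per_day), 2),
    "range": (min(searches_per_day), max(searches_per_day)),
}
print(summary)
```

Note that reporting the median and range alongside the mean matters for small, skewed samples like this one: a single heavy user (12 searches a day) pulls the mean well above the median.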
Powell (2006) gives an overview of evaluation research in information topics. Research embedded in a practice context, where the aim of the research is the improvement of practice rather than an objective study of the situation, may be termed action research.

All research of whatever kind will involve an element of desk research, in the form of a literature review for context setting, and to establish if similar studies have already been done. For desk research proper, this will form an introduction; for example, a literature review of subject X will begin with a description and analysis of any previous reviews of that subject. 'Literature' here is defined very broadly, as it may include information in sources other than academic publications.

Most methods raise the question of sampling: how to know that an adequate number and range of cases have been included for the results to be valid, and how to know how generalizable to other situations the results will be. The sampling issue is avoided only when all relevant cases are considered; for example, in a study of some aspect of national libraries in the British Isles, it is possible to study all of them. The extreme example of this is the case study, where a single example is studied in great detail. We will discuss sampling in more detail later.

Advances in information and communication technologies, allowing greatly increased options for communication between researchers, and for the integration and analysis of large amounts of data, are leading to new forms of research, termed 'e-research' or 'e-science', as discussed in an earlier chapter. These affect information research directly, and information specialists have a role in promoting and facilitating them.

All research, even small-scale and relatively informal studies, requires careful planning; in particular, to ensure that all the resources needed, and access to people and systems to be studied, are in place, and that things are done in the right order. All research also requires attention to the management of the data collected, which must be planned before the collection stage, and to the presentation of results.
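For surveys where studying all relevant cases is impossible, a standard rule-of-thumb calculation gives the sample size needed to estimate a proportion to a chosen precision. This is a sketch of that textbook formula, not a method prescribed in this chapter; the population figure is an invented example:

```python
# Rule-of-thumb sample size for estimating a proportion in a survey.
import math

def sample_size(margin_of_error, z=1.96, p=0.5):
    """Sample size needed to estimate a proportion p to within
    +/- margin_of_error, at roughly 95% confidence (z = 1.96).
    p = 0.5 is the worst case, giving the largest n."""
    return math.ceil(z ** 2 * p * (1 - p) / margin_of_error ** 2)

def finite_population_correction(n, population):
    """Reduce the required n when the whole population is itself small."""
    return math.ceil(n / (1 + (n - 1) / population))

n = sample_size(0.05)                        # +/- 5 percentage points
print(n)                                     # 385
print(finite_population_correction(n, 500))  # 218 for a population of 500
```

The finite population correction is why the small, closely defined populations typical of library surveys need far fewer participants than opinion polling of the general public.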
Research and the practitioner

It has always been considered desirable that information practitioners should carry out research and use research results, and, conversely, that researchers should be aware of practice issues in planning their studies. However, there has long been a prejudice that practitioners take little interest in research results, still less in carrying out research themselves; the counter-prejudice has been that information researchers undertake 'useless' research, ignoring the needs of practice. Thirty years ago, Alan Blick, a well known information manager in the British pharmaceutical industry, was presenting this as a kind of contest or conflict between practitioners and researchers in information science (Blick, 1983). The debate rumbles on; see, for example, Haddow and Klobas (2004), Greifeneder and Seadle (2010), Powell, Baker and Mika (2002) and Hall (2010).

There is now a greater pressure for practitioners to be more involved in research, and in using its results, and for academics and other researchers to pay greater attention to dissemination of results in a way useful for practice: funders of research are increasingly requiring attention to be given to practical impact. Communication between the two groups is of obvious importance; some publishers are asking for statements of implications for practice to be made in the abstracts of information research articles. However, a study of the extent to which reports of research into information-seeking behaviour addressed the implications for practice found that while a majority did so, they did not do it well; most recommendations for practice were stated only vaguely (McKechnie et al., 2008). Wilson (2008) also comments on an increasing disconnection between the interests of researchers and practitioners in studies of users and their behaviour. There is still a long way to go, in this respect.

On the other side, it is of course necessary that practitioners are reminded of the value of using research in a routine way.
A good example of the kind of publication that would help here is IFLA's guidance on using research to promote literacy and reading (Farmer and Stricevic, 2011). Application of the ideas of knowledge translation may help in this respect (Garnett, 2011).

Practitioner research is an integral part of reflective practice, and usually of service evaluation and improvement. This has led some commentators to call for library and information service provision to be fully 'evidence-based', with practice based explicitly on research results; we are very far from that position. The idea of 'evidence-based practice' itself has been criticized as too limited and mechanical an approach, by those who argue for practitioners to have a fuller understanding of research and theory; see, for example, Booth and Brice (2004) for viewpoints.

Research methods for information science

As noted above, there are many different methods which can be used for information research, ranging from the experimental to the conceptual (Cronin, …). These may be categorized in various ways; each of the main research methods textbooks has its own. One analysis defines a set of research strategies: historical research; ethnography; surveys; evaluation; case study; action research; experiment; other strategy; and mixed strategies. It also defines 15 forms of data collection, which to an extent overlap with methods: questionnaire or interview; focus groups; journal entries; observation; inspection; content analysis; protocol analysis; bibliometric analysis; transaction log analysis; task analysis; historical source analysis; dataset construction; use of data collected earlier; other technique; and more than one technique. Its conclusion, from an analysis of the literature and comparison with similar studies in the past, is that all of this wide range of methods is important for information research, but that surveys and experiments are the predominant methods.

We will use the simpler three-way classification, justified above, to consider the main forms of information research, and the methods used for each: surveys; experimenting, evaluating and observing; and desk research.
We will do so only in outline, as full details of each are given in relevant chapters of the textbooks referred to earlier. We will give some examples of each from the literature; we emphasize that these are not intended as the 'best' examples, simply as typical of what is in the literature.

Research methods - surveys

This is, and has always been, the most common method for carrying out research in library and information science. It is the, crudely expressed, 'asking someone' style of research, and usually involves asking for the opinions and experiences of information users (actual and potential) and information providers. It is an approach very much in the social science research tradition.

This approach is very commonly used in mixed-methods research, combining a survey with some form of observation. A small number of interviews will often be incorporated into research mainly relying on desk research or some form of observation. Conversely, survey research will usually involve some element of desk research, if only for the initial literature survey.

Most library and information surveys are small-scale, rarely involving more than 100 participants, and are usually aimed at a closely defined population. They do not therefore usually adopt the rigorous methods of sampling and analysis used for purposes such as public opinion polling or large-scale social research. The main methods used for library and information surveys are questionnaires and interviews.

Questionnaires are usually regarded as being in the positivist style of research, since they assume that the researcher and the participants share a common perspective of the situation. A series of short, structured questions with prescribed answers (yes/no, Likert scale, multiple choice) is used, with limited opportunity for expressing extra information. This allows participants to give responses quickly and simply, and gives results which can be analysed quantitatively. However, the value of the results relies on the researcher having understood all the relevant factors, and expressed them in a way which the participants can understand.
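The analysis of such prescribed-answer items is usually a matter of simple tallying. A minimal sketch for a single Likert-scale item (the question wording and the response data are invented):

```python
# Tallying one invented Likert-scale questionnaire item:
# "The library's electronic resources meet my needs"
# (1 = strongly disagree ... 5 = strongly agree).
from collections import Counter

responses = [4, 5, 3, 4, 2, 5, 4, 3, 4, 1, 5, 4]

counts = Counter(responses)
n = len(responses)

# A typical way of reporting a Likert item: percentage agreeing (4 or 5).
percent_agree = 100 * sum(1 for r in responses if r >= 4) / n

for point in range(1, 6):
    print(f"{point}: {counts.get(point, 0)}")
print(f"agree: {percent_agree:.0f}%")
```

Reporting the full distribution, not just the 'percentage agreeing', avoids hiding the minority who strongly disagreed.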
Questionnaires are increasingly likely to be administered electronically, using software such as SurveyMonkey, rather than in printed form. This is convenient, and gives greater 'reach', but may mean that the population sampled, the response rate, and any bias in response, are unknown. If such a survey is publicized openly, the number who saw it and did not respond cannot usually be established, nor whether there is anything different about those who responded and those who did not.

Interviews are regarded as appropriate for getting a greater, and richer, amount of information from a smaller number of participants. There are several forms of interview. Structured interviews resemble questionnaires, in that a predefined set of questions is asked, often requiring a choice of answer from a list. At the opposite extreme, unstructured interviews have no predefined structure, and resemble a free-flowing conversation; they are most appropriately used in an exploratory way, in situations where the interviewer does not have a good understanding of the issues or context. An intermediate form, often regarded as the most appropriate for most library and information research, is the semi-structured interview, where a number of predefined questions are always asked, but subsequent questioning will depend on the responses given. Interviews are sometimes referred to as research conversations, if the intention is to exchange views with the interviewee, or even to affect their views, rather than simply to gain information from them.

There are a number of issues around the conduct of interviews, which must be decided for each study, and often will be determined by practicality rather than theoretical considerations. They include:

- whether the interview is carried out face to face, or by telephone, internet chat, e-mail or other means. In general, all other things being equal, those methods which allow tone of voice, facial expression, etc., to be noted will give more information than others.
- the location of face-to-face interviews; interviewees are usually more relaxed and forthcoming in their own space.
- whether interviews are recorded, videoed, or taken down verbatim, or whether the interviewer takes notes; unless it is essential to have the exact wording, note-taking is often sufficient, and allows immediate feedback to be given to the interviewee on what the interviewer understands them to have said.

Interviews are usually carried out one-to-one, but may involve an interviewer with a group of interviewees. This may be useful on grounds of practicality and time saving, and also in allowing interviewees to discuss issues, reach consensus and spark ideas off each other; but equally such interviews run the risk of being dominated by a small number of senior, or noisy, participants, so that other views are not heard. The terms group interview and focus group are used for this method; the latter more commonly when the motive is obtaining a 'group viewpoint', rather than just interviewing several people at once for convenience.

A tool specifically designed for reaching consensus within a group is a Delphi study. This brings a virtual group of participants together, typically by e-mail. They are asked individually to state their views on some topic. The researcher writes a summary of their responses, and sends it out to them for further comment; this process is repeated as many times as necessary. Usually two or three 'rounds' are enough to reach consensus, or to establish that there are two or more incompatible viewpoints.

An approach which may be used with any of the above methods is the critical incident technique. This asks participants to specify one particular occurrence of what is under investigation; for example, one occasion when use of a particular information source made a difference to their work. This approach avoids general and bland responses, and is particularly useful in assessing the impact of information systems and services.

When a survey is begun, the researchers may not be sure that their instrument (interview schedule, questionnaire, etc.) is quite right.
A pilot study may therefore be carried out, with a small number of participants, to check the validity of the survey instrument, and allow for changes to be made for the main study. In a large-scale study, particularly if it is analysed quantitatively, the results of the pilot study would not be included, though in smaller surveys they may be.

The following list gives a selection of information research articles using various forms of survey method.

Examples of surveys in information research

Please note that these are not recommended as 'best practice', simply as recent relevant examples.

Questionnaires

… and Blandford, A. (2006) Patient in…
Bennett, R. (2007) Sources and use of marketing information by marketing managers, Journal of Documentation, 63(5), 702-26.
Palsdottir, A. (2010) The connection between purposive information seeking and information encountering: a study of Icelanders' health and lifestyle information seeking, Journal of Documentation, 66(2), 224-44.

Interviews

Savolainen, R. (2010) Source preference criteria in the context of everyday projects: relevance judgements made by prospective home buyers, Journal of Documentation, 66(1), 70-92.
Marcella, R. et al. (2007) The information needs and information-seeking behaviour of the users of the European Parliamentary Documentation Centre: a customer knowledge survey, Journal of Documentation, 63(6), 920-34.
Mansourian, Y. and Ford, N. (2007) Web searchers' attributions of success and failure: an empirical study, Journal of Documentation, 63(5), 659-79.

Focus groups

White, M. D., Matteson, M. and Abels, E. G. (2008) Beyond dictionaries: understanding information behaviour of professional translators, Journal of Documentation, 64(4), 576-601.
Zuccala, A. (2010) Open access and civic scientific information literacy, Information Research, 15(1), paper 426, available at http://InformationR.net/ir/15-1/paper426.html.
Burhanna, K. J. et al.
(2009) No natives here: a focus group study of student perceptions of Web 2.0 and the academic library, Journal of Academic Librarianship, 35(6), 523-32.

Delphi studies

Zhang, Y. and Salaba, A. (2009) What is next for Functional Requirements for Bibliographic Records? A Delphi study, Library Quarterly, 79(2).
Saunders, L. (2009) The future of information literacy in academic libraries: a Delphi study, Portal: Libraries and the Academy, 9(1), 99-114.
Zins, C. (2007) Conceptions of information science, Journal of the American Society for Information Science and Technology, 58(3), 335-50.

Critical incidents

… (2009) Electronic journals …
Kraaijenbrink, J. (2007) Engineers and the web: an analysis of real life gaps in information usage, Information Processing and Management, 43(5), 1368-82.
Weightman, A., Urquhart, C., Spink, S. and Thomas, R. (2009) The value and impact of information provided through library services for patient care: developing guidance for best practice, Health Information and Libraries Journal, 26(1), 63-71.

Mixed methods

Nicholas, D., Rowlands, I. and Jamali, H. R. (2010) E-textbook use, information seeking behaviour and its impact: case study - business and management, Aslib Proceedings, 62(2), 263-80 [questionnaires, focus groups, web log analysis, analysis of sales and library circulation data].
Craven, J., Johnson, F. and Butters, G. (2010) The usability and functionality of an online catalogue, Aslib Proceedings, 62(1), 70-84 [interviews, focus group, observation with 'think aloud'].
Wilson, K. and Corrall, S.
(2008) Developing public library managers as leaders: evaluation of a national leadership development programme, Library Management, 29(6-7), 473-88 [interviews, focus groups, questionnaires, observation].

Research methods - experimenting, evaluating and observing

This category of research covers a variety of methods. At one end of the spectrum are those described as experimental, based in a positivist, scientific world-view. At the other are methods of observation and interpretation, based in the traditions of ethnography and similar disciplines. They often have a common set of purposes, however, in being used to understand information behaviour, to evaluate the use and performance of information systems and services, and to give a basis for the improvement of such systems and services, and the design of new ones.

The experimental style of research in library and information science derives from the methods of the experimental sciences. The topic to be studied is isolated, so far as possible, from the complexities of the 'real world', so that all the variable factors in the situation can be held constant, apart from the ones being studied. Its opposite is operational evaluation, analysing the totality of an information service, and including all the 'messiness' of the real-world context.

Experiment is an approach used particularly often in research into information retrieval and human-computer interaction, typically comparing algorithms and interfaces. Where users are involved, data on their success at using the system may be supplemented by data gathered from special equipment, such as eye-trackers, by conventional interview, or by 'talk/think aloud' or 'talk/think after' accounts of the experiment.

At the other extreme, observation is usually regarded as a technique for understanding a real-world situation, without affecting or controlling it (Baker, 2006); hence a phrase commonly used in the past, unobtrusive observation.
The use of the so-called mystery shopper or mystery visitor is a modern variant.

The current trend is for observation, often used in conjunction with other data-gathering methods, to be used in a way which draws from subjects and approaches such as anthropology, ethnography, phenomenology and phenomenography, which emphasize a detailed interpretation of the situation from the viewpoint of those observed, and understanding as much of the context of their situation as possible. For overviews, in addition to chapters in the research methods textbooks, see Goodman (2011) and Bruce (1999).

The more unstructured ways of gathering qualitative data, typically based around observation, are sometimes referred to as grounded theory, often incorrectly, since grounded theory involves precisely specified and rigorous forms of analysis; see relevant chapters in the research methods textbooks, and the article by … (2010), for details. Grounded theory can be a valid and valuable method, as shown by the information research examples given below; however, it should only be used with an understanding of its nature.

A new form of observation has become possible with the move to a largely digital information environment: the ability to analyse the logs of websites and search engines, to establish exactly what a very large number of users are doing, in a way not possible with printed information. However, such log analysis cannot give any explanation for why users are doing what they do, nor how satisfied they are; it is therefore often combined with other methods, typically interviews.

Research studies of the kinds noted above can be used for the very practical purpose of the evaluation of the performance of information systems and services. Here, research overlaps very much with practice, since the monitoring of performance should be a routine activity for information providers, using tools such as standard sets of performance indicators, user-satisfaction surveys and cost-benefit analyses. 'Research', in the context of the evaluation of operational
systems, tends to mean particular exercises or projects, such as an information audit, whose purpose is to enumerate and evaluate all the available information resources (Buchanan and Gibb, 2008), or an impact study, aiming to assess the 'real benefit' to the users. Impact studies were discussed in an earlier chapter, in the context of attempts to measure the value of information.

Similarly, they may lead to the equally practical issues of the design and implementation of new systems and services, or new features within existing ones. For the practitioner, these are typically regarded as 'research' when there is some novelty in the new system or feature, or when an established feature is introduced into a new context. Research into information needs and behaviour is also needed in order to develop or extend systems to go beyond incremental improvement of what is already available.

The next list gives a selection of information research articles based on various forms of experiment, evaluation and observation methods.

Examples of experiment, evaluation and observation in information research

Please note that these are not recommended as 'best practice', simply as recent relevant examples.

Experiment

Vilar, P. and Zumer, M. (2011) Information searching behaviour of young Slovenian researchers, Program, 45(3), 279-93.
Makri, S., Blandford, A. and Cox, A. L. (2010) This is what I'm doing and why: methodological reflections on a naturalistic think-aloud study of interactive information behaviour, Information Processing and Management, 47(3), 336-48.
Boryung, J. (2007) Does domain knowledge matter: mapping users' expertise to their information interactions, Journal of the American Society for Information Science and Technology, 58(13), 2007-20.

Observation

Reddy, M. C. and Spence, P. R. (2008) Collaborative information seeking: a field study of a multidisciplinary patient care team, Information Processing and Management, 44(1), 242-55.
Ulvik, S.
(2010) 'Why should the library collect immigrants' memories?': a study of a multicultural memory group at a public library in Oslo, New Library World, 111(3-4), 154-60.
Allard, S., Levine, K. J. and Tenopir, C. (2009) Design engineers and technical professionals at work: observing information usage in the workplace, Journal of the American Society for Information Science and Technology, 60(3), 443-54.

Log analyses

Hider, P. M. (2007) Constructing an index of search goal redefinition through transaction log analysis, Journal of Documentation, 63(2), 175-87.
Borrego, A. and Urbano, C. (2007) Analysis of the behaviour of the users of a …, Journal of Documentation, 63(2), 243-58.
Nicholas, D. et al. (2009) Student digital information-seeking behaviour in context, Journal of Documentation, 65(1), 106-32.

Grounded theory

Camargo, M. R. (2008) A grounded theory study of the relationship between e-mail and burnout, Information Research, 13(4), paper 383, available at http://InformationR.net/ir/13-4/paper383.html.
Mutshewa, A. (2010) The use of information by environmental planners: a qualitative study using grounded theory, Information Processing and Management, 46(2), 212-32.
Makri, S. and Warwick, C. (2010) Information for inspiration: understanding architects' information seeking and use behaviors to inform design, Journal of the American Society for Information Science and Technology, 61(9), 1745-70.

Ethnographic and phenomenographic approaches

Prigoda, E. and McKenzie, P. J. (2007) Purls of wisdom: a collectivist study of human information behaviour in a public library knitting group, Journal of Documentation, 63(1), 90-114.
Boon, S., Johnson, B. and Webber, S. (2007) A phenomenographic study of English faculty's conceptions of information literacy, Journal of Documentation, 63(2), 204-28.
Gross, M. and Latham, D. (2011) Experiences with and perceptions of information: a phenomenographic study of first-year college students, Library Quarterly, 81(2), 161-86.

Service evaluation

Robinson, L.
and Bawden, D. (2007) Evaluation of outreach services for primary care and mental health: assessing the impact, Health Information and Libraries Journal, 24(s1), 57-66.
Botha, E. et al. (2009) Evaluating the impact of a special library and information service, Journal of Librarianship and Information Science, 41(2), 108-23.
Bawden, D., Calvert, A., Robinson, L., Urquhart, C., Bray, C. and Amosford, J. (2010) Understanding our value: assessing the nature of the impact of library services, Library and Information Research, 33(105), available from http://www.lirg.org.uk/lir/ojs/index.php/lir.

System and service design
for finding and using information: user interface design recommendations from a user study, Information Processing and Management, 43(1), 10-29.
Ahmed, S. M. Z., McKnight, C. and Oppenheim, C. (2006) A user-centred design and evaluation of IR interfaces, Journal of Librarianship and Information Science, 38(3), 157-72.
Westbrook, L. (2009) Unanswerable questions at the IPL: user expectations of e-mail reference, Journal of Documentation, 65(3), 367-95.

Research methods - desk research
The term 'desk research' covers the varied forms of research carried out by some kind of analysis of documents. This is a part of all research, in the form of a literature search to establish the context and to identify previous relevant work. It may also provide the methods for studies in their own right, which are just as valid research as any other, inasmuch as they have the potential to provide new knowledge and insights. Such studies will themselves be preceded by a literature review. See the section on 'Finding and evaluating research' below for more on the process. Desk research is sometimes referred to as 'literature research' or, increasingly, 'internet research', although both these terms are too narrow to cover all forms of desk research.
Several styles can be distinguished, although the distinctions are not sharp, and the terms are often loosely used; some studies cross the boundaries between styles.

Literature reviews are the most common form of desk research, and may form a study in their own right as well as being a precursor to other research methods. They may be designated as comprehensive or selective, according to whether an attempt is made to cover all relevant material or only a subset which the reviewer finds significant. A systematic review, more common in subjects such as healthcare than library and information science, carefully defines and justifies the sources, search strategies and relevance criteria to be used, before material is identified. A review may be objective, in simply reporting what is in the literature, or subjective, in that the reviewer gives a judgement on the quality of the material and its content; the latter may be called a critical review. Reviews focused on an emergent or developing technology, and its prospects, may be termed technology assessments.

Meta-analysis and meta-synthesis are specific forms of literature review, in which a number of sources, typically individual research reports, are combined together, hopefully giving a more reliable and informative result than from any of the sources considered alone (Fink, 2010; Urquhart, 2010; Saxton, 2006). Meta-analysis deals with quantitative data by statistical analysis. It is difficult to apply to the literature of the information sciences; some papers are unhelpfully termed meta-analyses, when they are in fact 'only' qualitative, or semi-quantitative, literature reviews. Meta-synthesis is an equivalent process, but aimed at combining the results of qualitative studies.

Conceptual or philosophical analysis, perhaps the most theoretical form of desk research, sets out to analyse and clarify terms, concepts and issues within the information sciences.
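The statistical pooling that distinguishes a true meta-analysis from a narrative review can be illustrated in a few lines of code. The sketch below uses the common fixed-effect, inverse-variance approach; the function name is ours, and the effect sizes and standard errors are invented numbers for illustration, not results from any real study.

```python
# A minimal sketch of fixed-effect, inverse-variance meta-analysis:
# each study's effect size is weighted by the inverse of its variance,
# so more precise studies count for more in the pooled result.
import math

def fixed_effect_meta(effects, std_errors):
    """Pool study effect sizes, weighting each by 1/variance."""
    weights = [1.0 / (se ** 2) for se in std_errors]
    total_w = sum(weights)
    pooled = sum(w * e for w, e in zip(weights, effects)) / total_w
    pooled_se = math.sqrt(1.0 / total_w)  # smaller than any single study's SE
    return pooled, pooled_se

# Three hypothetical studies of the same intervention
# (e.g. standardized mean differences):
effects = [0.30, 0.45, 0.25]
std_errors = [0.10, 0.15, 0.08]

pooled, pooled_se = fixed_effect_meta(effects, std_errors)
print(f"pooled effect = {pooled:.3f}, SE = {pooled_se:.3f}")
# → pooled effect = 0.296, SE = 0.058
```

The point of the combination is visible in the output: the pooled standard error is smaller than that of any individual study, which is exactly the 'more reliable result' the text describes.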
Historical analysis analyses the historical development of issues within the library and information disciplines and professions. The most common form of this has been the description of the development of libraries and information institutions. These have been joined by analyses of the development of systems, services, processes and concepts, and by studies of information phenomena in society.

Content analysis is a diverse form of analysis, always involving some kind of quantitative assessment of the content of a set of documents, to assess the extent to which concepts and issues are mentioned, or not (White and Marsh, 2006). It may include a qualitative dimension, to record how concepts and issues are described.

Discourse analysis is a form of content analysis, focusing on 'discourse': the way in which spoken or written language is used (Budd, 2006). It is used to analyse, often in detail, the way in which concepts and issues are mentioned, and what this shows about how they are understood.

Bibliometrics and webliometrics are quantitative methods for describing and analysing patterns of recorded communication. They are used to carry out research into such topics as: size and growth of information within disciplines; significant sources, authors, institutions and countries; linkages and influence between information producers; and changes in communication patterns. Results may be presented as simple counts or graphs, or complex maps. These methods have been discussed in more detail in the chapter dealing with informetrics.

The following list gives a selection of information research articles based on various forms of desk research.

Examples of desk research methods in information research
Please note that these are not recommended as 'best practice', simply as recent relevant examples.

Literature analysis
Davies, K. (2007) The information-seeking behaviour of doctors: a review of the evidence, Health Information and Libraries Journal, 24(2), 78-94.
Liew, C. L.
(2009) Digital library research 1997-2007: organisational and people issues, Journal of Documentation, 65(2), 245-66.
information behaviour in assigned learning tasks, Journal of Documentation, 64(6), 893-914.

Meta-analysis
Aabo, S. (2009) Libraries and return on investment (ROI): a meta-analysis, New Library World, 110(7-8), 311-24.
Julien, C., Leide, J. E. and Bouthillier, F. (2008) Controlled user evaluations of information visualization interfaces for text retrieval: literature review and meta-analysis, Journal of the American Society for Information Science and Technology, 59(6), 1012-24.
Webb, T. L. et al. (2010) Using the Internet to promote health behavior change: a systematic review and meta-analysis of the impact of theoretical basis, use of behaviour change techniques, and mode of delivery on efficacy, Journal of Medical Internet Research, 12(1), available from http://www.jmir.org/2010/1/e4/.

Content analysis
Cummins, J. and Bawden, D. (2010) Accounting for information: information and knowledge in the annual reports of FTSE 100 companies, Journal of Information Science, 36(3), 283-305.
Park, J., Lu, C. and Marion, L. (2009) Cataloging professionals in the digital environment: a content analysis of job descriptions, Journal of the American Society for Information Science and Technology, 60(4), 844-57.
Manzuch, Z. (2009) Archives, libraries and museums as communicators of memory in the European Union projects, Information Research, 14(2), paper 400, available from http://informationr.net/ir/14-2/paper400.html.

Discourse analysis
Kouper, I. (2010) Information about the synthesis of life-forms: a document-oriented approach, Journal of Documentation, 66(3), 348-69.
Haider, J. and Bawden, D. (2007) Conceptions of 'information poverty' in LIS: a discourse analysis, Journal of Documentation, 63(4), 534-57.
Foster, J.
(2009) Understanding interaction in information seeking and use as a discourse: a dialogic approach, Journal of Documentation, 65(1), 83-105.

Philosophical/conceptual analysis
Robinson, L. and Maguire, M. (2010) The rhizome and the tree: changing metaphors for information organisation, Journal of Documentation, 66(4), 604-13.
Thornely, C. and Gibb, F. (2009) Meaning in philosophy and meaning in information retrieval (IR), Journal of Documentation, 65(1), 133-50.

Historical analysis
Weller, T. and Bawden, D. (2005) The social and technological origins of the information society: an analysis of the crisis of control in England, 1830-1890, Journal of Documentation, 61(6), 777-802.
Bowman, J. H. (2006) The development of description in cataloguing prior to ISBD, Aslib Proceedings, 58(1-2), 34-48.
Muddiman, D. (2005) A new history of ASLIB, 1924-1959, Journal of Documentation, 61(3), 402-28.

Bibliometrics
Frandsen, T. F. (2009) Attracted to open access journals: a bibliometric author analysis in the field of biology, Journal of Documentation, 65(1), 58-82.
Robinson, L. (2007) Impact of digital information resources in the toxicology literature, Aslib Proceedings, 59(4-5), 342-51.
Ding, Y. (2010) Semantic web: who is who in the field - a bibliometric analysis, Journal of Information Science, 36(3), 335-56.

Having briefly examined the main methods of information research, we will look at some more general issues: sampling; research ethics; and identifying and evaluating research findings.

Sampling
'Sampling' is the procedure by which we choose a selection of entities (people, organizations, documents, etc.) to study, when there are too many possibilities for us to study them all.
The aim of sampling is to choose them so that they are representative of the larger population from which they are drawn, so that the results obtained have a more general validity beyond the particular entities which were studied.

The issue of sampling very often arises in information research, regardless of the method used. It is often thought of in the context of surveys - how should we choose our participants? - but is equally applicable in, for example, the choice of documents for content or discourse analysis, or the choice of place and time for observations. It raises two questions: what kind of sample is to be used, and how to know when it is enough. These issues often cause angst to novice researchers. This is a very complex subject, and only some simple points are made here. The research methods textbooks cover it more thoroughly; Denscombe (2010) gives a particularly clear account.

Four general kinds of sample may be taken:

1 A complete sample, where we include all examples of the population. For instance, if we were studying archiving policies in local government in London, we might choose to interview the archivist in each of the London boroughs.
2 A random sample of our population. This has to be done formally, typically using tables of pseudo-random numbers. Many samples described as 'random' are no such thing, having simply been picked arbitrarily.
3 A purposive sample, chosen to include examples of different types within our population. For example, to get a sample of users of a website, we might choose to select by age, gender, nationality, education level, place of residence, occupation, etc. Having chosen these groups, we might then select individuals within them randomly. This is a complex process, and needs to be done carefully according to the nature of the research.
4 A convenience sample, where a set of appropriate participants is selected because they are available: family, friends, fellow students, regular users of an information service, etc. This may be an appropriate method for small-scale and exploratory studies, provided that (a) it is declared as such, and (b) no attempt is made to generalize the results, or to claim statistical validity; both of the latter would need a more formal sampling process.

The question 'how big a sample do I need?' is very often asked, but is difficult to answer exactly. A numerical answer can only be given in rather specific cases: when we are comparing two circumstances, know the magnitude of the difference we are looking for and the level of statistical confidence we require for the answer, and are able to make assumptions about the underlying distributions of the variables. For example, suppose our study set out to decide whether students who use the library get better grades. If we were able to say that by 'better' we meant a difference of 5%, that we would take the question as settled if the difference we saw would occur by chance only once in a hundred times, and that we knew that the distributions of marks and of use of the library both followed certain statistical distributions, then we would be able to calculate that a sample of so many students would be enough. Denscombe (2010) gives a clear account of these issues.

In most information research, we know little about the underlying distributions, and have no particular reason to fix the other parameters. We have to judge whether the sample we have allows us to give a reasonably confident answer to our research questions. Usually this means two things: does my sample include all groups of the population I am studying, and do I have enough cases in each group? These questions are answered by common sense in each case; usually, in small-scale studies, by doing as many cases as is feasible and - in the case of qualitative studies - stopping when no new information is being found.
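Two of the points above can be sketched in code: drawing a formally random sample (seeded and reproducible, rather than picked arbitrarily), and the kind of sample-size calculation described for the library-and-grades example. The population names, the standard deviation of marks (sigma) and the 80% power level are our added assumptions, not from the text; the calculation uses the standard two-group normal-approximation formula n per group ≈ 2(z_alpha + z_beta)²(sigma/delta)².

```python
# A minimal sketch, under the stated assumptions, of (1) a formal
# random sample and (2) a two-group sample-size calculation.
import math
import random

# (1) Random sample: 5 participants from a population of 50.
# A fixed seed makes the draw reproducible; an unseeded generator
# would also be a valid random sample, unlike an arbitrary pick.
population = [f"participant_{i}" for i in range(50)]
rng = random.Random(42)
sample = rng.sample(population, 5)
print(sample)

# (2) Sample size for the library-use example in the text: detect a
# 5-mark difference (delta), assuming marks have a standard deviation
# of 10 (sigma, an assumption), at a two-sided significance level of
# 1 in 100 (z = 2.576), with 80% power (z = 0.842, also an assumption).
z_alpha, z_beta = 2.576, 0.842
sigma, delta = 10.0, 5.0
n_per_group = math.ceil(2 * (z_alpha + z_beta) ** 2 * (sigma / delta) ** 2)
print(f"need about {n_per_group} students per group")
# → need about 94 students per group
```

Note how sensitive the answer is to the assumptions: halving sigma, or quadrupling delta, cuts the required sample by a factor of four, which is why such calculations are only meaningful when the underlying distributions are reasonably well known.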
There are a number of 'rules of thumb' here. Lawal (2009), for example, suggests that a sample of 100 items or people is enough; if groups are being compared, then 30 in each group. Another often-quoted 'rule' is that adequate insight into a topic is gained from 5 in-depth interviews or 50 questionnaire responses. These 'rules' are based purely on 'custom and practice', and common sense, rather than any methodological validity.

Information research ethics
Information ethics in general were discussed in Chapter 11, and the same principles hold good here. For information research, ethical issues and dilemmas will be much the same as for research in similar social and computing disciplines. Two standard texts of information research methods (Pickard, 2007 and Powell, 2006) mention as the main issues:

• gaining access - being open and honest about the purpose and nature of the research, who is funding it, how the results will be disseminated, etc.
• informed consent - making sure that everyone being studied in any way is aware of the research, and has the chance to refuse to take part, and to withdraw from the study
• ensuring anonymity and/or confidentiality of results, as necessary
• protecting participants' rights - particularly for 'vulnerable' people, such as children or sick people
• online and internet research - particularly with its possibilities for anonymous observation and interaction
• researcher integrity - for example, observing relevant codes of conduct, and collecting only those data which are really needed for the research.

Finding and evaluating research
Research does not consist of a collection of isolated studies and findings, but is a process of cumulative growth of knowledge, each study building on and extending earlier work. An essential first step is therefore to identify and evaluate previous studies. This is the process generally referred to as literature searching or reviewing.

Pickard, A. J. (2007) Research methods in information, London: Facet Publishing.
Powell, R. R.
(2006) Evaluation research: an overview, Library Trends, 55(1).
Powell, R. R., Baker, L. M. and Mika, J. J. (2002) Library and information science practitioners and research, Library and Information Science Research, 24(1).
Ridley, D. (2008) The literature review: a step-by-step guide for students, London: Sage.
Rumsey, S. (2008) How to find information: a guide for researchers (2nd edn), Maidenhead: Open University Press.
Saxton, M. L. (2006) Meta-analysis in library and information science: method, history and recommendations for reporting research, Library Trends, 55(1), 158-70.
Tan, J. (2010) Grounded theory in practice: issues and discussion for new qualitative researchers, Journal of Documentation, 66(1), 93-112.
Urquhart, C. (2010) Systematic reviewing, meta-analysis and meta-synthesis for evidence-based library and information science, Information Research, 15(3), colis708, available from http://informationr.net/ir/15-3/colis7/colis708.html.
White, M. D. and Marsh, E. E. (2006) Content analysis: a flexible methodology, Library Trends, 55(1), 22-45.
Wilson, T. (2008) The information user: past, present and future, Journal of Information Science, 34(4), 457-64.

CHAPTER 15
The future of the information sciences

Prediction is very difficult, especially about the future.
Robert Storm Petersen, Danish poet and philosopher - also attributed to the physicist Niels Bohr

If we have learned one thing from the history of invention and discovery, it is that, in the long run and often in the short one, the most daring prophecies seem laughably conservative.
Sir Arthur C. Clarke

Librarianship has become preoccupied, perhaps to a point of obsession, with its own future. There seems to be a growing sense that change is now moving at such a rate that steering may have ceased to be an option.
Ross Atkinson (2001, 3)

Neither a wise man nor a brave man lies down on the tracks of history to wait for the train of the future to run over him.
Dwight D.
Eisenhower

Introduction
In this final chapter, we give an overview of some ideas about the discipline and profession of information science. As the opening quotation from Ross Atkinson suggests, some of the information professions are very concerned about this, seeing signs of their own demise, overwhelmed by changing technical and social environments.

This is by no means a new concern. During the 1970s, Dennis Lewis, an information manager in the British chemical industry who later headed the professional association ASLIB, became well known for propounding the idea that 'there won't be an information profession by the year 2000' (Lewis, 1980). This became known as the 'Doomsday Scenario', and Lewis rather revelled in the name of 'Doomsday Den'.