Bridging the policy/ research divide: Reflections and lessons from the UK

Keynote paper presented at "Facing the Future: Engaging stakeholders and citizens in developing public policy", National Institute of Governance Conference, Canberra, Australia, 23/24 April 2003.

Sandra Nutley
Reader in Public Policy and Management and Director, Research Unit for Research Utilisation
Department of Management, University of St Andrews
St. Katherine's West, The Scores, St Andrews KY16 9AL, UK
Website: www.st-and.ac.uk/~ruru
Email: smn@st-and.ac.uk

The Research Unit for Research Utilisation is a member of the ESRC Network for Evidence-based Policy and Practice, which is co-ordinated by the UK Centre for Evidence-Based Policy and Practice, Queen Mary, University of London.

Abstract

This paper draws upon data from the UK to argue that there is the potential for policy decisions to be better informed by research evidence than has hitherto been the case. This requires an investment in research, some rethinking of policy processes, and the development of mechanisms for bringing research and policy closer to one another.

There has been a significant increase in social research funding in the UK. This has been accompanied by exercises to: identify and plug key gaps in research knowledge; agree and develop appropriate research and evaluation methods; and increase the use of systematic review methods to assist the process of knowledge synthesis and accumulation. All of these initiatives are aimed at improving the evidence base for policy and practice decisions.

The modernising government agenda in the UK argues that policy making should be based on the best available evidence and should include rational analysis of the evidence about what works. While this is a laudable aim, research evidence does not always, or even often, enter the policy process as part of a rational consideration of policy options. Instead, research tends to become known and discussed within policy networks through a process of advocacy. This suggests that other aspects of the modernising government agenda which seek to open up policy processes, to make them more consultative and inclusive of stakeholder interests, are likely to be a more powerful vehicle for increasing research impact.

The implications for mechanisms to bridge the policy/ research divide are that many bridges are needed to link researchers with relevant policy and practice networks; government ministers and officials are not the only policy audience. Intermediary bodies (such as the Social Care Institute for Excellence in the UK) can play a key role in disseminating and promoting the uptake of research in both the policy and practice fields. Furthermore, there appears to be much to be gained from developing sustained interactions between researchers and research users through the development of partnership arrangements. Where partnerships operate throughout the research process, from the definition of the problem to the application of findings, they appear to increase both the quality of research and its impact.

Overall, it is easy to be cynical about the prospects for more evidence-based policy making: research rarely provides definitive answers to policy questions and rational decision making rarely lies at the heart of policy processes. However, this paper argues that neither definitive research evidence nor rational decision making is an essential requirement for the development of more evidence-informed policy.
Acknowledgements

This paper draws upon joint work undertaken within the Research Unit for Research Utilisation (RURU) at the University of St Andrews. I am indebted to my colleagues, Huw Davies and Isabel Walter, for their contributions to this work. I am also grateful for their comments on this paper. This work was supported by the Economic and Social Research Council (ESRC; Award Reference: H141251001), for which we are duly grateful. However, the views expressed are those of the authors and should not be attributed to any other agency or individual.

Bridging the policy/ research divide

Good government is thinking government…rational thought is impossible without good evidence….social science research is central to the development and evaluation of policy (David Blunkett, UK Minister for Education, 2000, p 4)

Introduction

The potential for scientific knowledge to change and improve the world has been an enduring theme within debates about good governance since the period of the Enlightenment in the seventeenth century. Hence there is nothing entirely new in the present widespread interest in ensuring that the best available evidence underpins policy decisions. Yet it can be argued that the current momentum and level of activity associated with the idea of 'evidence-based policy' is unprecedented, certainly in the UK (Davies et al 2000; Wyatt 2002; Young et al 2002).

It was the landslide election of the Labour government in 1997, subsequently returned with a substantial majority in 2001, that fuelled the present preoccupation with evidence-based policy making in the UK. When setting out the government's modernising agenda, Tony Blair declared that 'what counts is what works'. This was intended to signal the end of ideologically driven politics and herald a new age, where policy making would be driven by evidence (particularly research evidence) of what was proven to be effective in addressing social problems and achieving desired outcomes. A range of ambitious initiatives followed; those unfamiliar with developments in the UK will find an overview of two generic initiatives and a summary of several sector-specific developments in the annex to this paper.

Given the inherently political nature of the policy making process, it is hardly surprising that verdicts are at best mixed about the extent to which policy development in the UK since 1997 has been based on the best available evidence. Supporters are able to point to areas of policy where research evidence does seem to have played a significant role in shaping policy development (Eisenstadt 2000; Sebba 2000). Critics, on the other hand, have focused not only on examples of policy development which run counter to the best available evidence, but also on what are considered to be the dual follies of assuming, firstly, that research can provide objective answers to policy questions and, secondly, that policy making can become a more rational process (Clarence 2002; Parsons 2002). Between these two extremes there are many commentators, such as myself, who eschew notions of evidence-based policy making which assume that policy decisions will be largely determined by research evidence. Nevertheless, they (and I) argue that there is the potential for policy decisions to be better informed by available evidence than has often hitherto been the case. The task, then, is to analyse the conditions that facilitate evidence-informed policy making (Sanderson 2002; Young et al 2002).
While evidence-informed or even evidence-aware policy would be a better description of the aspirations for the role of research in the policy making process, the term evidence-based policy has become part of the lexicon of academics, policy people, practitioners and even client groups (Wyatt 2002). The term will therefore inevitably recur throughout this paper. Readers are requested to treat it as convenient shorthand for a process where evidence does and should compete with other forms of knowledge and other interests, rather than imbue it with more deterministic qualities.

I have already alluded to the fact that research is only one form of knowledge with the potential to provide evidence to inform policy development. Knowledge is needed about the nature of social problems, what interventions appear to address these problems, how potentially effective interventions can be implemented, and who needs to be involved in this process. Inevitably, knowledge about some of these issues is based more on tacit understandings than on evidence derived from systematic investigations. So it is no surprise that the UK Cabinet Office works with a broad and eclectic definition of evidence:

Expert knowledge; published research; existing statistics; stakeholder consultations; previous policy evaluations; the Internet; outcomes from consultations; costings of policy options; output from economic and statistical modelling… There is a great deal of critical evidence held in the minds of both front-line staff … and those to whom policy is directed (SPMT 1999)

This paper considers the factors which appear to hinder or facilitate the use of just one type of knowledge: research evidence, particularly that arising from social research. A broader consideration of other types of knowledge can be found elsewhere (Nutley et al 2003).

Discussions about the extent of research utilisation within the policy-making process are concerned predominantly with the apparent under-use of much research and how this might be addressed (Bogenschneider et al 2000; Weiss 1999). The resulting explanations can be grouped under three main headings:

• Problems relating to research
• Problems relating to the policy-making process
• Problems with the interactions between these two worlds.

The title of this paper might suggest an interest in only the last of these areas: the gap between the research and policy worlds and how this might be bridged. However, the construction of a good bridge needs to consider the suitability of the land on either side as well as the technology for spanning the space in between. Hence, drawing on this analogy, a broad survey of the lie of the land is provided before considering possible bridge constructions. Each of the three areas of explanation is discussed in turn. In doing so, the paper not only elaborates on the problems they reveal but also outlines the key strategies used to address these problems and the emerging evidence about their effectiveness.

A variety of data underpin the discussion and analysis which follows. These include two primary research studies on the implementation of evidence-based policy (Nutley et al 2002a; Homel et al 2003); several cross-sector conceptual and empirical overviews of evidence-based policy in the UK (Davies et al 1999; Davies et al 2000; Nutley et al 2000); and a systematic literature review of models of research impact (Walter et al 2003). These data are drawn largely, but not exclusively, from experiences in the UK.
The limitations of research evidence

Whichever part of the public sector one is concerned with, one observation is clear: the current state of research-based knowledge is insufficient to inform many areas of policy. There remain large gaps and ambiguities in the knowledge base, and the research literature is dominated by small, ad hoc studies, often diverse in approach and of dubious methodological quality (see, for example, the review of schools' research in the UK by Hillage et al 1998). In consequence, there is little accumulation from this research of a robust knowledge base on which policy makers can draw. Furthermore, additions to the research literature are more usually producer driven than led by research users' needs.

The response to these problems in the UK has been threefold: establishing mechanisms for identifying and plugging key gaps in research knowledge; improving research and evaluation methods; and promoting the use of systematic review methods to assist the process of knowledge synthesis and accumulation.

Identifying and plugging key gaps in research knowledge

In many public policy areas there has been a concerted effort to ensure that the balance of research carried out is relevant to the needs of the policy community. Government departments have generally taken the lead in developing research strategies for specific policy areas. These not only seek to structure the research that is funded directly by government but also aim to influence research funded by non-governmental bodies. For example, in the UK the Department for Education and Skills has played a lead role in establishing the National Educational Research Forum (NERF), which brings together researchers, funders and users of research evidence. NERF has been charged with responsibility for developing a strategic framework for research in education, including: identifying research priorities, building research capacity, co-ordinating research funding, establishing criteria for the quality of research and considering how to improve the impact of research.

In bringing these ideas to action, much of the emphasis has been on changing the way in which external research (often carried out in universities) is commissioned and funded. Research councils, such as the Economic and Social Research Council (ESRC), have developed thematic priorities to focus research on scientific and national priorities. Government departments have also adjusted their research commissioning processes to facilitate the provision of timely and relevant research findings. Furthermore, in some policy areas, such as criminal justice, the focus has been as much on developing in-house research capacity (in the Home Office) as on shaping the external research agenda.

A key question is whether these attempts at identifying and plugging existing knowledge gaps have been successful. As this is a relatively new departure for most policy areas, other than health care, it is too early to give a definitive answer to this question. There has been an overall increase in the level of funding for social research (for example, the ESRC budget is due to increase from GBP 72m in 2001/2 to GBP 92m in 2003/4), and within this overall total there is a greater proportion of policy-focused research than hitherto. The number of social researchers employed by government has more than doubled since 1997 and new social research units have been established in departments which had no previous history of social research (GSR 2002).
This growth has exposed the shortage of suitably qualified researchers in the UK but overall the prospects for a more strategically focused evidence base are encouraging. However, there are concerns that this may be a dangerous course for social science research in the UK, particularly if research priorities become too closely allied to political priorities (Walker 2000).

Improving research and evaluation methods

One of the aims of many sector-specific research strategies is to establish criteria for good quality research as the first stage in addressing the perception that much research (particularly social science research) is of dubious quality. Most quality criteria depend in large measure on identifying fitness for purpose. Research may be directed at addressing a variety of purposes or questions. This paper focuses on one specific purpose: assessing what works. Readers interested in the quality assessment of other types of research or, indeed, in assessing the quality of other forms of evidence, will find much of interest in the project outlined in Box 1.

One of the key debates about how to assess what works is the relative merit of randomised experimental methodologies as opposed to theory-based evaluations. These debates are summarised elsewhere (Davies et al 2000) and it is not my intention to spend much time going over this well-trodden ground. Suffice to say that different policy areas have come to different conclusions about the best methodologies for assessing what works (see Box 2).

Alongside the issue of whether randomised experiments are preferable to theory-based evaluation methods is the much bigger question of whether either of these approaches is capable of delivering definitive evidence about what works. Robustness is generally judged in terms of internal persuasiveness and external applicability. While both randomised and theory-based approaches have much to offer, they both face limitations on one or other of these counts. Randomised experiments can answer the pragmatic question of whether intervention A provided, in aggregate, better outcomes than intervention B in the sampled population. However, such experiments do not answer the more testing question of whether and what aspects of interventions are causally responsible for a prescribed set of outcomes. This may not matter if interventions occur in stable settings where human agency plays a small part (as is the case for some medical technologies) but in other circumstances there are dangers in generalising from experimental to other contexts. Theory-based evaluation methods often seem to hold more promise, because of the way in which they seek to unravel the causal mechanisms that make interventions effective in context, but they too face limitations. This is because theory-based evaluation presents significant challenges in terms of articulating theoretical assumptions and hypotheses, measuring changes and effects, and developing appropriate tests of assumptions and hypotheses (Weiss 1995). Hence it is wise to remain cautious about what can be achieved, as the following extended quote from one of the doyens of evaluation methodology indicates:

In my most optimistic moments, I succumb to the notion that evaluations may be able to pin down which links in which theories are generally supported by evidence and that program designers can make use of such understanding in modifying current programs and planning new ones… Such hopes are no doubt too sunny.
Given the astronomical variety of implementation of even one basic program model, the variety of staffs, clients, organizational contexts, social and political environments, and funding levels, any hope for deriving generalizable findings is romantic. (Weiss 2000, p 44)

So, despite a lot of valuable effort to improve research and evaluation methods, research will rarely provide definitive answers, especially when the questions are about what works in tackling complex social problems. With the large increases in funding for social research come increased expectations about the contributions that research evidence can make to improving the success of policy interventions. Some of these expectations are likely to prove unrealistic and need to be managed accordingly.

Box 1: Types and quality of knowledge in social care

In 2002 the Social Care Institute for Excellence (SCIE) commissioned a research project to:
• identify and classify types of social care knowledge (Stage 1), and
• develop ways of assessing their quality that will be acceptable to a wide spectrum of sometimes conflicting opinion (Stage 2).

Stage 1 of the research project recommends a classification system based on five different sources of social care knowledge: organisational knowledge, practitioner knowledge, user knowledge, research knowledge and policy community knowledge.

Stage 2 of the project has yet to be completed, but a provisional quality assessment framework has been proposed, based on six generic quality standards and commentaries on existing or potential standards for each of the five knowledge sources identified in Stage 1.

The project involved significant participation by service users, practitioners and other experts in the production of the classification framework and the quality standards. For further information consult the EvidenceNetwork website (http://www.evidencenetwork.org) or email the project co-ordinator: Annette Boaz (a.l.boaz@qmul.ac.uk).

Box 2: Different sector approaches to assessing what works

Health care has an established 'hierarchy of evidence' for assessing what works. This places randomised experiments (or, even better, systematic reviews of these) at the apex. This explicit ranking has arisen for two reasons. First, the fact that what counts as 'desired outcomes' is readily understood (reductions in mortality and morbidity; improvements in quality of life) greatly simplifies the methodological choices. Second, the explicit methodological hierarchy is based on empirical research which suggests that biased conclusions may be drawn about treatment effectiveness from less methodologically rigorous approaches (Schulz et al 1995; Kunz and Oxman 1998; Moher et al 1998).

Sector areas such as education, criminal justice and social care are riven with disputes as to what constitutes appropriate evidence about what works; there is relatively little experimentation (especially compared with health care); and divisions between qualitative and quantitative paradigms run deep (Davies et al 2000). Knowledge of 'what works' tends to be largely provisional and highly context dependent. This has resulted in the popularity of theory-based evaluation methods in many social policy areas, particularly drawing upon the realist evaluation approach (Pawson and Tilley 1997) with its mantra of 'what works, for whom, in what circumstances, and why'.
Source: Nutley et al 2002b

Promoting the use of systematic review techniques

Given the widely dispersed nature of the existing evidence base, a key concern has been to establish appropriate methods for synthesising and drawing conclusions about the state of knowledge in a particular area. In the UK much of the activity built around supporting the evidence-based policy agenda has focused on developing and implementing systematic review techniques. Such reviews seek to identify all existing research studies relevant to a given evaluation issue, assess these for methodological quality and produce a synthesis based on the studies selected as relevant and robust (see Box 3; a worked illustration of the pooling step is sketched at the end of this section).

Box 3: Key features of systematic reviews

A distinctive feature of systematic reviews is that they are carried out to a set of pre-agreed standards. The main standards are as follows:
1. Focusing on answering a specific question (or questions)
2. Using protocols to guide the review process
3. Seeking to identify as much of the relevant research as possible
4. Appraising the quality of the research included in the review
5. Synthesising the research findings in the studies included
6. Updating in order to remain relevant.

Source: Boaz et al 2002

There has been a significant increase in the level of systematic review activity in the UK over recent years, not only in health care but also in other social policy areas. While many see this as a key breakthrough in moving towards a robust and cumulative knowledge base (Chalmers and Altman 1995; Fitz-Gibbon 2002), others express disappointment at the findings of many reviews (Young et al 2002). Too often, it seems, expensive reviews conclude that there are very few robust studies relevant to the issue under consideration and that it is not possible to draw firm conclusions. There is also concern that systematic reviews, developed in the context of judging the effectiveness of medical interventions, sometimes inappropriately privilege studies based on experimental methodologies. While systematic review methods can be used to synthesise non-experimental data, the approach needs to be developed in order to establish successful ways of incorporating a broader range of research into reviews and to enable the review of complex issues, interventions and outcomes. One of the aims of the ESRC's EvidenceNetwork is to contribute to methodological developments on these issues (Boaz et al 2002). However, a barrier to review effort is the significant cost involved in undertaking a systematic review – estimated at around GBP 51,000 per review (Gough 2001).

Furthermore, just as debates have raged about whether randomised experimentation or theory-based evaluation provides the best methodology for assessing what works, so a parallel debate is emerging about review methodologies. The predominant approach in the UK has been the healthcare-derived systematic review of experimental studies. However, more recently, the ideas of theory-based evaluation have been extrapolated to review methods, resulting in the formulation of a realist approach to synthesis (Pawson 2001a and b).

Overall, then, research evidence has its difficulties and limitations. As not all of these can be addressed by further investment in improving the evidence base, expectations need to be managed accordingly. A key challenge is how to explain what research can and cannot do without creating a sense of disillusionment. For this, we need to understand what is expected of research and how it is used in the policy process.
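To make the synthesis standard in Box 3 concrete, consider the pooling step used in conventional quantitative reviews. The following is a generic, illustrative formulation of one standard technique, not a description of any particular review discussed above. Where the included studies report comparable effect estimates, a fixed-effect synthesis weights each study by the inverse of its variance, so that more precise studies count for more:

$$
\hat{\theta}_{pooled} = \frac{\sum_{i=1}^{k} w_i\,\hat{\theta}_i}{\sum_{i=1}^{k} w_i},
\qquad
w_i = \frac{1}{SE(\hat{\theta}_i)^2},
\qquad
SE(\hat{\theta}_{pooled}) = \frac{1}{\sqrt{\sum_{i=1}^{k} w_i}}
$$

where $\hat{\theta}_i$ and $SE(\hat{\theta}_i)$ are the effect estimate and standard error from the $i$-th of the $k$ studies judged relevant and robust. Note that the fixed-effect model assumes all included studies estimate a single common effect; where effects are context dependent, as is typical in social policy, that assumption is precisely what is contested, which is one reason why approaches such as realist synthesis (Pawson 2001a and b) have attracted interest.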
The limitations of the policy process

Analyses of the extent to which government departments are organised to make good use of research evidence in the policy process often do not make for an edifying read:

Recent work by the Council for Science and Technology found that no department was really organised to make the best possible use of science and technology either in delivering its immediate objectives or in formulating its strategy for the long term. Our evidence suggests that the same is true of social and economic research (Cabinet Office 1999, para 7.6)

Critics of the whole evidence-based policy movement argue that this will always be the norm and that the one constant over time is the inherent messiness of the policy process (Parsons 2002). While such messiness leads governments to seek to impose a veneer of order and control on the process, a critical eye would see the evidence-based policy movement as being no more substantial than this (Clarence 2002). Others are less sceptical about government moves to ensure that policy is better informed by the best available evidence:

…the political world has become a place where ideological certainties have been undermined, the world's problems seem more complex and solutions more uncertain. In this context evidence could be a bigger player than in the previous era of conviction-driven, ideological politics. (Stoker 1999, p1)

What both critics and supporters tend to agree upon is that attempts to improve the policy process to make it more receptive to research evidence tend to focus on introducing more instrumental rationality into the process. However, this is not the only way to encourage research use and in this section, after considering some of the 'rationalising' initiatives, we go on to outline other approaches which run with, rather than against, the political grain of a messy policy process.

Making the policy process more rational

Recent government reports aimed at improving the process by which policy is made set out a number of recommendations for increasing evidence use (see Box 4). These include mechanisms to increase the 'pull' for evidence, such as requiring spending bids to be supported by an analysis of the existing evidence base, and mechanisms to facilitate evidence use, such as integrating analytical staff at all stages of the policy development process.

Box 4: Encouraging better use of evidence in policy making

Increasing the pull for evidence:
• Require the publication of the evidence base for policy decisions
• Require departmental spending bids to provide a supporting evidence base
• Submit government analysis (such as forecasting models) to external expert scrutiny
• Provide open access to information – leading to more informed citizens and pressure groups.

Facilitating better evidence use:
• Encourage better collaboration across internal analytical services (e.g. researchers, statisticians and economists)
• Co-locate policy makers and internal analysts
• Integrate analytical staff at all stages of the policy development process
• Link R&D strategies to departmental business plans
• Cast external researchers more as partners than as contractors
• Second more university staff into government
• Train staff in evidence use.

Source: Abstracted from PIU 2000; Bullock et al 2001

Whether such initiatives have had the desired effect is debatable.
Although a UK government review, Better Policy Making (Bullock et al 2001), provides 130 examples of good practice from a diverse range of departments, initiatives and policy areas, these are not necessarily representative or evaluated. Instead, the examples aim to illustrate professional, interesting and innovative approaches to modernising policy making. The case for initiatives that introduce more instrumental rationality into the policy process is supported by other empirical evidence, which suggests that research use is encouraged by a climate of rationality (Weiss 1999; Nutley et al 2002a). However, there are many factors affecting the uptake of research (see Box 5) and most of these are not linked to rational decision making.

Box 5: Factors affecting the uptake of research

Attention is more likely to be paid to research findings when:
• The research is timely, the evidence is clear and relevant, and the methodology is relatively uncontested.
• The results support existing ideologies, and are convenient and uncontentious to the powerful.
• Policy makers believe in evidence as an important counterbalance to expert opinion, and act accordingly.
• The research findings have strong advocates.
• Research users are partners in the generation of evidence.
• The results are robust in implementation.
• Implementation is reversible if need be.

Source: Adapted and extended from Finch 1986; Rogers 1995; Weiss 1998

Attempts to increase research use by introducing more rational deliberation into the policy process tend to treat utilisation as something which is direct and instrumental, that is, the direct application of research results to a pending decision. There are, however, many ways in which research enters the policy-making process (see Box 6) and the instrumental (or problem-solving) model of utilisation is in fact quite rare (Weiss 1980, 1998; Finch 1986; Davies et al 2000). It is most likely where the research findings are non-controversial, require only limited change and will be implemented within a supportive environment: in other words, when they do not upset the status quo (Weiss 1998). Research utilisation as depicted by the last four models is the more common occurrence.

Box 6: Models of research utilisation

• The knowledge-driven model – research generates knowledge that impels action.
• The problem-solving (or engineering) model – involves the direct application of the results of a specific study to a pending decision.
• The interactive/ social interaction model – utilisation occurs as a result of a complex set of interactions between researchers and users which ensures that they are exposed to each other's worlds and needs.
• The enlightenment (or percolation) model – research is more likely to be used through the gradual 'sedimentation' of insight, theories, concepts and perspectives.
• The political model – research findings are ammunition in an adversarial system of policy making.
• The tactical model – research is used when there is pressure for action to be taken on an issue, and policy makers respond by announcing that they have commissioned a research study on the matter.

Source: Adapted from Weiss 1979

Working with the political grain of the policy process

Consideration of the policy process has so far focused on what happens in government and how this might be changed to encourage better use of research evidence.
However, broader analyses of the policy process have demonstrated the importance of looking beyond formal government institutions in order to consider policy networks: the patterns of formal and informal relationships that shape policy agendas and decision-making. One of the main ways by which research evidence becomes known and is discussed within policy networks is through the process of advocacy (Sabatier and Jenkins-Smith 1993). Around many major issues there have evolved groups of people who have long-term interests and knowledge in shaping policy. These interest groups are important purveyors of data and analysis: 'It is not done in the interests of knowledge, but as a side effect of advocacy' (Weiss 1987, p 278).

Concepts such as policy networks and policy communities highlight the potential for a different vision of how the policy process might be improved to encourage greater research use; one that is more focused on "democratising" the policy process as opposed to "modernising" it (Parsons 2002). Although research evidence tends to become a political weapon under such a scenario, 'when research is available to all participants in the policy process, research as political ammunition can be a worthy model of utilisation' (Weiss 1979). Recent developments in the provision of research evidence over the internet may encourage more open debates which are not confined to those operating in traditional expert domains. Similarly, the establishment of intermediary bodies (such as the National Institute for Clinical Excellence in the UK) to digest existing research evidence may facilitate the opening up of evidence-informed policy debates.

One scenario for the future is that initiatives that encourage consultation through policy action teams will widen the membership of policy networks. The involvement of wider interests in these teams is likely to set a different agenda and lead to a more practice-based view of policy options. The use of research evidence under such a scenario is likely to be diffuse, and researchers will be required to answer questions not only about what works, but also about how and why it works and, crucially, whether it matters to those involved. This is not an easy path to follow and, where it has been attempted (for example in the Welsh Assembly Government), the potential for tension between participative and evidence-based approaches has been noted (Quinn 2002).

The limited interaction between the research and policy worlds

Analyses of the lack of research utilisation have commented not only on the limitations of research and the apparent inhospitality of the policy making environment but also on the divergence of these two worlds. The research and policy worlds have different priorities, use different languages, operate to different timescales and are subject to very different reward systems. No wonder, then, that there is frequently talk of a communications gap and of the need to establish better connective mechanisms. The response to these problems has generally been twofold: improve communication between researchers and policy makers, and establish better institutional mechanisms to bridge the research/policy divide.

Improving communications

Much of the focus here has been on the ways in which researchers can improve how they communicate (disseminate) their findings.
Good practice guidelines abound (Box 7 provides one example) and these are further supported by evidence from a literature review of models to increase research impact (Walter et al 2003) which, amongst other things, considers the features of successful dissemination strategies. That review found that the provision of targeted materials can raise awareness of research findings and that seminars and workshops, which enable the discussion of findings, can encourage more direct use of research.

Box 7: Improving dissemination

Recommendations for research commissioners:
• Time research to deliver solutions at the right time to specific questions facing practitioners and policy-makers.
• Ensure relevance to the current policy agenda.
• Allocate dedicated dissemination and development resources within research funding.
• Include a clear dissemination strategy at the outset.
• Involve professional researchers in the commissioning process.
• Involve service users in the research process.
• Commission research reviews to synthesise and evaluate research.

Recommendations for researchers:
• Provide accessible summaries of research.
• Keep the research report brief and concise.
• Publish in journals or publications that are user friendly.
• Use language and styles of presentation that engage interest.
• Target material to the needs of the audience.
• Extract the policy and practice implications of research.
• Tailor dissemination events to the target audience and evaluate them.
• Use a combination of dissemination methods.
• Use the media.
• Be proactive and contact relevant policy and delivery agencies.
• Understand the external factors likely to affect the uptake of research.

Source: Abstracted from JRF 2000

Evidence from the same literature review (Walter et al 2003) also suggests that those who commission and fund research can encourage good dissemination. In a study of the Swiss National Research Council (Huberman 1993), 10% of funding was explicitly set aside for dissemination work outside the academic community, and this served to improve dissemination activities. Another study – of the Social Science and Humanities Research Council of Canada (Cousins and Simon 1996) – found that funding guidelines could encourage the development of partnerships between researchers and potential users and that these partnerships improved dissemination and research impact.

Overall, a striking feature of much of the literature on ways of improving the uptake of research is the common conclusion that the way forward should be to develop better, ongoing interaction between researchers and research users (Nutley et al 2002b). This echoes Huberman's (1987) call for 'sustained interactivity' between researchers and research users throughout the process of research, from the definition of the problem to the application of findings. Closer and more integrated working over prolonged periods would seem capable of fostering better cross-boundary understandings. But achieving this level of sustained interaction is not straightforward, and it raises some serious concerns about the independence and impartiality of research. Nonetheless, examples of the successful development of policy from suggestive evidence, where policy is then seen through to practice change and beneficial outcomes, often display an unusual degree of partnership working (see Box 8, below).
Building institutional bridges

Talk of sustained interaction inevitably leads to discussion of how this can be institutionalised within the policy process so that it becomes the rule rather than the exception. There have been a variety of responses to this challenge in the UK.

One line of response has been to use policy-making guidelines to encourage the early involvement of in-house and other researchers in the policy process. Anecdotal evidence suggests that government researchers are now more likely to be consulted earlier in the process than hitherto, especially when there is a clear decision to review and develop policy in a specific area. However, practice varies across departments and policy areas (Bullock et al 2001) and it is not clear that early involvement leads to sustained interaction.

Another response has been to use location changes to institutionalise the desired interactions between in-house researchers and policy makers. There are examples of where this has been successful (see Box 8). However, a review of such arrangements in one policy area found that, although co-location can facilitate better interactions, it does not seem to be a necessary condition for achieving a good working relationship and is certainly not a panacea (Nutley et al 2002a).

Box 8: The use of repeat victimisation research in policy and practice

A 10-year programme highlights how the significance of evidence on repeat victimisation was fed directly into criminal justice policy and practice through the co-location of research and policy teams within the Home Office (a UK central government department). Researchers were able to develop close working relationships with policy makers and had direct access to high-level ministers, rather than operating at arm's length through policy brokers. A rolling programme of research work was developed and researchers established an innovative 'open door' approach to bids for research from policy units. Researchers were also given responsibility for policy implementation around repeat victimisation and worked closely with police officers on the ground to ensure new practice was evidence-based. Ultimately, repeat victimisation was adopted as one of the police performance indicators for the prevention of crime, and the original research was seen to have had significant impact.

Source: Laycock 2000, 2001

A further institutional response has centred on the issue of whether research use is facilitated when the makers of policy are specialised experts in the substance of the policy domain (Weiss 1999). While civil servants in departmental policy divisions in the UK continue to be generalists, some specialist, quasi-policy bodies have been established: an example, from the drug misuse field, is the National Treatment Agency. Such bodies seem to be helpful in bringing research evidence to bear on the policy process, particularly in relation to policy implementation (Nutley et al 2002a).

The final institutional response to be considered here focuses on ways of increasing the interaction between policy makers and external researchers. A potentially important mechanism is the use of secondments to encourage the exchange of staff between government departments and universities. Although there is a long-standing UK government secondment programme, this has traditionally fostered exchanges between government and business (Gosling and Nutley 1990).
The institutional environment of higher education in the UK (particularly the format of the Research Assessment Exercises) appears to work against university-government exchanges. The result is a lower prevalence of such exchanges than is found in some other countries (Wilensky 1997). In the absence of formal exchange arrangements, an alternative might be to work with external researchers in a different way: casting them more as partners than as contractors. While there are some examples of 'partnering' relationships, the client-contractor relationship is still the institutional norm.

Conclusions

It is time to return to the bridge building analogy. The outline survey of the lie of the land is complete and some of the mechanisms for spanning the gap between the research and policy fields have been described. What conclusions can be drawn? The implications and conclusions from this overview are grouped under four main themes.

First, bridging mechanisms need to be based on a realistic assessment of the 'landfall' on either side: the research and policy fields. It would be foolhardy to build on the assumption that research can provide definitive answers to policy questions and that policy processes can and should be based on a rational model of decision making. However, it would be equally remiss to assume that there is no basis for bridging the policy/ research divide; neither definitive evidence nor rational decision making is an essential requirement for this task.

Second, while it is important to recognise some of the fundamental limitations on what research can and cannot tell us, the state of the research evidence base in most policy areas can be improved in at least four ways:

• Research priority setting exercises play an important role in identifying and plugging important gaps in research knowledge. However, these need to ensure that there is still a place for curiosity-driven, 'blue skies' research, as new insights and innovations often depend upon this.
• Research and development strategies also need to address research capacity building. Recent increases in the funding of social research in the UK have exposed shortages of suitably qualified researchers.
• The development of broad agreement about what constitutes robust evidence, in what context, for addressing different types of policy/practice questions would be helpful. This will involve being more explicit about the role of research vis-à-vis other sources of information, as well as greater clarity about the relative strengths and weaknesses of different methodological stances. Such development needs to emphasise methodological pluralism, seeking complementary contributions from different research designs, rather than epistemological competition.
• Systematic reviews have the potential to increase access to robust bodies of knowledge, but to capitalise on this potential there needs to be further methodological development in this area and appropriate levels of funding for review activity.

Third, there may be some benefits from initiatives which seek to introduce more instrumental rationality into the policy making process, but there is even more to be gained from opening up policy making processes: enabling participation by a wide range of stakeholders and citizens. Policy making is an inherently political and often messy process where research gets used in a variety of ways, including the use of research as ammunition in an adversarial system of policy making.
This is not a bad thing, particularly if useful knowledge (including research knowledge) is distributed more widely among members of policy and practice communities than is presently the case. An "active" or "self-guiding" society (Etzioni 1968, 1993; Lindblom 1990) offers an inclusive vision of what evidence-informed policy making might be like.

Fourth, the conclusions thus far indicate that grand policy/ research bridge designs seem unlikely to be the best way forward. Multiple, demountable footbridges seem preferable to a few uni-directional motorways. Research (and researchers) needs to travel in many directions, and research often has greatest impact when delivered personally. If more permanent bridges are deemed necessary for specific policy areas, because of their centrality within the overall social policy agenda, then it may be helpful to think in terms of those suspension bridges which rely on a central, intermediate pillar to support a wider bridging structure. This could be the role of intermediary bodies, such as the Social Care Institute for Excellence and the National Treatment Agency, referred to above. However, bridges per se may not be the most appropriate analogy. They assume an ongoing gap or obstruction that needs to be spanned. An alternative is to think about how the research and policy (and practice) fields can be brought closer together so that they naturally come into contact with one another at key points. This is the aim of the various partnership approaches to improving research utilisation.

Overall, the key theme that emerges from this overview is that simple models of the policy/ research relationship – where evidence is created by research experts and drawn on as necessary by policy makers – fail as either accurate descriptions or effective prescriptions. The relationships between research, knowledge, policy and practice are always likely to remain loose, shifting and contingent. Initiatives to improve the linkages between policy and research need to be designed with this in mind.

References

Blunkett D (2000) Influence or Irrelevance: Can Social Science Improve Government? Secretary of State's ESRC Lecture Speech, 2 February, London: Department for Education and Employment

Boaz A, Ashby D and Young K (2002) Systematic reviews: what have they got to offer evidence based policy and practice?, Working Paper 2, London: ESRC UK Centre for Evidence Based Policy and Practice, Queen Mary, University of London (www.evidencenetwork.org)

Bogenschneider K, Olson J, Linney K and Mills J (2000) Connecting research and policymaking: implications for theory and practice from the Family Impact Seminars, Family Relations 49(3): 327-339

Bullock H, Mountford J and Stanley R (2001) Better Policy-Making, London: Cabinet Office, Centre for Management and Policy Studies (http://www.cmps.gov.uk)

Cabinet Office (1999) Modernising Government, White Paper Cm 4310, London: The Stationery Office

Chalmers I and Altman DG (1995) Systematic Reviews, London: BMJ Publishing Group

Clarence E (2002) Technocracy reinvented: the new evidence based policy movement, Public Policy and Administration 17(3): 1-11

Cousins JB and Simon M (1996) The nature and impact of policy-induced partnerships between research and practice communities, Educational Evaluation and Policy Analysis 18(3): 199-218

Davies HTO, Nutley SM and Smith PC (eds) (1999) What Works? The role of evidence in public sector policy and practice, Public Money and Management, Theme issue 19(1)
Davies HTO, Nutley SM and Smith PC (eds) (2000) What Works? Evidence-Based Policy and Practice in Public Services, Bristol: The Policy Press

Eisenstadt N (2000) Sure Start: research into practice; practice into research, Public Money and Management 20(4): 6-8

Etzioni A (1968) The Active Society: A Theory of Societal and Political Processes, New York: Free Press

Etzioni A (1993) The Spirit of Community: Rights, Responsibilities and the Communitarian Agenda, New York: Crown Publishers

Finch J (1986) Research and Policy: The Use of Qualitative Methods in Social and Educational Research, London: The Falmer Press

Fitz-Gibbon C (2002) Evidence-based policy and management in practice: education in England 1980s to 2002, Public Policy and Administration 17(3): 95-105

Gosling R and Nutley SM (1990) Bridging the Gap: Secondments between government and business, London: Royal Institute of Public Administration

Gough D (2001) Personal communication, cited in Boaz, Ashby and Young (2002), above

Government Social Research (GSR) (2002) Annual Report 2001-02, London: Government Social Research Heads of Profession Group

Hillage J, Pearson R, Anderson A and Tamkin P (1998) Excellence in Research in Schools, London: Department for Education and Employment

Huberman M (1993) Linking the practitioner and researcher communities for school improvement, School Effectiveness and School Improvement 4(1): 1-16

Huberman M (1987) Steps toward an integrated model of research utilization, Knowledge (June): 586-611

JRF (2000) Linking Research and Practice, Findings (September), York: Joseph Rowntree Foundation (http://www.jrf.org.uk)

Kunz R and Oxman AD (1998) The unpredictability paradox: review of empirical comparisons of randomised and non-randomised clinical trials, BMJ 317(7167): 1185-1190

Laycock G (2000) From central research to local practice: identifying and addressing repeat victimisation, Public Money and Management 20(4): 17-22

Laycock G (2001) Hypothesis-based research: the repeat victimisation story, Criminal Justice 1(1): 59-82

Lindblom CE (1990) Inquiry and Change: The Troubled Attempt to Understand and Shape Society, New Haven, CT: Yale University Press

Moher D, Pham B et al (1998) Does quality of reports of randomised trials affect estimates of intervention efficacy reported in meta-analyses?, Lancet 352(9128): 609-613

Nutley SM, Bland N and Walter I (2002a) The institutional arrangements for connecting evidence and policy: the case of drug misuse, Public Policy and Administration 17(3): 76-94

Nutley SM, Davies HTO and Tilley N (2000) Getting research into practice, Public Money and Management, Theme issue 20(4)

Nutley SM, Davies HTO and Walter I (2002b) Evidence Based Policy and Practice: Cross Sector Lessons from the UK, St Andrews: Research Unit for Research Utilisation, University of St Andrews (www.st-and.ac.uk/~ruru/publications.htm)
Nutley SM, Davies HTO and Walter I (2003) From knowing to doing: a framework for understanding the evidence-into-practice agenda, Evaluation, forthcoming

Parsons W (2002) From muddling through to muddling up – evidence based policy making and the modernisation of British government, Public Policy and Administration 17(3): 43-60

Pawson R and Tilley N (1997) Realistic Evaluation, London: Sage

Pawson R (2001a) Evidence-based policy: in search of a method, Evaluation 8(2): 157-181

Pawson R (2001b) Evidence-based policy: the promise of 'realist synthesis', Evaluation 8(3): 340-358

Performance and Innovation Unit (PIU) (2000) Adding It Up: Improving Analysis and Modelling in Central Government, London: Cabinet Office

Quinn M (2002) Evidence based or people based policy making? A view from Wales, Public Policy and Administration 17(3): 29-42

Rogers EM (1995) Diffusion of Innovations, New York: Free Press

Sabatier PA and Jenkins-Smith HC (eds) (1993) Policy Change and Learning: An Advocacy Coalition Approach, Boulder, CO: Westview Press

Sanderson I (2002) Making sense of 'what works': evidence based policy making as instrumental rationality, Public Policy and Administration 17(3): 61-75

Schulz KF, Chalmers I et al (1995) Empirical evidence of bias: dimensions of methodological quality associated with estimates of treatment effects in controlled trials, Journal of the American Medical Association 273: 408-412

Sebba J (2000) Education: using research evidence to reshape practice, Public Money and Management 20(4): 8-10

Strategic Policy Making Team (SPMT) (1999) Professional Policy Making for the Twenty First Century, London: Cabinet Office

Walker D (2000) You find the evidence, we'll pick the policy, The Guardian, 15 February

Walter I, Nutley SM and Davies HTO (2003) Research Impact: A Cross Sector Review, St Andrews: Research Unit for Research Utilisation, University of St Andrews (http://www.st-andrews.ac.uk/~ruru/publications.htm)

Weiss CH (1987) The circuitry of enlightenment, Knowledge: Creation, Diffusion, Utilization 8(2): 274-281

Weiss CH (1995) Nothing as practical as a good theory: exploring theory-based evaluation for comprehensive community initiatives for children and families, in Connell JP, Kubisch AC, Schorr LB and Weiss CH (eds) New Approaches to Evaluating Community Initiatives, Volume 1: Concepts, Methods and Contexts, Washington DC: Aspen Institute

Weiss CH (1999) The interface between evaluation and public policy, Evaluation 5(4): 468-486

Weiss CH (2000) Which links in which theories shall we evaluate? in Rogers PJ, Hacsi TA, Petrosino A and Huebner TA (eds) Program Theory in Evaluation: Challenges and Opportunities, New Directions for Evaluation 87, pp 35-45

Weiss CH (1998) Have we learned anything new about the use of evaluation?, American Journal of Evaluation 19(1): 21-33

Weiss CH (1979) The many meanings of research utilisation, Public Administration Review 39(5): 426-431
Weiss CH (1980) Knowledge creep and decision accretion, Knowledge: Creation, Diffusion, Utilization 1(3): 381-404

Wilensky HL (1997) Social science and the public agenda: reflections on the relation of knowledge to policy in the United States and abroad, Journal of Health Politics, Policy and Law 22(5): 1241-1265

Wyatt A (2002) Evidence based policy making: the view from a centre, Public Policy and Administration 17(3): 12-28

Young K, Ashby D, Boaz A and Grayson L (2002) Social science and the evidence-based policy movement, Social Policy and Society 1(3): 215-224

Annex: Examples of some evidence-based policy initiatives in the UK

Two generic initiatives to enhance the use of evidence in policy making

The Centre for Management and Policy Studies (part of the Cabinet Office) has been given the task of promoting practical strategies for evidence-based policy making, which it is taking forward through:
• the development of 'knowledge pools' to promote effective sharing of information
• training officials in how to interpret, use and apply evidence
• a policy hub website providing access to knowledge pools, training programmes and government departments' research programmes
• implementing the findings of Adding It Up with the Treasury, such as placements to bring academics into Whitehall to carry out research projects.

For more information visit http://www.cmps.gov.uk

The ESRC UK Centre for Evidence Based Policy and Practice is an initiative funded by the Economic and Social Research Council. The Centre, together with an associated network of university centres of excellence, is intended to foster the exchange of social science research between policy makers, researchers and practitioners. It aims to:
• improve the accessibility, quality and usefulness of research
• develop methods of appraising and summarising research relevant to policy and practice
• inform and advise those in policy making roles, through its dissemination function.

For more information visit http://www.evidencenetwork.org

An overview of evidence based policy and practice initiatives in four sectors

Criminal justice

The criminal justice field is mainly characterised by systematic/ top-down approaches to getting evidence into practice. For example, the 'What Works' initiative within the Probation Service of England and Wales has taken lessons from small-scale programmes and wider research and applied them to redesign the whole system of offender supervision (see HMIP 1998; Furniss and Nutley 2000). Similarly, in 1999 the Home Office launched the Crime Reduction Programme, which represents a major investment by the government in an evidence-based strategy to pilot new ways of tackling crime. The overall objectives of the programme are to:
• achieve a long term and sustained reduction in crime through implementing 'what works' and promoting innovation into mainstream practice
• generate a significant improvement in the crime reduction knowledge base
• deliver real savings through the reduction of crime and improved programme efficiency and effectiveness.

Further information is available from http://www.crimereduction.gov.uk/crimered.htm. There is also a forthcoming review of the programme (Homel et al 2003, forthcoming).

Education

Since the late 1990s a somewhat bewildering array of dispersed initiatives has been launched to improve the supply, accessibility and uptake of research evidence in education.
These include initiatives to:
• develop a research and development strategy – the National Educational Research Forum
• increase the evidence base for education, such as the ESRC's Teaching and Learning Research Programme (http://www.tlrp.org)
• systematically review the existing evidence base – the Evidence for Policy and Practice Information (EPPI) Centre (http://eppi.ioe.ac.uk), which has completed the first wave of reviews (in teaching English, leadership, inclusion, gender, further education and assessment)
• encourage teacher use of research evidence, such as the initiatives sponsored by the Teacher Training Agency (http://www.canteach.gov.uk).

Health care

The arrival of the first NHS research and development strategy in 1991 (Peckham 1991) represented a major shift in the approach to research in healthcare. It aimed to ensure that research funding was directed to areas of agreed need and was focused on robust research designs. Subsequent initiatives have sought to:
• systematically review the existing evidence base – such as the Cochrane Collaboration (http://www.cochrane.org) and the NHS Centre for Reviews and Dissemination (http://www.york.ac.uk/inst/crd)
• collate and disseminate evidence on effectiveness – for example the clinical and practice guidelines promulgated by the Royal Colleges (http://sign.ac.uk)
• provide robust and reliable (prescriptive) guidance on current best practice – such as the government sponsored National Institute for Clinical Excellence (NICE – http://www.nice.org.uk), which reviews evidence on the effectiveness and cost-effectiveness of health technologies
• establish review structures to ensure that practice is informed by evidence – such as clinical audit and clinical governance activities (http://www.doh.gov.uk/clinicalgovernance)
• change individual clinician behaviour via a new approach to clinical problem solving – this has been the specific aim of the Evidence Based Medicine movement.

Social care

Until recently, initiatives in social care have been somewhat fragmented and localised. These include:
• the Centre for Evidence-based Social Services – based at the University of Exeter and working with a group of local authorities in the South West of England (http://www.ex.ac.uk/cebss)
• Research in Practice – an initiative to disseminate childcare research to childcare practitioners and to enable them to use it (http://www.rip.org.uk)
• Barnardo's and 'What Works' – Barnardo's is the UK's largest childcare charity. It has produced a series of overviews of evidence on effective interventions relevant to children's lives and has sought to ensure that its own practice is evidence-based (see http://www.barnardos.org.uk).

More recently, a government sponsored Social Care Institute for Excellence (SCIE) has been established. It aims to improve the quality and consistency of social care practice and provision by creating, disseminating and working to implement best practice guidance in social care through a partnership approach. It forms part of the government's Quality Strategy for Social Care but has operational independence (see http://www.scie.org.uk).