Clinical Psychology and Psychotherapy, 10, 319–327 (2003). Copyright 2003 John Wiley & Sons, Ltd. Published online in Wiley InterScience (www.interscience.wiley.com). DOI: 10.1002/cpp.379

Bridging Evidence-Based Practice and Practice-Based Evidence: Developing a Rigorous and Relevant Knowledge for the Psychological Therapies

Michael Barkham* and John Mellor-Clark
Psychological Therapies Research Centre, University of Leeds, Leeds, UK

Four key areas of research work are identified: efficacy, effectiveness, practice, and service systems. These research areas are placed within the paradigms of evidence-based practice and practice-based evidence. This article provides an introduction to these two paradigms and these four research areas, together with examples of current work. From this basis, we argue for a knowledge base for the psychological therapies in which each area has a place within an overall research model and in which the interdependence of the areas is acknowledged. A cyclical model exemplifying the complementary relationship between evidence-based practice and practice-based evidence is presented as a means of furthering the delivery of a rigorous but relevant knowledge base for the psychological therapies. Copyright 2003 John Wiley & Sons, Ltd.

*Correspondence to: Professor M. Barkham, Psychological Therapies Research Centre, 17 Blenheim Terrace, University of Leeds, Leeds LS2 9JT, UK. E-mail: m.barkham@leeds.ac.uk

BACKGROUND

There is considerable interest from policy makers and practitioners alike in extending the potential of research to inform clinical practice. Historically, there has always been a drive to 'bridge' the gap between practice and research. This was famously encapsulated in the 'scientist-practitioner' model arising out of the 1949 Boulder Conference, which called for clinical training to comprise both scientific and practitioner components. Although the model has not always been pre-eminent, it remains highly pertinent to current clinical practice within the psychological therapies (see Shapiro, 2002). Indeed, the advent of the evidence-based practice movement in the 1980s, and the adoption of this paradigm as a driver within US and UK NHS policy documents, have ensured that the key component of 'science' is a genuine force in delivering rigorous research within the psychological therapies. At the same time, however, such a drive continues to fuel concerns about the relevance of this research to practitioners in routine clinical settings, where the drive towards enhancing treatment quality takes a quite different form, namely practice-based evidence (Barkham & Mellor-Clark, 2000; Margison et al., 2000). The tension between these two paradigms has the potential to fracture the overall research effort or to set up models of research that are in competition; the pull between efficacy and effectiveness has been likened to navigating between Scylla and Charybdis (Nathan, Stuart, & Dolan, 2000). Set against this context, the aim of this paper is to argue that no single research paradigm can deliver all the requirements of rigorous and relevant research and, further, that practitioners and researchers need to value multiple paradigms which, together, can provide a more robust knowledge base for the psychological therapies.
STRATEGIC OVERVIEW

A notable attempt to set out the range of research paradigms applicable to mental health services, and so to influence the relevant policy space, was delivered in Bridging Science and Service, a report by the US National Advisory Mental Health Council's (NAMHC) Clinical Treatment and Services Research Workgroup (1999). The report, written under the auspices of both the National Institutes of Health and the National Institute of Mental Health, set out a clear vision of the role and kinds of research paradigms that would be most likely to deliver a relevant evidence base for mental health services. Although some specifics of the US managed-care system differ significantly from those of the UK, this does not lessen the relevance of the need for a strategic shift in the focus and orientation of research.

The NAMHC report set out four key domains of research activity: efficacy, effectiveness, practice, and service systems. The primary aims of each activity are as follows:

* Efficacy research aims to examine whether a particular intervention has a specific, measurable effect, and also to address questions concerning safety, feasibility, side-effects and appropriate dose levels.
* Effectiveness research aims to identify whether efficacious treatments can have a measurable, beneficial effect when implemented across broad populations and in other service settings.
* Practice research examines how and which treatments or services are provided to individuals within service systems, and evaluates how to improve treatment or service delivery. The aim is not so much to isolate or generalize the effect of an intervention as to examine variations in care and ways to disseminate and implement research-based treatments.
* Service systems research addresses large-scale organizational, financing, and policy questions. This includes the cost of various care options to an entire system; the use of incentives to promote optimal access to care; the effect of legislation, regulation and other public policies on the organization and delivery of services; and the effect that changes in a system (e.g. cost-shifting) have on the delivery of services.

From a conceptual viewpoint, we see efficacy research as underpinning the evidence-based paradigm, while both effectiveness and practice research are components of practice-based evidence. Service systems research extends into the area of policy. These four types of research scope the domains of activity needed to provide a more comprehensive approach to the accumulation of evidence. As such, they represent a huge agenda. The present paper and those that follow address issues relating to efficacy, effectiveness, practice and service systems research. However, we acknowledge that some areas (e.g. efficacy) are considerably more advanced than others (e.g. service systems research).

EVIDENCE-BASED PRACTICE PARADIGM: EFFICACY RESEARCH

The foundation of the evidence-based practice paradigm rests on efficacy research, which in turn sits within a natural-science paradigm and has been termed 'professional activity as applied science' (Peterson, 1991). The epitome of the efficacy trial lies in the various components of the randomized controlled trial (RCT): randomization, manualized treatment, a control condition, and specific inclusion and exclusion criteria.
An exemplar of a rigorous efficacy trial within the psychological therapies is the National Institute of Mental Health's Treatment of Depression Collaborative Research Programme (Elkin, 1994). The raison d'être for this paradigm is to protect the internal validity of the particular study in order to draw causal inferences about the effects of, for example, a specific treatment for a specific presenting problem. However, a central issue is whether the evidence from efficacy trials, and the evidence-based paradigm itself, is sufficient in and of itself to underpin policy and practice in routine clinical settings. Hence, to borrow Carl Rogers' (1957) famous phrase, the question is whether the evidence-based practice paradigm is a necessary and sufficient condition to support routine practice settings. Bower (2003) argues that while it is indeed necessary, it is not a sufficient condition for delivering an evidence base to practice settings.

The challenge to increase the appropriateness of the evidence-based practice paradigm has, however, delivered improved designs within this paradigm. Notable among these is the use of patient preference designs. Ward et al. (2000) used such a design in a high-quality study comparing cognitive-behavioural therapy, non-directive counselling and standard GP care. The procedure in such a design enables researchers to take account of patients who state a preference for one treatment, and who receive that treatment, as well as those patients who have no preference and who can then be randomly assigned to a treatment condition. Such a design enables the trial to better mimic practice by accommodating the preferences patients may hold when offered a choice of bona fide treatments (a condition approaching best practice in routine settings), while also providing better science in that those patients who are randomized to a condition start from a position of no stated preference (a minimal sketch of this allocation logic appears below). But such a design comes at considerable cost in terms of the required number of participants and the associated expense of the study. More challenging still is the emerging finding that there appears to be little difference between the effects achieved in preference and non-preference arms. This points towards the need to develop clinical studies of the psychological therapies that are both more rigorous and more relevant; indeed, this has been the call of a number of commentators (e.g. Shadish, 2002; Shapiro, 2002).

The evidence drawn from such RCTs and from meta-analytic studies provides the basis for clinical treatment guidelines such as Treatment Choice in Psychological Therapies and Counselling: Evidence Based Clinical Practice Guidelines (Department of Health, 2001). This guideline was informed by a comprehensive review of the literature from 1990 to 1998 (Mackay & Barkham, 1998) as well as by consensus views from panels of experts. However, the procedures employed within the evidence-based practice paradigm are not without limitations. For example, a quality analysis of the literature base for the above review showed that only 11% of reviews met a minimum quality standard (Mackay, Barkham, Rees & Stiles, 2003). Notwithstanding such caveats, the premise that the practice of the psychological therapies should, in general, be informed by the best available evidence is well documented (e.g. Parry, 2000).
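To make the allocation logic of a partially randomized patient-preference design concrete, the following minimal Python sketch assigns patients who state a preference to that arm and randomizes the remainder. It is an illustrative sketch only: the function and field names and arm labels are our own assumptions, and it does not reproduce the specific protocol of Ward et al. (2000).

```python
import random

ARMS = ("CBT", "counselling", "GP care")   # hypothetical arm labels

def allocate(patients, arms=ARMS, seed=42):
    """Allocate patients under a partially randomized patient-preference design.

    Patients who state a preference receive that treatment; those with no
    stated preference are randomized across the arms.
    """
    rng = random.Random(seed)  # fixed seed so the example is reproducible
    allocations = []
    for patient in patients:
        preference = patient.get("preference")
        if preference in arms:
            allocations.append((patient["id"], preference, "preference arm"))
        else:
            allocations.append((patient["id"], rng.choice(arms), "randomized arm"))
    return allocations

patients = [
    {"id": 1, "preference": "counselling"},
    {"id": 2},                                # no preference -> randomized
    {"id": 3, "preference": "CBT"},
]
for pid, arm, route in allocate(patients):
    print(pid, arm, route)
```

The design point is visible in the output: only patients entering via the "randomized arm" route contribute to the strictly randomized comparison, since they started from a position of no stated preference.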
Not least among the arguments for evidence-based practice is the need to ensure appropriate accountability for the use of taxpayers' money. Parry, Cape and Pilling (2003) provide an account of the clinical guideline agenda and its yield for clinical psychology and psychotherapy, but draw attention to the need for this approach to be supplemented by other clinical support methods and by procedures for monitoring what is actually done in practice.

PRACTICE-BASED EVIDENCE PARADIGM: EFFECTIVENESS AND PRACTICE RESEARCH

It has long been recognized that practitioners in routine practice do not follow such elaborate procedures, and there has been a long tradition of interest amongst researchers in the results of studies derived from routine service settings (e.g. Newman & Tejeda, 1996). Although efficacy trials have mined the traditional horse-race comparison between one treatment modality and another, their ability to address duration (that is, dosage) has been restricted by the near-universal adoption of designs of fixed duration. Hence, while evidence exists as to whether psychotherapy of a particular dose is efficacious, there is considerably less information about how much psychotherapy is sufficient, and it is this question that is central to service delivery. However, there is a range of methodological and statistical issues involved in determining dosage, and these are a function of whether the design derives from an efficacy or an effectiveness base. Feaster, Newman and Rice (2003) tackle a number of the central issues from the premise that both paradigms are needed in order to provide a fuller understanding of the effects of dose–response (a minimal illustrative model of this question is sketched below).

Studies derived from a practice-based paradigm have high external validity because they sample therapy as it is in routine practice. Hence, there is little, if any, inferential distance when generalizing to other populations (Fishman, 2002). However, the potential for confounds in explaining why a particular result occurs drastically reduces the internal validity of such studies. Two key components are central to the practice-based paradigm: effectiveness and practice (NAMHC, 1999). The effectiveness component addresses the generalizability of results across particular services and settings; that is, the ability to locate the activity of an individual service in the context of other services is central. The practice component addresses the analysis of results within a service or setting; that is, the ability to drill down into the data to ascertain individual differences and variations in relation to patient subgroups.

The degree of robustness for both these components rests to a large degree on securing sample sizes of a considerably higher order than those achieved in efficacy research. As such, a different infrastructure is required, built around practice research networks (PRNs). A PRN is defined, somewhat tautologically, as a 'network of clinicians that collaborate to conduct research to inform their day-to-day practice' (Audin et al., 2001). In contrast to most 'formal' research, PRNs utilize data gathered in 'real world' practice settings rather than in specifically orchestrated clinical trials, and large, clinically representative datasets can be developed. The PRN is typically linked with one or more academic centres, which help to keep the group appraised of recent developments in the literature and disseminate recent systematic reviews.
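As a concrete, hypothetical illustration of the dose–response question, the following Python sketch fits a simple probit model of the probability of improvement as a function of log(number of sessions), in the spirit of classic dose-effect analyses. All data values are invented and the model form is an assumption made for illustration; it is not the approach taken by Feaster et al. (2003).

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical routine-practice data: sessions attended and whether each
# client met a reliable-improvement criterion (all values invented).
sessions = np.array([1, 2, 3, 4, 5, 6, 8, 8, 10, 12, 12, 16, 16, 20, 20, 24])
improved = np.array([0, 0, 0, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1])

# Probit dose-effect model: probability of improvement as a function of
# log(dose).
X = sm.add_constant(np.log(sessions))
result = sm.Probit(improved, X).fit(disp=False)

# Dose at which the model predicts a 50% chance of improvement
# (the session count where const + slope * log(sessions) = 0).
ed50 = np.exp(-result.params[0] / result.params[1])
print(f"Estimated sessions for a 50% chance of improvement: {ed50:.1f}")
```

The substantive point survives the toy data: such a model answers "how much therapy is sufficient for a given probability of benefit", a question fixed-duration efficacy trials are not designed to address.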
A consequence of the PRN infrastructure is that it binds together the activities of research and practice, and aims to deliver clinically meaningful and scientifically rigorous effectiveness research (e.g. Borkovec, Echemendia, Ragusea, & Ruiz, 2001).

Large datasets derived from practice research networks can also be used to address one key issue that has often been raised in relation to RCTs in the psychological therapies: the unit of analysis. Power has primarily been calculated on the basis of clients, thereby assuming that all clients are statistically independent. Many researchers have long argued for the need to consider therapists as the unit of analysis. However, in focusing on therapists, the research endeavour can become threatening in terms of confidence and professional standing. It is notable that while traditional research paradigms have employed the client as the unit of analysis, and the evidence-based culture focuses on empirically supported treatments, there is a scarcity of established research utilizing the therapist as the unit of analysis. Okiishi, Lambert, Nielsen and Ogles (2003) address this issue and encourage researchers and clinicians to enhance patient outcomes through studies that examine the treatment response of clients as a function of the therapist (the first sketch at the end of this section illustrates such an analysis). Indeed, they not only support a move towards empirically supported psychotherapy practice, as opposed to treatments, but argue for a move towards empirically supported therapists.

The ability to compare levels of services both across and within a setting combines the two components of effectiveness and practice research. Such work is exemplified by Evans, Connell, Barkham, Marshall and Mellor-Clark (2003), who show how a single service can both benchmark a range of service descriptors and outcomes against national comparisons and drill down into its own data to investigate one component of its service, in this instance that relating to ethnic minority clients.

A key role of practice-based evidence is the improvement of practice. An example of practice-based evidence methods being used in the service of improvement is the recent focus on the provision of outcomes feedback (e.g. Lambert et al., 2001). This research utilizes the method of 'signals' or 'flags', which enables practitioners to focus on clinically salient issues. Signals can operate at an overall level, whereby an individual client's trajectory crosses specific upper or lower thresholds, or at the level of specific items reaching a criterion (e.g. relating to the presence of risk); the second sketch at the end of this section illustrates this logic. However, purely numerical case monitoring is unlikely to be sufficient on its own: even the strongest advocates of case monitoring would only claim that these methods are adjuncts to clinical methods of supervision and case review.

The practice-based paradigm is likely to be most effective where a whole service adopts it as a driver for service planning and delivery. Lucock and colleagues (2003) present a highly developed, though ongoing, account of the influence of practice-based evidence in the psychological therapies and highlight its ability to enhance therapists' reflection on their practice in a systematic and non-threatening manner. Key to this position is that research, traditionally viewed as a separate or irrelevant activity, becomes a central activity of practitioners, who have a sense of ownership of the research, which in turn informs all levels of the service.
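The first sketch below illustrates the unit-of-analysis issue: because clients are nested within therapists, a random-intercept (multilevel) model can estimate how much outcome variance is attributable to the therapist. The data are simulated and all variable names are our own assumptions; this illustrates the general technique rather than the specific analysis reported by Okiishi et al. (2003).

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Simulated routine-outcome data: 20 therapists, 30 clients each, with a
# built-in therapist effect (all values invented for illustration only).
n_therapists, n_clients = 20, 30
therapist = np.repeat(np.arange(n_therapists), n_clients)
therapist_effect = rng.normal(0, 2, n_therapists)[therapist]
pre = rng.normal(20, 5, n_therapists * n_clients)     # intake severity
post = 10 + 0.5 * pre + therapist_effect + rng.normal(0, 4, pre.size)
data = pd.DataFrame({"therapist": therapist, "pre": pre, "post": post})

# Random-intercept model: clients (level 1) nested within therapists (level 2).
result = smf.mixedlm("post ~ pre", data, groups=data["therapist"]).fit()

# Intraclass correlation: share of residual outcome variance attributable
# to which therapist a client happened to see.
var_between = result.cov_re.iloc[0, 0]   # therapist-level variance
icc = var_between / (var_between + result.scale)
print(result.summary())
print(f"Therapist ICC: {icc:.2f}")
```

An intraclass correlation appreciably above zero would indicate that outcomes vary systematically by therapist, the phenomenon underpinning the case for empirically supported therapists.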
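The second sketch illustrates the 'signals' logic of outcomes feedback: a client's session-by-session scores are compared against an expected recovery trajectory, with deviations beyond a tolerance flagged, and criterion-level scores on a risk item flagged separately. All thresholds, names and data here are hypothetical; this does not reproduce the specific algorithms used by Lambert et al. (2001).

```python
def flag_case(observed, expected, tolerance=5.0, risk_items=None, risk_threshold=2):
    """Return clinical 'signals' for one client's session-by-session scores.

    Flags a session when the observed score exceeds the expected recovery
    trajectory by more than `tolerance` (higher score = more distress), and
    flags any session where a specific risk item reaches a criterion level.
    """
    flags = []
    for session, (obs, exp) in enumerate(zip(observed, expected), start=1):
        if obs > exp + tolerance:
            flags.append((session, "off-track: outcome worse than expected range"))
    for session, score in enumerate(risk_items or [], start=1):
        if score >= risk_threshold:
            flags.append((session, "risk item at criterion level"))
    return sorted(flags)

# Example: expected scores fall session by session, but this client deteriorates.
expected = [24, 22, 20, 18, 16]
observed = [24, 25, 27, 26, 28]
risk = [0, 0, 1, 2, 2]
for session, message in flag_case(observed, expected, risk_items=risk):
    print(f"session {session}: {message}")
```

As the surrounding text stresses, such flags are prompts for clinical attention within supervision and case review, not a substitute for them.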
SERVICE SYSTEMS RESEARCH

At the level of service systems, data need to be extracted from individual settings and aggregated to inform practice and policy at national or state level. One central process component, building on the principle of collaboration via PRNs, is the adoption of a common policy of measurement. There is increasing momentum towards implementing national outcomes programmes in both the US and the UK, and these raise substantial issues concerning the paradigm upon which they are based. In various states in the US, there has been a move towards the development and adoption of broadly unitary outcome systems. Brower (2003) provides a case study in which she reflects on some of the issues raised by such an endeavour. Indeed, at this level, the interface is very much with policy-makers and advisors, whether in the US (e.g. Ohio; Brower, 2003) or in the UK (e.g. the Department of Health National Outcomes Programme). However, while Ohio adopted a broadly unitary system, it is important to note that this is not the only option. It is equally, if not more, plausible for practitioners to select from a limited pool of bona fide measures whose conceptual and empirical interrelationships are understood and defined. A key axiom of the practice-based evidence paradigm is that practitioners both have and exercise choice within their practice settings, akin to clinical decision-making. This is a fundamental difference between practice-based outcomes and evidence-based outcomes as delivered within an RCT, in which most of the major sources of variance (treatments, measures, dosage, etc.) are tightly controlled.

TOWARDS A RIGOROUS AND RELEVANT KNOWLEDGE BASE FOR THE PSYCHOLOGICAL THERAPIES

The danger of multiple paradigms is that they are seen as competing or mutually exclusive: one paradigm is 'right' and the other 'wrong'. A move away from such a dichotomous position would be to construe each paradigm as occupying a space along a continuum. However, one property of a continuum is its linearity, and this fuels the argument that a research question is first tested under one condition and then under the other, the usual direction being from efficacy to effectiveness. Within this framework, several models have been presented: for example, a developmental model (Linehan, 1999) and an 'hourglass' model (Salkovskis, 1995). Our view, however, is that more than a continuum, there is a need for a cyclical process, inherent in the action of the hourglass, which combines features of both paradigms. We have previously presented the paradigm of practice-based evidence as complementary to evidence-based practice (Barkham & Mellor-Clark, 2000). This complementarity generates an evidence cycle between the rigours of evidence-based practice and the relevance of practice-based evidence. More than being complementary, however, the two paradigms have great potential to feed into each other and so generate a model for the knowledge base of the psychological therapies that is both rigorous and relevant (see Barkham & Barker, 2003). We have further developed this cyclical model, as presented in Figure 1, to consider the practical products, yields and activities arising from each paradigm.
A key principle of this cyclical model is that each component is equally valued in the service of delivering best practice, and this, in turn, has important implications for the relationship between policy, practice and research. The traditional linear direction is of RCTs informing policy which, in turn, directs practice. The complement to this process is of practitioners developing and building an evidence base rooted in practice. This can then feed into and inform issues that can be shaped into more finely tuned tests of specific hypotheses through efficacy research. The yield of both these evidence bases can then better inform policy. Hence, in this cyclical model, policy per se is not the driver for practice. Rather, policy is a product of knowledge informed by a combined evidence base, and any specific policy will have a provenance grounded in both paradigms. Our argument here is that policy needs to derive from a discipline of applied academia that yields products which are both rigorous and relevant. The openness of researchers and practitioners to cycling through these differing paradigms may provide us all with a more robust knowledge base about the process and outcomes of psychological interventions.

[Figure 1. A cycle of rigorous and relevant research. The evidence-based practice model (method: rigorous efficacy studies, meta-analytic studies and randomized controlled trials; product: standards and guidelines for practitioners in routine settings; yield: services are led to deliver evidence-based interventions) and the practice-based evidence model (method: relevant effectiveness studies and practice research within services linked through practice research networks; product: common and specific data goals drawn from a pool of standardized and face-valid tools; yield: research is led to investigate issues important to the whole service system) are linked in a cycle: rigorous research delivers hypotheses relevant for naturalistic investigation through practice applications, and service systems generate questions relevant for rigorous research to assess their potential for practice application.]

CONCLUSION

We believe that the broad range of contributions to this special edition offers working examples of what we have argued to be a complementary cycle of evidence-based practice research and practice-based evidence activity. Accordingly, we have presented the cyclical fit between the papers and the four key NAMHC research activity domains introduced at the beginning of this article, summarized in Figure 2. Clearly, the structure and shape of the papers are suited to differing paradigms; hence, many of them do not fit the traditional framework so often associated with academic journals. If researchers and practitioners are to benefit from the dissemination arising from these differing but complementary activities, then abandoning the 'one size fits all' model is as appropriate to dissemination as it is to research design. By encapsulating the range of papers in this framework, we hope to illustrate how the strengths
of treatment efficacy research (exemplified by Bower, 2003) can yield the development of clinical practice guidelines (described by Parry et al., 2003) for naturalistic assessment in treatment effectiveness activity. In this domain, highly practical intra-service questions on such relevant issues as dose–response (explored by Feaster et al., 2003) naturally lead on to inter-service benchmarking studies that aim to inform local practice research. This domain increasingly generates research activity that explores highly relevant service delivery issues, such as the existence of 'supershrinks' (introduced by Okiishi et al., 2003) and the clinical profile of ethnic minority clients relative to a referential database (profiled by Evans et al., 2003). We would suggest that these and other service-relevant questions quickly evolve into larger-scale local practice research (as profiled by Lucock et al., 2003) and ultimately grow into whole service systems research (as presented by Brower, 2003), which has the potential to yield practice-based evidence questions requiring traditional evidence-based practice research methodologies.

[Figure 2. A conceptual schema to represent the research evidence and activity of the papers submitted to the Special Edition within the cyclical model of evidence-based practice and practice-based evidence. The papers are arranged around the cycle from treatment efficacy (1. Bower; 2. Parry et al.) through therapy effectiveness (3. Feaster et al.; 4. Okiishi et al.) to (local) practice research (5. Evans et al.; 6. Lucock et al.) and (whole) service systems research (7. Brower).]

With the ongoing evolution of such cyclical and symbiotic relationships between the various domains of research activity, we would hope that the selection of papers presented here provides evidence of a decreasing gap between practice and research in the fields of counselling and the psychological therapies.

REFERENCES

Audin, K., Mellor-Clark, J., Barkham, M., Margison, F., McGrath, G., Lewis, S., Cann, L., Duffy, J., & Parry, G. (2001). Practice Research Networks for effective psychological therapies. Journal of Mental Health, 10, 241–251.

Barkham, M., & Barker, C. (2003). Establishing practice-based evidence for counselling psychology. In R. Woolfe, W. Dryden, & S. Strawbridge (Eds), Handbook of counselling psychology (2nd ed.; pp. 93–117). London: Sage.

Barkham, M., & Mellor-Clark, J. (2000). Rigour and relevance: practice-based evidence in the psychological therapies. In N. Rowland, & S. Goss (Eds), Evidence-based counselling and psychological therapies (pp. 127–144). London: Routledge.

Borkovec, T.D., Echemendia, R.J., Ragusea, S.A., & Ruiz, M. (2001). The Pennsylvania Practice Research Network and future possibilities for clinically meaningful and scientifically rigorous psychotherapy effectiveness research. Clinical Psychology: Science and Practice, 8, 155–167.

Bower, P. (2003). Efficacy in evidence-based practice. Clinical Psychology and Psychotherapy, 10, 328–336.

Brower, L. (2003). The Ohio Mental Health Consumer Outcomes System: reflections on a major policy initiative in the US. Clinical Psychology and Psychotherapy, 10, 400–406.

Department of Health (2001). Treatment choice in psychological therapies and counselling: Evidence based clinical practice guideline. London: DOH.

Elkin, I. (1994). The NIMH Treatment of Depression Collaborative Research Study. In A.E. Bergin, & S.L.
Garfield (Eds), Handbook of psychotherapy and behavior change (4th ed.; pp. 114–139). New York: Wiley.

Evans, C., Connell, J., Barkham, M., Marshall, C., & Mellor-Clark, J. (2003). Practice-based evidence: benchmarking NHS primary care counselling services at national and local levels. Clinical Psychology and Psychotherapy, 10, 374–388.

Feaster, D.J., Newman, F.L., & Rice, C. (2003). Longitudinal analysis when the experimenter does not determine when treatment ends: what is dose–response? Clinical Psychology and Psychotherapy, 10, 352–360.

Fishman, D.B. (2002). Transcending the efficacy versus effectiveness research debate: Proposal for a new electronic journal of pragmatic case studies. Prevention & Treatment, 3, article 8. Available on the World Wide Web: http://journals.apa.org/prevention/volume3/pre0030008a.html

Lambert, M.J., Whipple, J.L., Smart, D.W., Vermeersch, D.A., Nielsen, S.L., & Hawkins, E.J. (2001). The effects of providing therapists with feedback on patient progress during psychotherapy: Are outcomes enhanced? Psychotherapy Research, 11, 49–68.

Linehan, M.M. (1999). Development, evaluation, and dissemination of effective psychosocial treatments: Levels of disorder, stages of care, and stages of treatment research. In M.D. Glantz, & C.R. Harte (Eds), Drug abuse: Origins & interventions (pp. 367–394). Washington, DC: American Psychological Association.

Lucock, M., Leach, C., Iveson, S., Lynch, K., Horsefield, C., & Hall, P. (2003). A systematic approach to practice-based evidence in a psychological therapies service. Clinical Psychology and Psychotherapy, 10, 389–399.

Mackay, H., & Barkham, M. (1998). Report to the National Counselling and Psychological Therapies Clinical Guidelines Development Group: Evidence from Cochrane reviews, and published reviews and meta-analyses, 1990–1998. PTRC Memo 369, University of Leeds.

Mackay, H., Barkham, M., Rees, A., & Stiles, W.B. (2003). Appraisal of published reviews in psychotherapy and counseling 1990–1998: Achieving a minimum quality standard. Journal of Consulting and Clinical Psychology (in press).

Margison, F., Barkham, M., Evans, C., McGrath, G., Mellor-Clark, J., Audin, K., & Connell, J. (2000). Measurement and psychotherapy: evidence-based practice and practice-based evidence. British Journal of Psychiatry, 177, 123–130.

Nathan, P.E., Stuart, S.P., & Dolan, S.L. (2000). Research on psychotherapy efficacy and effectiveness: Between Scylla and Charybdis? Psychological Bulletin, 126, 964–981.

National Advisory Mental Health Council, National Institute of Mental Health. (1999). Bridging science and service: A report by the National Advisory Mental Health Council's Clinical Treatment and Services Research Workgroup (NIH Publication No. 99-4353). Washington, DC.

Newman, F.L., & Tejeda, M.J. (1996). The need for research that is designed to support decisions in the delivery of mental health services. American Psychologist, 51, 1040–1049.

Okiishi, J., Lambert, M.J., Nielsen, S.L., & Ogles, B.M. (2003). Waiting for supershrink: An empirical analysis of therapist effects. Clinical Psychology and Psychotherapy, 10, 361–373.

Parry, G. (2000). Developing treatment choice guidelines in psychotherapy. Journal of Mental Health, 9, 273–281.

Parry, G.D., Cape, J., & Pilling, S. (2003). Clinical practice guidelines in clinical psychology and psychotherapy. Clinical Psychology and Psychotherapy, 10, 337–351.

Peterson, D.R. (1991).
Connection and disconnection of research and practice in the education of professional psychologists. American Psychologist, 46, 441–451.

Rogers, C.R. (1957). The necessary and sufficient conditions of therapeutic personality change. Journal of Consulting Psychology, 21, 95–103.

Salkovskis, P.M. (1995). Demonstrating specific effects in cognitive and behavioural therapy. In M. Aveline, & D.A. Shapiro (Eds), Research foundations for psychotherapy research (pp. 191–228). Chichester: Wiley.

Shadish, W.R. (2002). Revisiting field experimentation: Field notes for the future. Psychological Methods, 7, 3–18.

Shapiro, D.A. (2002). Renewing the scientist-practitioner model. The Psychologist, 15, 232–234.

Ward, E., King, M., Lloyd, M., Bower, P., Sibbald, B., Farrelly, S., Gabbay, M., Tarrier, N., & Addington-Hall, J. (2000). Randomised controlled trial of non-directive counselling, cognitive-behaviour therapy, and usual general practitioner care for patients with depression. I: Clinical effectiveness. British Medical Journal, 321, 1383–1388.