NEXT GENERATION
MONITORING, EVALUATION AND IMPACT ASSESSMENT GUIDE
REANA ROSSOUW

SECTION ONE: THE IMPORTANCE OF MEASUREMENT 11–16
WHY MEASUREMENT MATTERS 12
The context of measurement 12
Challenges of measurement 13
WHAT MATTERS IN MEASUREMENT 14

SECTION TWO: THE BASICS OF MEASUREMENT 17–27
KEY CONCEPTS OF MEASUREMENT 18
LOGIC FRAMEWORK MODELS 18
What it is 18
Why it is important 19
How it works 20
The purpose of a logframe or logic model 20
THEORY OF CHANGE 20
What it is 20
Why it is important 21
How it works 21
Applications of theories of change 22
Using a theory of change for evaluation 22
Using a theory of change for organisational strategy 23
Focusing on the goal 23
Showing the causal links 23
Revealing hidden assumptions 23
Basing the strategy on evidence 24
Using the views of stakeholders 24
Using a theory of change to think about your place in the sector or portfolio 24
Using a theory of change in programme management 24
• programme/project design
• review and/or quality audit of an existing initiative
• strategic learning design and knowledge generation
• evaluation
• monitoring
• scaling up and scaling out
The difference between theory of change and logical framework 26

SECTION THREE: THE FUNDAMENTALS OF MEASUREMENT 29–52
THE CORE CONCEPTS OF MEASUREMENT 30
Overview of types of measurement 30
MONITORING 32
What it is 32
Why it is important 32
How it works 33
Types of monitoring 34
EVALUATION 35
What it is 35
Designing and facilitating evaluation that matters 36
Complementary roles of M&E 37
Purpose of evaluation 37
Why it is important 38
How it works 38
Types of evaluation 40
Selecting the evaluation type 41
Evaluation type by outcome 42
IMPACT ASSESSMENT OR EVALUATION 44
What it is 44
Why it is important 46
How it works 46
Three P-terms to achieve impact 47
CONDUCTING IMPACT ASSESSMENTS 49
Conditions to support an impact assessment system 49
The impact value chain 51
CHALLENGES IN DEVELOPING IMPACT ASSESSMENT SYSTEMS 51

SECTION FOUR: INFORMATION AND DATA MANAGEMENT 53–70
INFORMATION MANAGEMENT 54
What it is 54
Why it is important 54
How it works 54
DATA SOURCES 55
What it is 55
Why it is important 55
How it works 55
DATA COLLECTION METHODS 55
What it is 55
Data collection tools and techniques 55
Advantages and disadvantages of data collection methods 58
Reducing data collection costs 59
DATA QUALITY 60
What it is 60
Why it is important 60
How it works 60
ANALYSING DATA 61
Interpreting results 61
Using the results 61
Contents of an evaluation report 62
INDICATORS 62
What it is 62
Why it is important 62
How it works 63
INDICATOR TRAPS 65
REPORTING ON IMPACT AND RETURN 66
What it is 66
Why it is important 66
Developing an impact report 66
What makes a good impact report? 68
How should you report impact? 68
What should you report? 68
What should be communicated about impact? 69
Reporting mechanisms 70

SECTION FIVE: RETURN ON INVESTMENT 71–82
RETURN FOR INVESTORS 72
Return on investment 72
What it is 72
How it works 73
The principles of ROI 73
Why it is important 74
Return on investment: Aspects of return 75
Calculating return on investment 80
How does it work? 80
SECTION SIX: SETTING UP THE MEASUREMENT SYSTEM 83–103
FROM THINKING TO DOING 84
Special note 84
The measurement process 85
Setting up an M&E system 86
The M&E process 88
Step 1: Identify the process and use of the information 88
Step 2: Clarify the programme design 90
Step 3: Clarify the impact 91
Step 4: Identify what information is required 91
Step 5: Plan data management 94
Step 6: Plan data analysis 95
Step 7: Plan reporting and utilisation 96
Step 8: Develop measurement (M&E) plans 99
Step 9: Guidelines for managing the measurement cycle 102
• social return on investment
• present vs future value

SECTION SEVEN: THE MEASUREMENT PROCESS 105–125
THE EVALUATION PROCESS 106
Planning the evaluation 107
Step 1: Identify stakeholders 107
Step 2: Understand programme objectives and intended outcomes 108
Step 3: Decide on evaluation objectives and key evaluation questions 108
Step 4: Choose appropriate evaluation methods 110
Step 5: Specify criteria for measuring success 110
Step 6: Consider who should manage and conduct the evaluation 111
Step 7: Conduct a risk assessment and develop mitigation strategies 111
Conducting the evaluation 112
Using the results 112
Practical considerations 112
• roles and responsibilities
• evaluation timeframes
• scale of the evaluation
PRACTICE NOTES FOR DEVELOPING A THEORY OF CHANGE AND LOGIC MODEL 117
EVALUATION STAGES 120

SECTION EIGHT: GLOSSARY OF TERMS 127–134

SECTION NINE: THE MEASUREMENT PROCESS 135–156
Tool 1: Developing strategy 136
Tool 2: Developing a theory of change 137
Tool 3: Developing a logic model framework 138
Tool 4: Due diligence of organisations 139
Tool 5: Designing an M&E framework for programmes and portfolios 144
Tool 6: Designing an evaluation plan for a programme 144
Tool 7: Data collection 147
Tool 8: Evaluation report 150

THE AUTHOR

Reana Rossouw is one of Africa's leading experts on social innovation, social investment, as well as development and impact assessment. As director of Next Generation Consultants, a specialised management consultancy, she believes strongly in contributing to the development of the sector.

Reana is an alumna of Stellenbosch University's business school. She has more than 30 years' experience in business management at senior executive and director level. She has a reputation for achieving business success, developing leading brands and innovative industry, product and service solutions. She has worked in several industry sectors, including donor, philanthropy and corporate grantmaking, information and communication technology, mining, agriculture, manufacturing, retail and media, among others. Her experience in these sectors is the basis of her expertise in creating and implementing strategies and brands for innovation, growth and sustainability. Reana is regarded as a visionary and has distinguished herself as one of Africa's leading experts in social innovation, social investment, and development and impact assessment.

REANA HAS BEEN ACKNOWLEDGED FOR HER CONTRIBUTIONS BY THE INDUSTRY, BUSINESS PARTNERS AND PEERS:

In 2014, she was the winner of the Top 20 Most Influential Women in Business and Government in Africa, representing the SME sector, the Gauteng region and the SADC region.

In 2013, she was the winner of the Top 100 Most Influential Women in Business and Government in South Africa and the SADC region (CEO magazine).
In 2013, she was nominated for the Business Women's Association of South Africa's Top 100 Women Business Awards in the categories Women-owned Business of the Year Award and Top Female Entrepreneur of the Year Award.

In 2010 and 2011, she was a finalist in the Shoprite Checkers Woman of the Year Competition (business category).

In 2009, she was the winner of the South African Council for Business Women's Business Woman of the Year Competition (small business category).

Reana is an accredited fellow of the Institute of Directors (IoD) and an accredited trainer of the Global Reporting Initiative (GRI). She has published extensively and spoken at local and international conferences. Copies of the articles, research papers, presentations and whitepapers are available at:
www.nextgeneration.co.za
www.linkedin.com/company/next-generation-consultants
plus.google.com/+reanarossouw
www.pinterest.com/reanarossouw/
www.facebook.com/nextgenerationconsultants/
www.slideshare.net/Reana1

THE COMPANY

Next Generation Consultants is a management consulting firm that specialises in various aspects of social innovation to address the most pressing economic, social and environmental challenges while contributing to the success of the business, the environment and the communities involved. The company offers advisory and consulting services, research and development services, impact and return on investment assessments, and capacity development and training.

Based in Johannesburg, South Africa, Next Generation works across Africa, utilising innovative solutions to contribute to the future sustainability of the continent, its enterprises and its people. Next Generation consists of independent industry specialists and subject experts. Teams are dynamically put together to ensure that clients' requirements are met with insight, relevant experience, global understanding and industry knowledge. The company's experience is with multinational, public and private entities, as well as small, medium and family-based businesses in the for-profit and not-for-profit sectors. Next Generation has proved its ability to work seamlessly in complex, multidimensional environments to deliver innovative services and solutions.

In the field of measuring impact and return on investment of development programmes and interventions, Next Generation has done groundbreaking work. The Investment Impact Index™ is widely recognised as pioneering work in the community development, socio-economic development and humanitarian aid sectors in Africa.

Striving to contribute to Africa's continuous economic transformation, the company aims to improve the competitiveness, growth and sustainability of all companies in an economically, environmentally and socially responsible way. It is committed to transforming business into successful, profitable, sustainable and responsible enterprises that deliver shared value. The company upholds the same standards, frameworks, guidelines and codes of conduct for ethics, compliance, transparency and fairness as its clients.

NEXT GENERATION IS AN ACTIVE MEMBER OF THE FOLLOWING ORGANISATIONS:
• Africa Market Research Association (AMRA)
• Southern African Market Research Association (SAMRA)
• South African Monitoring and Evaluation Association (SAMEA)
• Institute of Directors (IOD)

The awards and recognition Next Generation has received are indicative of the consultancy's success and serve as an inspiration to think bigger, reach higher and be bolder in service of clients.

SERVICES

Advisory and consulting
• Social innovation strategies
• Circular economy strategies
• Shared value strategies
• Social capital strategies
• Social enterprise and entrepreneurship strategies
• Social and impact investment strategies
• Human rights and stakeholder management strategies

Research and development services
• Industry research
• Reviews, opinions, sector comparative research and benchmarking
• Baseline studies and due diligence
• Socio-economic and perception surveys
• Social impact, opportunity and management assessments
• Performance measurement and management services

Impact and return on investment assessments

Capacity development and training
• Tailored, onsite solutions
• Annual master class events

THE COMPANY IS PROUD AND HUMBLED BY THE RECOGNITION OF ITS PERFORMANCE OVER THE LAST FEW YEARS:

• Nominated for the 2017 South African Business Awards by Global Media.
• Nominated for the Global Women Leadership Achievement Awards (India) in 2016.
• Nominated by Impumelelo magazine as a leader in the African Transformation and Empowerment Awards (2015).
• Nominated for the Best South African Company SMME Awards – African Growth Institute (2007–2015).
• Nominated for the Most Empowered South African Companies – Topco (2014 and 2015).

Next Generation's deep understanding of the continent, its people and social conditions has led to the development of uniquely African business models, strategies, stakeholder engagement and human rights management approaches.

ABOUT THIS GUIDE

This guide for development practitioners focuses on monitoring and evaluation (M&E), encompassing impact assessment (IA). It has been developed from Next Generation's work over two decades and is the result of development work across Africa.

Development practitioners are aware of the need to improve current and future programmes by planning and implementing their interventions better. There is also increased pressure on development organisations to demonstrate the impact of their activities by implementing effective M&E. While there is growing interest in M&E, there is often confusion about what exactly it entails. The purpose of this guide is to strengthen awareness about M&E, engage interest in it and clarify what it is all about, specifically for development practitioners.

RIGHTS AND PERMISSIONS

The material in this guide is copyrighted. Copying and/or transmitting all or portions of this work without permission may be a violation of applicable law. However, Next Generation encourages dissemination of the work and will usually grant permission promptly.

All rights reserved
Produced in South Africa
First release: June 2017

The publication of this guide is based on extensive consulting assignments for numerous clients in Africa. This guide is an output of an initiative aimed at building knowledge and capacity for monitoring and evaluation across development agencies.
ACKNOWLEDGEMENTS

ICON REFERENCES

For easy navigation through the guide, these icons have been used:
• Tips: Useful information, key points and guidelines for each section
• Important information: Significant content, data or figures
• Definition: Explaining the meaning of a word, theory or concept
• Questions: Insights to guide practitioners with practical decisions or a course of action

"One of the great mistakes is to judge policies and programmes by their intentions rather than their results." – Milton Friedman

"The pure and simple truth is rarely pure and never simple." – Oscar Wilde

"True genius resides in the capacity for evaluation of uncertain, hazardous and conflicting information." – Winston Churchill

"Fear cannot be banished, but it can be calm and without panic; it can be mitigated by reason and evaluation." – Vannevar Bush

SECTION ONE
THE IMPORTANCE OF MEASUREMENT

WHY MEASUREMENT MATTERS

THE CONTEXT OF MEASUREMENT

Before looking at the how of measurement, it is important to address the motivation of why it is necessary. What a funder, intermediary or beneficiary ultimately expects to learn from the outcomes or impact of its interventions will help dictate its approach to measurement.

There is a lot of pressure on development practitioners. Some of this pressure stems from internal sources, creating a need for accountability, to demonstrate how funding is used and which social and business outcomes are achieved. External pressure exists as well, due to the increasing savviness and expectations of consumers and the general public, who believe that funders have a role to play in making positive contributions to society.

Measurement increases programme effectiveness by using results to learn and continuously improve development practices. This educational aspect can often be overlooked in the rush to focus on end results, but it is an important component of effective measurement. Organisations that measure are better able to adapt programmes to changing circumstances, faster and more effectively; they also make better resource allocation decisions. With experience, and over time, organisations can identify with increasing confidence the aspects of their programmes that drive results, and the corresponding measures that give them the most valuable information. They are then able to reduce the time and expense of measurement.

Social impact is linked to funder or investor impact. The two cannot be treated as mutually exclusive. Strong partnership programmes and the demonstration of positive results from social investments are factors that can influence or lead to desired business outcomes, such as improved profit, increased employee loyalty and an enhanced reputation. The aim of measurement is to show stakeholders that grantmaking is strategic, cost-effective and value-enhancing with the resources entrusted to the organisation, and to live up to the expectation of being accountable and responsible.

Social impact: The demonstration of positive long-term social outcomes. This is one of the leading motivations for measuring social investment and development. It is nearly equally important to be able to demonstrate to executives, boards, management and other stakeholders that resources are achieving the desired outcomes in the form of returns.
CHALLENGES OF MEASUREMENT

Measuring the impact of corporate social investments is important, complex and an ongoing challenge for practitioners in the field. Some of the most common difficulties are listed below.

Social change is inherently difficult to assess
Sought-after behaviour, skills and community changes may be long-term, hard to quantify and complicated to express in tangible terms. In addition, attributing a specific social change to a social investment intervention adds another layer of difficulty. It often takes a long time before final impact can be observed, and this involves a lengthy measurement process. One must be able to establish statistically validated evidence and causality between services delivered and observed impact (i.e. change achieved) to prove without a doubt that the programme in question is delivering per the stated strategic objectives and intent.

There is a lack of common standards for impact measurement
There is no uniform consistency around the definitions of measurement-related terms, no single shared approach or methodology to measurement that fits all programme types, and no common outcomes and metrics that have been adopted as universally accepted standards to use when measuring social change. This inhibits the ability of funders and donors to easily compare programmes, benchmark their activities against peers and validate whether their methodologies and metrics are the "right" or "best" ones to determine investment results and programme outcomes.

Non-profits have varying expertise in, and capacity for, measurement
While accountability and a focus on results have increased in the social sector, many non-profit organisations do not have the level of skills and/or resources to invest in the type of robust measurement that a corporate or business social investor requires. The biggest barriers are, among others, insufficient time, a lack of the required staff expertise and competencies, and a lack of financial resources to appoint outside experts to help collect data and conduct evaluations.

Funders, investors and donors may have insufficient resources to invest in measurement
Like the development sector, social investors often lack the staff skills and/or budget to fully invest in a measurement process that will provide the quality and quantity of data they aim to collect and communicate.

Measurement is not easy to institutionalise in an organisation
While a measurement model or framework may hold promise in theory, embedding the data collection, analysis and communication aspects of measurement is time-consuming and potentially costly. If not adequately integrated, measurement can be an inefficient process that expends resources better directed to other aspects of programme management and implementation.

It is difficult to identify the investor's, funder's or donor's impact or return
While there is consensus in the development sector that support of social programmes creates positive returns for investors, it remains challenging to translate these benefits into tangible bottom-line results, such as increased employee loyalty, improved reputational awareness, enhanced stakeholder relations and increased revenue.

WHAT MATTERS IN MEASUREMENT

The concepts of measurement, monitoring, evaluation and impact assessment have become more critical in the development sector.
Although it has always been important, there has been a realisation that assessment, done right, can achieve three things:
1. Strengthen grantor and grantee decision-making and partnerships
2. Enable continuous learning and improvement of development practices
3. Contribute to sector-wide learning and sustainable development in development portfolios

Irrespective of the form of assessment chosen, the following aspects are important in measurement, assessment, due diligence and evaluation practices:

Definition matters: Different terminologies can undermine efforts by grantors and grantees to collaborate effectively in the design and implementation of an M&E system. Many use the terms evaluation, impact assessment and monitoring and evaluation interchangeably. In fact, M&E practices encompass activities with distinct purposes, methods and difficulty levels. As such, M&E falls into three separate categories undertaken at three stages:
1. Theories of change described and logic models devised during the initial design of a project or initiative, supported by a due diligence/audit process before funding an initiative.
2. Tracking progress against the strategy set during the life cycle of an initiative (monitoring and evaluation).
3. Assessing impact after the fact.

The first of these (theory of change and logic models) is essential background for M&E, and the three together (monitoring, evaluation and impact assessment) provide a useful means of organising the various activities and purposes of M&E. The second enables a grantor and grantee to gain the information needed to make mid-course corrections to the strategy and intervention, and to learn throughout the process. The third activity – assessing impact – is the most daunting, as it determines whether the intended outcomes have been realised.

Purpose matters: At its best, M&E informs decision-making and provides for continuous learning. All parties (grantor, investor, grantee/intermediary and beneficiaries) need to agree from the beginning which benchmarks for success are expected at each stage of the development intervention, and why.

The cost-benefit ratio matters: The cost of M&E places a burden on all stakeholders. Consider the use or outcome of M&E before starting the measurement process. Failure to do so can lead to excessive and unnecessary data gathering in the search for evidence of impact. The contributory cost of the M&E process is fourfold:
1. It is a burden to grantees, creating surplus work for often tightly staffed and financially strapped non-profit organisations.
2. It undermines the quality of data, because grantees will only provide the requested or specified information to meet their grant obligations. They may not have the time, skills or resources to supply the insight and knowledge that is often more valuable for learning than data that only provides statistical evidence of expenditure.
3. It inundates and burdens grantor, donor or investor staff with information they may not have the time or insight to use effectively.
4. It may not provide the actual information that is needed to understand how effective the initiatives and grantmaking practices, systems and processes are.

Culture, context and capacity matter: M&E requires a commitment to building capacity at grantor, grantee/intermediary and beneficiary level, and in the field of evaluation practice in general.
Additional insight is required to understand the social, cultural, political and geographic landscape in which an evaluation will be conducted. Similarly, understanding the specific social context or issue (e.g. education or health) and various cultural (e.g. language or gender) contexts in a social (e.g. rural or urban) setting or social development discipline (e.g. infrastructure, materials or training) is also required. This deep-seated knowledge of various contextual settings requires specialised skills.

The unit of analysis matters: The good – and bad – news is that all activities can be evaluated. It is important to sort out the different units of analysis that will be used. In general, there are three levels of focus:

At the strategy level, the measurement focus should be on measuring outcomes over impact, on assessing contribution rather than attribution, and on the degree of success that can be achieved among grantees, intermediaries or beneficiaries pursuing a given or specific strategy, strategic intent or objectives.

At the portfolio level, the funder, grantor or investor should use intermediary (grantee) reported data on outputs and outcomes (on beneficiaries) to signal whether the initiative is making progress, track the programme management activities and capture intended as well as unintended consequences of the programme, so that detailed analysis and knowledge can be applied in proactive learning.

At the programme level, the funder should align expected results with strategic intent and objectives, track inputs, activities, outputs and outcomes at critical points to manage and adjust each intervention appropriately, and measure the impact (actual change) as intended on beneficiaries.

Timing matters: Just as the units of measurement and analysis differ, so do the time horizons to measure the performance of initiatives. A variety of short-term, medium-term and long-term metrics (indicators) are useful in assessing the outcomes of an intervention. The annual grantmaking cycle poses a structural barrier to longer-term programmes, as programme goals and objectives rarely follow annual funding or development cycles.

Feedback from grantees, intermediaries and beneficiaries or recipients matters: M&E must incorporate the viewpoints and observations of the funder, the intermediary and ultimately the beneficiaries through all the stages of work – identifying problems, co-creating solutions and implementing a shared vision of outcomes. The reality is that social and community development will only achieve its purpose when the voices of those whose lives we seek to improve are heard, respected, integrated and internalised in our understanding of the problems we seek to solve.

Transparency matters: Although the goals of M&E are to inform decision-making and enable continuous learning by funders, there is a larger community to serve and a larger purpose to pursue. By publicly sharing the data gathered and the conclusions reached, funders, beneficiaries and intermediaries can contribute to sector-wide learning.

Proportionality matters: This means ensuring that the resources expended on evaluating the outcome of an intervention are proportionate to the financial resources spent on achieving the outcome. Proportionate measurement processes imply that several different approaches and methodologies be used that are appropriate to the capacity, outcome, impact and nature of the intervention.
Comparability matters: If several frameworks are used to measure social value, recognising that there will never be a "one size fits all" model, care must be taken with data analysis. This knowledge and insight will enable managers of individual programmes, as well as of combined portfolios across an investment and development portfolio spectrum, to recognise the need to use different tools, approaches and methodologies, not only to understand their own programmes and portfolios better, but to assist intermediaries in comparing themselves and their development approaches to their counterparts. Perhaps, in the longer term, the wider use of standardised, comparable measurement and assessment models, methodologies and approaches will then become a more realistic vision.

Standardisation matters: This is a basic requirement for comparability; it is therefore important to use generally accepted measures of value (indicators) to support M&E practices across programmes and portfolios, to ensure that the evidence of impact is credible, reliable and trustworthy. This is important so that not only are informed decisions made, but the applied approaches can be compared within programmes, across portfolios and to standardised development models.

SECTION TWO
THE BASICS OF MEASUREMENT

The importance of measurement:
• It provides the only consolidated source of information to showcase project progress.
• It allows stakeholders to learn from each other's experiences, building on expertise and knowledge.
• It often generates reports that contribute to transparency and accountability, and allows lessons to be shared more easily.
• It reveals mistakes and offers paths for learning and improvement.
• It provides a basis for questioning and testing assumptions.
• It provides a means for all stakeholders seeking to learn from their experiences and to incorporate insight into policy and practice.
• It provides a way to assess the crucial link between investors, intermediaries and beneficiaries.
• It adds to the retention and development of institutional memory.
• It provides a more robust basis for raising funds and influencing policy.

KEY CONCEPTS OF MEASUREMENT

In the measurement, evaluation and impact assessment context, there are a few principles of good practice and key frameworks that should be noted. These form the cornerstones of good performance measurement and management practice.

LOGIC FRAMEWORK MODELS

WHAT IT IS

The logic model is a picture of a process that illustrates how an intervention will achieve its stated objectives and outcomes. It also indicates and illustrates the theory and assumptions underlying the programme, portfolio or strategy. The purpose of a logic model is to illustrate how a programme will work by linking outcomes to the resources invested, connecting programme outcomes with activities and processes, as well as identifying the theoretical assumptions and principles of development underlying the programme theory.

Elements of a programme logic model: assumption context, problem statement, implementation, outcomes, impact.

WHY IT IS IMPORTANT

The logic model framework facilitates effective programme planning, implementation and evaluation through the identification of an organisation's planned work and intended results.
Using a logic model is an effective way to ensure programme, portfolio or strategy success, as it helps to organise and systematise programme planning, management and evaluation functions.

1. In programme design and planning, a logic model serves as a planning tool to develop a programme strategy and to explain and illustrate programme concepts and approaches for key stakeholders, including funders. Logic models also link programme structure and outcomes to programme design, and ensure a shared understanding of what is to take place. During the planning phase, developing a logic model requires all stakeholders to consider other best practice development approaches. It combines practitioner experience with industry knowledge and subject expertise to guarantee specific outcomes.

2. In programme implementation, a logic model forms the core of a focused project management plan that helps to identify and collect the data needed to monitor and improve programme or project management tasks. Using the logic model during a programme implementation and management process ensures a focus on achieving and documenting results. It also prioritises the programme aspects that are most critical for gathering, tracking and reporting on specific deliverables, and assists in making the necessary data adjustments during the intervention to effect the desired outcomes.

3. For programme evaluation and strategic or management reporting, a logic model provides detailed programme information that indicates progress toward strategic goals in ways that inform programme approaches.

Using frameworks is one way to develop a clearer understanding of the goals and objectives of a project, with an emphasis on identifying measurable objectives and outcomes for the short, medium and long term. "Logical framework" or "logframe" describes a general approach to project or programme planning, monitoring and evaluation, and – in the form of a logframe matrix – a discrete planning and monitoring tool for projects and programmes. Logframe matrices are developed during project or programme design, planning and appraisal stages, and are updated throughout implementation, while remaining an essential resource for ex-post evaluation.

A logframe is another name for logical framework, a planning tool consisting of a matrix which provides an overview of a project's goal, activities and anticipated results. It provides a structure to help specify the components of a project and its activities, and for relating them to one another. It also identifies the measures by which the project's anticipated results will be monitored.

HOW IT WORKS

FIVE ESSENTIAL COMPONENTS OF A LOGIC MODEL

INPUTS: The resources invested in a programme, for example technical assistance, products, services, cash, infrastructure, training, skills or time.

ACTIVITIES: The activities carried out to achieve the programme's objectives, for example classes in science and maths, tutoring, or providing wheelchairs, meals or books.

OUTPUTS: The immediate results achieved at the programme level through the execution of activities, for example the number of classes, number of meals served or number of children assisted.

OUTCOMES: The set of short-term or intermediate results achieved by the programme through the execution of activities, for example improved skills, behaviour changes or the number of jobs created.

IMPACTS: The long-term effects, or end results, of the programme, for example changes in health status, employment status or economic growth.

Inputs (or resources) are used in processes (or activities) that produce immediate or intermediate results (or outputs). These lead to longer-term or broader results (or outcomes) and ultimately to impacts: changes that are evident and can be attributed to a specific intervention.

THE PURPOSE OF A LOGFRAME OR LOGIC MODEL

A logframe is a table that lists programme activities, short-term outputs, medium-term outcomes and long-term goals. It is supposed to show the logic of how the activities will lead to the outputs, which in turn lead to the outcomes, and ultimately the stated goal/objective (impact). A logframe differs from a theory of change.
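To make the chain from inputs to impacts concrete, the following is a minimal sketch in Python of how a logic model could be captured as a simple data structure and flattened into logframe-style rows. The programme, the entries and the field names are purely illustrative assumptions, not a prescribed format or part of any particular methodology.

```python
# A minimal sketch of a programme logic model as a simple data structure.
# The programme, entries and level names are hypothetical illustrations.
from dataclasses import dataclass, field
from typing import List


@dataclass
class LogicModel:
    inputs: List[str] = field(default_factory=list)      # resources invested
    activities: List[str] = field(default_factory=list)  # what the programme does
    outputs: List[str] = field(default_factory=list)     # immediate, countable results
    outcomes: List[str] = field(default_factory=list)    # short/medium-term changes
    impacts: List[str] = field(default_factory=list)     # long-term change

    def as_logframe_rows(self):
        """Flatten the model into logframe-style rows of (level, statement)."""
        rows = []
        for level in ("inputs", "activities", "outputs", "outcomes", "impacts"):
            for statement in getattr(self, level):
                rows.append((level, statement))
        return rows


# A hypothetical maths-and-science tutoring programme
model = LogicModel(
    inputs=["Funding", "Tutors", "Learning materials"],
    activities=["Weekly maths and science classes", "Teacher training"],
    outputs=["Number of classes held", "Number of learners reached"],
    outcomes=["Improved pass rates in maths and science"],
    impacts=["Improved employment prospects for learners"],
)

for level, statement in model.as_logframe_rows():
    print(f"{level:<10} {statement}")
```

In practice the same information usually lives in a planning matrix or spreadsheet rather than in code; the point of the sketch is only that each level of the chain should be stated explicitly so that it can later be monitored and evaluated.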
THEORY OF CHANGE

WHAT IT IS

A theory of change (ToC) shows an organisational or programme path from needs to activities, outcomes and impact. It describes the desired change and the steps involved in making it happen. Theories of change also depict the assumptions behind developmental reasoning, where possible backed up by evidence.

A good ToC can reveal:
• whether your activities make sense, given your goals
• whether there are things you do that do not help you achieve your goals
• which activities and outcomes you can achieve alone, or not
• how to measure your impact

Theories of change are often shown in a diagram, allowing you to see the causal links between all the steps. The development sector is too complex and messy to reflect comprehensively in a diagram. But that is where the ToC approach has real value: it forces you to take a clear, simple view, crystallising your work into as few steps as possible to capture the key aspects of what you do.

Theories of change grew out of evaluation planning techniques, such as logic models. They were designed to be more helpful in planning complex interventions than other methods, because they show a more detailed causal model to explain why a strategy or intervention will work.

A ToC can be useful in three important ways:
• for organisational, programme or portfolio strategy
• for programme or portfolio evaluation
• for thinking about your place in the development sector or in a development context (e.g. education or health)

WHY IT IS IMPORTANT

In best practice, a clearly articulated ToC is a prerequisite to effectively measuring social outcomes. It can be used to help determine the social impact a programme intends to have, why change may or may not occur, and what should be measured. In principle, a ToC should assist with:
• articulating goals, internally and externally, and how they will be achieved
• developing a better understanding of the programme or intervention (including breaking down its parts and the interactions between these parts and certain outputs and outcomes)
• guiding programme planning, design, management and the execution of measurement and evaluation
• formulating and prioritising meaningful measurement questions and the scope of what should or can be measured
• identifying intended and unintended side effects and potential risks
• determining programme effectiveness and assisting with explaining cause and effect associations

HOW IT WORKS

A theory of change is a comprehensive description and illustration of how and why a desired change is expected to happen in a particular context.
It is focused on mapping out or "filling in" what has been described as the "missing middle" between what a programme or change initiative does (its activities or interventions) and how this leads to desired goals. It does this by first identifying the desired long-term goals and then working back to identify all the conditions (outcomes) that must be in place (and how these relate to one another) for the goals to occur. These are all mapped in an outcomes framework. The logic model framework then provides the basis for identifying what type of activity or intervention will lead to the outcomes identified as preconditions for achieving the long-term goal.

With this approach, the links between activities and the achievement of long-term goals are more fully understood. This leads to better planning, because activities are linked to a detailed understanding of how change happens. It also leads to better evaluation, as it is possible to measure progress towards the achievement of longer-term goals that go beyond identifying programme outputs.

Developing a theory of change
• What is the problem you are trying to solve?
• Who is your key audience?
• What is your entry point to reaching your audience?
• Which steps are needed to bring about change?
• What is the measurable effect of your work?
• What are the wider benefits of your work?
• What is the long-term change you aim for?

APPLICATIONS OF THEORIES OF CHANGE

Theories of change reflect development practices at different levels of design:

Worldview: Personal beliefs and understanding of how change happens, and why; the social and political theories and development perspectives that inform our thinking.

Organisational theory of change: Vision, mission, organisational values, strategic preferences and the role of the organisation in social change, and its contribution to it.

Portfolio theory of change: How an organisation or team expects change to evolve in a specific area (subsystem, sector or thematic area), why, and its own role and contribution.

Project or programme theory of change: The analysis and intervention logic of a project or programme to achieve a specific change objective in a specific context, including its assumed contribution to longer-term social change. This relates to the thematic or organisational ToC.

USING A THEORY OF CHANGE FOR EVALUATION

Many organisations are keen to measure their impact, but are unsure where to start. A ToC is a crucial basis for measurement, because it provides a theoretical framework that can be used to assess whether an intervention is working as planned and how it can be improved.

Understanding all outcomes
For an evaluation or measurement framework to be successful, it must measure the right things. Typically, a ToC shows what an organisation is trying to achieve (e.g. improve science and maths pass rates) and how it is planning to get there (e.g. through teacher interventions). Organisations can then determine if they are achieving their intended outcomes. If measurement is not based on a ToC, it risks not measuring the most important things and can therefore waste money. A ToC can identify key outcomes that must be measured. These might be intermediate outcomes that lead to many other outcomes, or they might be outcomes that distinguish this intervention from the usual practice.
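As a purely illustrative aid, the Python sketch below shows one way the backwards mapping described above could be represented: each outcome records the preconditions it depends on and the indicator by which it will be measured, and walking back from the long-term goal surfaces any outcome that still lacks an indicator. The goal, outcomes and indicators are hypothetical examples under assumed programme details, not recommendations.

```python
# A minimal sketch of backwards mapping in a theory of change: each outcome
# lists the preconditions that must hold for it to occur, and whether an
# indicator has been defined for it. All names are hypothetical illustrations.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class Outcome:
    description: str
    preconditions: List[str] = field(default_factory=list)  # outcomes this one depends on
    indicator: str = ""                                      # how it will be measured


toc: Dict[str, Outcome] = {
    "goal": Outcome("Improved literacy among primary school learners",
                    preconditions=["reading_habit", "teacher_capacity"],
                    indicator="Standardised literacy test scores"),
    "reading_habit": Outcome("Learners read regularly at home",
                             preconditions=["books_available"],
                             indicator="Parent-reported reading frequency"),
    "teacher_capacity": Outcome("Teachers use improved literacy teaching methods",
                                preconditions=[],
                                indicator=""),  # measurement gap
    "books_available": Outcome("Age-appropriate books available in homes",
                               preconditions=[],
                               indicator="Books distributed per household"),
}


def walk_back(key: str, depth: int = 0) -> None:
    """Print the causal chain from the goal back to its preconditions,
    flagging outcomes that still lack an indicator."""
    outcome = toc[key]
    gap = "" if outcome.indicator else "  <-- no indicator yet"
    print("  " * depth + f"- {outcome.description}{gap}")
    for pre in outcome.preconditions:
        walk_back(pre, depth + 1)


walk_back("goal")
```

The same exercise is normally done on paper or in a participatory workshop diagram; the value lies simply in making every precondition, and the measure attached to it, explicit.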
Ensuring that outcomes are realistic
Many organisations have grand aims, such as alleviating poverty. Aims like these are too large for an organisation to achieve on its own, so it is not sensible to think about how to measure them. A ToC helps organisations focus on concrete, defined aims and outcomes that are potentially measurable.

Understanding how outcomes are connected
Organisations that base their measurement on a ToC can understand how as well as whether change is happening. This means that outcome measurement can feed into the organisational strategy, to ensure that resources are allocated well. It also means that organisations can adapt their programmes to what works and predict what will happen because of their activities.

Understanding progress towards the final goal
Some organisations' final goals cannot be measured easily. They involve change that happens too gradually or takes place in the lives of people who are difficult to track. Because theories of change show all the intermediate steps that lead to the end goal, they can help organisations work out whether they are making a difference towards that end goal. The ToC provides evidence for why these intermediate outcomes are a good way to achieve the long-term goal. This can reassure funders that the organisation is making progress and can help them work out which impacts they can attribute to their work. This is particularly useful for organisations that do campaigning or advocacy work.

USING A THEORY OF CHANGE FOR ORGANISATIONAL STRATEGY

A ToC is an excellent basis for a strategic plan, because it works methodically from the need you are trying to address to the change you want to achieve. Thinking about the organisation's ToC at the start of a strategic review can help staff, management and trustees to focus on the goal. It ensures that causal links, supporting evidence and different stakeholders' viewpoints are considered. Instead of becoming fixated on what the organisation is currently doing, it draws people's minds to the activities that are needed to achieve the goals.

FOCUSING ON THE GOAL

The process of developing a ToC starts with the goal, vision, mission and objectives of the organisation, then works backwards through the steps that are needed to achieve it. Most organisations are not used to this backwards mapping, as they tend to think in terms of the activities they already do. However, backwards mapping is important because it means that everything that is needed to achieve the goal is contained in the ToC, not just the organisation's current activities. This can open up new opportunities, such as discussions about how to work more closely with others, or whether to consider mergers or partnerships.

SHOWING THE CAUSAL LINKS

By developing a ToC, organisations can understand how the various aspects of their work are linked to achieve their final goal. Good strategies involve considering the alternatives and only discounting an option based on evidence. A ToC provides a coherent framework in which various strategies can be looked at and the evidence for and against each can be weighed up. This is brought about through the processes of backwards mapping and thinking through the causal links, which can help determine the right course of action. This allows management and practitioners to consider the importance of each activity and what resources should be invested in it.

REVEALING HIDDEN ASSUMPTIONS

Working through a ToC can reveal assumptions in an organisation's strategic plans that otherwise might go unnoticed.
For example, a charity that works with children for a school term to improve their literacy might question whether a term is the right length of time for the intervention to make a difference. How did that time limit come about? Was it based on evidence or on another constraint, such as funding? If it was based on a constraint, is it still in place, or will it be possible to change the time limit and perhaps improve outcomes? Once assumptions are revealed through this kind of questioning, it is easier for practitioners, management and trustees or boards to determine if they are right and whether what they are doing is likely to work in the best possible way.

BASING THE STRATEGY ON EVIDENCE

A good strategic plan should be based on evidence and revised as more evidence is collected about whether an approach works. The ToC process lays out all the evidence the interventions are based on. Ideally, organisations should start with a ToC based on evidence that is revised as the work continues. If there is no evidence to start with, it becomes even more important to review the ToC regularly.

USING THE VIEWS OF STAKEHOLDERS

A ToC is only as good as the views of the people who build it, which is why it is recommended that theories of change be developed in participatory workshops. This allows people with diverse experiences to think through the planned or likely outcomes and the causal links.

USING A THEORY OF CHANGE TO THINK ABOUT YOUR PLACE IN THE SECTOR OR PORTFOLIO

Theories of change are useful for individual organisations and programmes, but can also be used to think more broadly about how different organisations in a sector work together. This can help all the funders working in a field like education or health to achieve greater impact.

COLLABORATION

The process of developing a ToC helps organisations think about collaboration. In working out which outcomes must be achieved to reach the goal(s), you will come across outcomes that your own activities do not achieve. The next step is to think about who is achieving those outcomes and how closely you need to work with them to ensure results. Sometimes it is enough just to be aware of who is doing this work. At other times, you may need to work together closely. Groups of funders that use a ToC in this way are better able to build common strategies to increase their impact, to think about weaknesses and to identify where new approaches need to be built.

Thinking carefully through the assumptions behind a ToC can help you work out if it would be possible to work with another organisation. For example, if you are campaigning for a change in behaviour, you might think that it is important to persuade the public of your point of view and provide behaviour change incentives. Another organisation might think it is more important to campaign for a legislative or policy change. These different approaches might make it difficult to work together.

MEASUREMENT FOR THE SECTOR

In the same way that a ToC is a good basis for an organisation's impact measurement, it can be used to help a group of donors or investors in a particular sector to think about how they might measure common outcomes together. This means that organisations can share the cost of developing measurement techniques, and it can make it easier for funders to understand and compare investors' outcomes.
USING A THEORY OF CHANGE IN PROGRAMME MANAGEMENT

PROGRAMME OR PROJECT DESIGN

A ToC process for programme or project design takes place as part of the planning, preparation or inception phase. It entails a broad analysis of the system that needs transformation, identifying and involving key actors, initial programme design and strategic choices, and identifying critical assumptions. It forms the basis of adaptive management and monitoring, evaluation and learning (MEL) during implementation. The ToC products are used for internal and external communication about the initiative.

REVIEW AND/OR QUALITY AUDIT OF AN EXISTING INITIATIVE

A ToC process for the review or quality audit of an existing programme or project aims to improve its quality, revisit and sharpen strategies, clarify underlying assumptions and adjust strategies and operational aspects to changed realities. The outcomes of the review may be used to adapt plans and implementation, improve the MEL process or framework, and support communication about the programme and its results. A review can also be done to prepare for a new phase of an initiative.

An organisation's theory of change should evolve from being based mainly on assumptions about what works (for example, assumptions that were made when the organisation was founded) to being based more on evidence about what works.

STRATEGIC LEARNING DESIGN AND KNOWLEDGE GENERATION

A ToC process is an effective way to identify knowledge gaps and learning or research questions. It helps to create a structure to build an evidence base about what works or not, for whom and why, and under which conditions. In particular, the assumptions identified in all steps of the process offer entry points for questioning, documenting and monitoring of what we think will happen and what happens in reality. The ToC analysis also helps to identify who should participate in the learning process.

EVALUATION

A programme or project ToC provides a good basis for a mid-term review or an ex-post evaluation, as it makes explicit what the initiative aimed to achieve, why and how it was supposed to work, and its key assumptions. The evaluation will seek to substantiate the validity of the ToC, offering important information and insights for a possible next-phase design or for learning with similar initiatives. The findings contribute to the body of knowledge on the intervention's topic, such as the role of women in conflict resolution. Evaluation findings based on a clear ToC provide a sound basis for accountability to investors, either by evidencing the initiative's contribution to the overall goal or by offering in-depth and relevant learnings. If an initial ToC was not developed for the initiative, the evaluation can start with reconstructing its implicit ToC. This offers a good base for the evaluation and will support an improved and shared understanding of the initiative by the funder and other stakeholders. This often leads to improvement of implementation and/or a next phase.

MONITORING

For a multi-actor initiative, jointly undertaking a ToC process is critical to come to a shared understanding, decision-making and ownership of the initiative design and operations. An important product of such a ToC is a collective MEL process and framework for impact monitoring, a condition for joint learning and demonstrating success.
In practice, aligning the systems and MEL practices of the different partners in the project for collective impact monitoring often proves challenging. The ToC process can help to define clear and agreed roles and responsibilities.

SCALING UP AND SCALING OUT

A ToC process can help funders or their partners to analyse the suitability and feasibility of replicating or scaling up and/or out an initiative in a different context. The results will provide insights into the need to adapt the ToC, why and in what way, and will identify assumptions that need to be tested in the new context.

THE DIFFERENCE BETWEEN THEORY OF CHANGE AND LOGICAL FRAMEWORK

In recent decades there has been an ongoing debate in the international development community about the best way to describe how programmes lead to results. One approach has been to use a logical framework (logframe). Another, increasingly popular, approach is to create a ToC. There is no official definition of a ToC or of how it differs from a logframe. Both have the same general purpose: to describe how a programme will lead to results, and to aid critical thinking about this. Some people argue that a ToC is essentially the same as a logframe, and that over time people have simply forgotten how to do logframes properly. Although academics are still debating the relationship between the two formats, in practice there are some differences in how they are used.

At the simplest level, a ToC shows the big, messy, "real world" picture, with all the possible pathways leading to change, and why they lead to change (is there evidence or is it an assumption?). A logframe is like zooming in on the specific pathway a programme deals with and creating a neat, orderly structure. This makes it easier for practitioners, intermediaries and investors to monitor programme implementation.

In practice, a ToC typically:
• gives the big picture, including issues related to the environment or context that one can't control
• shows all the different pathways that might lead to change, even if those pathways are not related to a specific programme
• describes how and why people think change happens
• could be used to complete the sentence "if we do X, Y will change because…"
• is presented as a diagram with narrative text – the diagram is flexible, doesn't have a particular format and could include cyclical processes, feedback loops, one box leading to multiple other boxes, different shapes, etc.
• describes why people think one box will lead to another box (e.g. if they think increased knowledge will lead to behaviour change, is that an assumption or is there evidence to show it is the case?)
• is used as a tool for programme design and evaluation

In practice, a logframe typically:
• gives a detailed description of a programme, showing how its activities will lead to immediate outputs, and how these will lead to outcomes and the goal (the terminology varies by organisation)
• could be used to complete the sentence "we plan to do X, which will give Y result"
• is normally shown as a matrix, called a logframe; it can also be shown as a flow chart, which is sometimes called a logic model
• is linear, which means that all activities lead to outputs, which lead to outcomes and the goal – there are no cyclical processes or feedback loops
• includes space for risks and assumptions, although these are usually only basic; does not include evidence for why people think one thing will lead to another
• is mainly used as a monitoring tool
• is mainly used as a programme design and management tool

COMPARING AND CONTRASTING THEORY OF CHANGE AND LOGFRAME

Theory of change:
• Critical thinking, room for complex and deep questioning
• Explanatory – a ToC articulates and explains the what, how and why of the intended change process, and the contribution of the initiative
• Pathways of change, unlimited and parallel result chains or webs, feedback mechanisms
• Ample attention to the plausibility of assumed causal relations
• Articulates assumptions underlying the strategic thinking of the design of a policy, programme or project

Logframe:
• Linear representation of change, simplifies reality
• Descriptive – a logframe states only what is thought will happen or will be achieved
• Three result levels (outputs, outcomes, impact)
• Suggests causal relations between result levels without analysing and explaining them
• Focuses on assumptions about external conditions

SECTION THREE
THE FUNDAMENTALS OF MEASUREMENT

Monitoring and evaluation (M&E) is an essential part of any programme, large or small. It can tell us whether a programme is making a difference and for whom, and it can identify programme areas that are on target or aspects of a programme that need to be adjusted or replaced. Information gained from M&E can lead to better decisions about programme investments. Additionally, it can demonstrate to programme implementers and funders that their investments are paying off.

THE CORE CONCEPTS OF MEASUREMENT

Monitoring generally involves tracking progress with respect to programme management aspects and objectives, using data that is easily captured and reported on an ongoing basis. While monitoring most frequently makes use of quantitative data, monitoring qualitative data is also possible.

Evaluation involves a systematic, evidence-based inquiry that describes and assesses any aspect of a programme or project. Evaluation uses a wide variety of quantitative as well as qualitative methods, providing more comprehensive information about what is taking place, why, and whether the intervention is appropriate or not, and guidance about future decision-making.

Impact assessment generally shares the basic characteristics of other forms of evaluation. However, as the comparison below suggests, there are significant differences, underscoring the need for a variety of measurement approaches to make impact evaluation meaningful. In essence, impact evaluation or assessment considers the various dimensions of impact (e.g.
social, economic, direct or negative), as well as the extent of the impact (i.e. range, depth and width) on a range of stakeholders (i.e. across a stakeholder value chain).

OVERVIEW OF TYPES OF MEASUREMENT
There are three broad types of measurement, namely monitoring, evaluation and impact assessment.

BASIC CHARACTERISTICS OF MONITORING, EVALUATION AND IMPACT ASSESSMENT

Monitoring:
• Periodic, using data gathered routinely or readily obtainable, generally done internally, usually focused on activities and outputs, although indicators of outcomes or impact are also sometimes used
• Assumes appropriateness of programme activities, objectives and indicators
• Typically tracks progress against a small number of pre-established targets, indicators, objectives or outcomes
• The data is usually quantitative
• Cannot indicate causality
• Difficult to use by itself to assess impact

Evaluation:
• Generally episodic, often externally done
• Goes beyond outputs to assess outcomes
• Questions the rationale and relevance of the programme, objectives, intent and activities
• Identifies planned as well as unintended effects
• Addresses "how" and "why" questions
• Provides guidance for future decisions or programme changes
• Uses data from different sources and a variety of methods

Impact assessment:
• A specific form of evaluation
• Sporadic and infrequent – generally at the end of an intervention
• Mostly externally conducted
• Usually a discrete or longitudinal research study, considering impact over time
• Specifically focused on attribution (causality) in some way, most often with counterfactual evidence
• Generally focused on long-term changes evidenced, such as in the quality of life of intended beneficiaries
• Needs to consider what was done (e.g. through basic M&E practices)
• Considers strategic intent against actual outcomes
• Uses data from different sources and a range of methodologies
• Tests the validity of underlying assumptions, basics of programme theory (ToC), and programme validity (logframe model) to the development context
• Determines programme effectiveness, feasibility, viability and sustainability
• Provides meaningful analysis of outcomes and impact

Measurement is a continuous process that occurs throughout the life of a programme. To be most effective, measurement processes should be planned at the design stage of a programme, and the time, money and personnel that will be required for assessment should be calculated and allocated in advance. Monitoring should be conducted at every stage of a programme, with data collected, analysed and used continually. Evaluations are usually conducted at specific intervals during the life cycle of a programme. They should be planned at the start of an intervention, because they will rely on data collected throughout the programme, with baseline data (data captured at the start of an intervention) being especially important. Impact assessments are usually conducted at the end of a programme. They analyse the outcome of an intervention against strategic objectives and provide detail of the impact across specific and identified impact dimensions.

QUESTIONS TO CONSIDER WHEN PLANNING AN ASSESSMENT
• What information is required to indicate whether the project is working well or not?
• What framework must be developed that will ensure meaningful knowledge or information that will enable decision-making and learning? • What are the questions the evaluation seeks to answer? • What are the skills or competencies required to conduct meaningful monitoring, evaluation and impact assessments? • Whose perspectives and experiences are required during the measurement process? • Which sources of information are required? Is it available? Is it credible? Is it accessible? • How will data be collected? Which data collection tools or methodologies will be used? • How will learning, knowledge or insights be used? • Which lessons and negative impact must be captured? • How will the findings be used (communicated and reported) at the end of the measurement process? MONITORING WHAT IT IS Monitoring a programme or intervention involves the collection of (routine) data that measures programme progress and activities. It is used to track changes in programme performance over time. Its purpose is to permit stakeholders to make informed decisions regarding the effectiveness of programmes and the efficient use of resources. WHY IT IS IMPORTANT Monitoring is sometimes referred to as process evaluation, because it focuses on the programme implementation process and phases, and asks the following key questions: • How well has the programme been implemented? • How have the resources and inputs been utilised, leveraged or applied? • How much does implementation vary from site to site? • Did the programme deliver on the intended activities? At what cost? • What worked and what didn’t? What course correction is required? Next Generation Consultants - All rights reserved 33 THE FUNDAMENTALS OF MEASUREMENT Monitoring is carried out for different purposes, generally having little to do with evaluation. Some of the most frequent reasons for monitoring include: • Internal use by project managers and staff to better understand and track progress, mainly to identify if the project is on target. This includes tracking data on which services are provided, their quality and who receives them, as well as related considerations. Monitoring data can also often serve as an early warning system, and in the case of negative or unexpected findings may suggest the need to consider a change in programme design and implementation approach while the project or programme is still underway. • Internal use at the regional, national and/or international HQ level so that the organisation can track a project’s or activity’s status against project management plans and expectations, for planning and management purposes, as well as to address programme deliverables and outcomes. • Addressing external requirements for governance, compliance and control, such as investor-specific reporting requirements. While these are all legitimate and important reasons for monitoring, none are concerned with contributing to actual impact evaluation practices. The type of data that is collected, and the way in which it is reported, is often not ideal for evaluation purposes. HOW IT WORKS One needs to plan for programme monitoring data to be useful for management oversight and control. In ideal circumstances, those conducting an evaluation can also contribute to the design and structuring of an intervention’s monitoring system. 
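To make the idea of tracking routinely collected data against pre-established targets concrete, the sketch below shows one minimal way a monitoring check could be set up. It is illustrative only – the guide does not prescribe any particular tool or schema – and the indicator names, figures and the 90% "on track" threshold are all hypothetical.

```python
# Minimal, hypothetical sketch of routine monitoring against pre-set targets.
# Indicator names, target values, actuals and the 90% threshold are illustrative only.
from dataclasses import dataclass

@dataclass
class Indicator:
    name: str      # what is being tracked
    target: float  # pre-established target for the reporting period
    actual: float  # value captured through routine data collection

def progress_report(indicators):
    """Compare routinely collected values with targets and flag possible problems early."""
    for ind in indicators:
        pct = (ind.actual / ind.target * 100) if ind.target else 0.0
        status = "on track" if pct >= 90 else "needs attention"
        print(f"{ind.name}: {ind.actual:.0f} of {ind.target:.0f} ({pct:.0f}%) - {status}")

progress_report([
    Indicator("households receiving septic systems", target=400, actual=310),
    Indicator("training sessions delivered", target=24, actual=24),
])
```

Even a simple comparison of actuals against targets along these lines can serve the early-warning role of monitoring described above, provided the data is captured consistently throughout implementation.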
Monitoring: • is an ongoing, continuous process • requires the collection of data at multiple points throughout the programme life cycle • can be used to determine if activities need adjustment during the intervention lifetime to improve desired outcomes Monitoring can take many forms, but mostly consists of quantitative data relating to programme management activities. It is important to note its limitations. Monitoring mainly tracks progress against predefined objectives and indicators, and assumes these are appropriate. But monitoring by itself cannot draw conclusions about attribution, or identify the reasons why changes have or have not taken place (such as the extent to which these changes are a result of the intervention or due to other causes). It is also usually unable to identify unintended effects, gaps in service, etc. For this, one usually needs evaluation. Monitoring questions are like evaluation questions, but will enable keeping track of progress and achievements against specific levels of a programme logic model – activities, short-term and long-term outcomes. The previous questions help focus the information gathered regularly and ensure that it is relevant and useful. The questions below can be a good way of finding information about a programme that may be different from what was set out in the original plan. Questions for each level of a programme logic framework should continually be developed, but every activity or outcome does not need its own questions. The table overleaf contains examples of monitoring questions at different levels of the programme logic. 34 THE FUNDAMENTALS OF MEASUREMENT Level of programme logic Example of monitoring questions Impact How are community members behaving differently after involvement with the programme? For instance, what relationships were formed between the community and local government and what difference do they make? What do we need to change or improve in the programme design, implementation and management process? Long-term outcomes Are participants developing new skills or knowledge as desired? In which areas? How are these skills used? How are participants’ attitudes and behaviours changing after participating in programme activities? Is the correct programme data being gathered? Short-term outcomes How are community members participating and what difference is that making? To what extent are activities seeing the desired shortterm changes that were planned for? Do the data management and collection processes work? Outputs How efficiently and effectively are the planned activities implemented? Who are the main participants in programme activities? To what extent are they participating? Which stakeholders are cooperating most effectively to implement activities? How many of the planned activities took place? What went right or wrong? What can be learned from the implementation process? Inputs How much of the financial resources are spent? How are the human resources coping? Are timesheets completed? Is data captured correctly? How many men, women, boys or girls are participating? How many of the people participating have a disability, or live in a rural, remote or urban setting? Are there any immediate requirements? Are there any immediate risks? Must activities be scaled? Is infrastructure needed to enhance participation? • Results monitoring tracks effects and impacts. 
This is where monitoring merges with evaluation to determine if the project or programme is on target towards its intended results (outputs, outcomes and impact) and whether there may be any unintended impact (positive or negative). For example, a psychosocial project may monitor that its community activities achieve the outputs that contribute to community resilience and ability to recover from a disaster. • Process (activity) monitoring tracks the use of inputs and resources, the progress of activities and the delivery of outputs. It examines how activities are delivered – time and resource efficiency. It is often conducted in conjunction with compliance monitoring and feeds into the evaluation of impact. For example, a water and sanitation project may monitor that targeted households receive septic systems according to schedule. • Compliance monitoring ensures compliance with investor or funder regulations and expected results, grant and contract requirements, local governmental regulations and laws, as well as ethical standards. For example, a shelter project may monitor that shelters adhere to agreed national and international safety standards in construction. TYPES OF MONITORING Next Generation Consultants - All rights reserved 35 THE FUNDAMENTALS OF MEASUREMENT • Context (situation) monitoring tracks the setting in which the project or programme operates, especially as it affects identified risks and assumptions, but also unexpected considerations. It includes the sector or geographic setting as well as the larger political, institutional, funding and policy context that affect the project or programme. For example, a project in a conflict- prone area may monitor potential fighting that could not only affect project success but endanger project staff and volunteers. • Beneficiary monitoring tracks beneficiary perceptions of a project or programme. It includes beneficiary satisfaction or complaints with the project or programme, including their participation, treatment, access to resources and overall experience of change. It is sometimes referred to as beneficiary contact monitoring (BCM), and often includes a stakeholder complaints and feedback mechanism. It should take account of different population groups, as well as the perceptions of indirect beneficiaries (e.g. community members not directly receiving a product or service). For example, a cash-for-work programme assisting community members after a natural disaster may monitor how they feel about the selection of programme participants, the payment of participants and the contribution the programme makes to the community (e.g. are these equitable?). • Financial monitoring accounts for costs by input and activity in predefined categories of expenditure. It is often conducted in conjunction with compliance and process monitoring. For example, a livelihoods project implementing a series of micro-enterprises may monitor the money awarded and repaid, and ensure that implementation is according to the budget and timeframe. • Organisational monitoring tracks the sustainability, institutional development and capacity- building in the project or programme, and with its partners. It is often done in conjunction with the monitoring processes of the larger, implementing organisation. For example, a national society’s headquarters may use organisational monitoring to track communication and collaboration in project implementation among branches and regional offices, countries, etc. 
EVALUATION

WHAT IT IS
Evaluation measures how well the programme activities have met expected objectives and/or the extent to which changes in outcomes can be attributed to the programme or intervention. Evaluation can make a difference and help bring about change. For this to happen, it needs to be underpinned by four core principles which consider the needs of key stakeholders and primary intended users. Evaluators, commissioners of evaluators, M&E officers and other key stakeholders need to understand and agree on these (or other) principles so that they can share common ground and experiences and learn from one another. The four core principles for making evaluations matter are that they should:
1. be utilisation-focused, influence- and consequence-aware
2. focus on stakes, stakeholder engagement and learning
3. be responsive to the situation
4. have multiple evaluator and evaluation roles

DESIGNING AND FACILITATING EVALUATION THAT MATTERS
Programme and policy evaluation is the systematic application of research methods to assess programme or policy design, implementation and effectiveness, and the processes to share and use the findings of these assessments. Evaluation practice is the "doing" of evaluation, evaluation capacity is the ability to do evaluation, and evaluation use is the application of evaluation to a change process. Evaluation field-building refers to the range and diversity of efforts to strengthen practice, capacity and use. Field-building includes, but is distinct from, evaluation capacity-building or professionalisation. Field-building encompasses an understanding that these dimensions exist in a broader context that can support or weaken efforts to strengthen practice, capacity or use. The engagement of stakeholders in an evaluation process makes sense on practical and ethical grounds and will enhance the understanding of the development initiative and the usefulness of the evaluation. Engaging stakeholders in thinking through the possible consequences of choices made in the evaluation process at the individual, interpersonal and collective levels is important when designing and facilitating evaluation processes.
The suggested steps for designing and facilitating evaluation that matters are:

Readiness for evaluation
• Assess ability and readiness for evaluation (of stakeholders, organisations and programmes)
• Agree on participating stakeholders and primary intended users

Focus the evaluation
• Agree on the evaluation purpose
• Agree on evaluation principles and standards
• Consider stakes, stakeholders, evaluation use and consequences
• Articulate the theory of change
• Agree on key evaluation areas and questions
• Further define evaluation boundaries
• Agree on evaluation approach

Implement the evaluation
• Plan and organise the evaluation
• Develop the evaluation matrix
• Identify key indicators and other information needs
• Identify baseline information
• Collect and process data
• Analyse and critically reflect on findings
• Communicate and make sense of findings

COMPLEMENTARY ROLES OF M&E
The following clarifies the distinct roles and values of M&E:

Monitoring:
• Clarifies programme objectives
• Links activities and resources to objectives
• Translates objectives into performance indicators and targets
• Routinely collects data on these indicators and compares results with targets
• Reports progress to managers and alerts them about problems

Evaluation:
• Analyses why intended results were or were not achieved
• Assesses specific causal contributions of activities to results
• Explores implementation processes
• Explores unintended results
• Highlights accomplishments or programme potential, provides lessons learned, offers recommendations for improvement

PURPOSE OF EVALUATION
Evaluation can be carried out for several purposes and take various forms. Some of the following types of evaluation (the list is not exhaustive) may contribute to impact evaluation under certain circumstances.
• Needs assessments involve assessing or evaluating the needs of programme recipients or problem situations, often prior to the initial programme development or project design stages. Such assessments frequently identify ways in which expressed community, recipient or beneficiary needs can be addressed.
• Process (or implementation) evaluation describes the nature of the intervention as it is implemented. To a certain extent, monitoring may be able to provide data about programme activities that can be useful for process evaluation. However, interventions are rarely applied exactly as initially intended and frequently change over time, often for good reasons. It can be surprisingly difficult to determine what is taking place, who is being served, in what ways or to what extent change is evident, and what else is going on that affects outcomes. Process evaluation can go into more detail than monitoring, often explicitly using questions arising from monitoring results as a starting point. Without understanding exactly what the programme is, even the most sophisticated and statistically rigorous evaluation will have little meaning. Evaluations that clearly outline the programme, along with reasons for divergence from original expectations, can provide valuable information to help understand how the programme's outputs might have made an impact (or indicate challenges in a programme's implementation, or underlying assumptions that may impede its ability to make an impact).
For example, if a programme’s impacts were limited, process evaluation data can help ascertain if this was because of a problem in the ToC (how the programme was expected to work), or due to limitations in how it was implemented (management aspects). • Formative evaluation is carried out partway through implementation and is intended to improve performance during the subsequent steps of a programme or project. Formative evaluation can help identify intermediate outcomes, at what point (if any) the intervention seems likely to make an impact and what else may be needed to enhance its effectiveness. • Organisational evaluation (due diligence) looks at an organisation’s overall effectiveness, or that of an organisational unit. Organisational factors (e.g. governance, management, human resources, finances, intra- and inter-organisational relationships) often may have more to do with an intervention’s success than its design does. These factors are essential information that must be considered during the design and interpretation of evaluation. 38 THE FUNDAMENTALS OF MEASUREMENT WHY IT IS IMPORTANT Fundamentally, evaluation is about providing a systematic approach to support learning – learning to understand which programmes worked, or not, and how to effect greater change or impact, and learning about what changes occur (descriptive), how and why change happens, or not (explanatory and evaluative), and the causes of changes or stasis (causal). Different approaches to evaluation can help provide different types of evidence and answer different kinds of research questions. A mix of approaches is often needed to answer relevant questions over the course of a programme’s life cycle. The information that evaluation provides can be put to a range of different uses, including: Designing services: Piloting and refining different parts of a service to develop a model that seems to work. Improving services: Using evaluative learning to adapt and change activities or services to maximise impact. Demonstrating programme efficacy: Answering questions about what works and providing robust evidence of the success of a development model or programme. Sharing ideas and building an evidence base: Providing insight and knowledge about specific social problems and methods of tackling them, to support collective efforts around social change. Maintaining accountability: Providing evidence to stakeholders on how money, resources and effort support the aims, objectives and outcomes of an intervention. HOW IT WORKS Evaluation requires: Data collection at the start of a programme (to provide a baseline) to measure change or impact against, and again at the end, as well as during programme implementation A control or comparison group, to measure whether the changes in outcomes can be attributed to the programme A well-planned and documented programme logic and theory of change design Next Generation Consultants - All rights reserved 39 THE FUNDAMENTALS OF MEASUREMENT When conducting an evaluation, keep the following in mind: • Understand the stakeholder audience information requirements (funder, intermediary, beneficiary, colleagues, community stakeholders, broader public, sector, experts). • Be prospective and proactive: Include evaluation expectations and requirements in the project planning stage. • Choose appropriate evaluation methods and methodologies and tailor them. • Ensure adequate access to key data and information sources. 
• Expand the information sources to trends and benchmarks, if possible, to ensure data comparability. • Keep it real and be proportionate – measurement can evolve over time. • Be flexible and iterative – learning is part of the process. Quality of evaluation practice is critically important. When reviewing or designing an evaluation, consider the following: • Voice and inclusion: The perspectives of beneficiaries, for example people living in poverty, including the most marginalised, and provide a clear picture and context of who an intervention affects, and how. • Appropriateness: The evidence generated through various evaluation processes must be justifiable given the nature and purpose of the assessment. • Triangulation: Conclusions about the intervention’s effects (reach of change) must include a mix of methods, data sources and perspectives. • Contribution: The data evidence must explore how change happens and the contribution of the intervention, as well as the factors outside the control of the intervention. • Transparency: The evidence of change must disclose the details of the data sources and methods of gathering information, the results achieved and any limitations in the data, the data analysis process as well as assumptions made or conclusions drawn. Every evaluation involves one or several criteria by which the merit or worth of the evaluated intervention is assessed, explicitly or implicitly. All evaluations should address the following aspects: • Effectiveness: The extent to which a development intervention has achieved its objectives, taking its relative importance into account. • Impact: The totality of the effects of a development intervention, positive and negative, intended or unintended, qualitative and quantitative, etc. • Relevance: The extent to which an intervention conforms to the needs and priorities of target groups or beneficiaries and social contexts, as well as the laws of recipient countries and specific requirements of donors or investors. • Sustainability: The continuation or longevity of benefits from a programme or intervention after the development assistance stops. • Efficiency: The extent to which the costs of an intervention can be justified by its results, taking alternatives into account. (Note: Effectiveness only refers to the extent to which an evaluated intervention has achieved its objectives. Efficiency refers to the extent to which the cost of an intervention can be justified by its results.) Each one of these evaluation criteria can and must be applied to every development intervention. Each aspect represents something important that needs to be considered before it can conclude whether an intervention could be regarded as a success. BE FLEXIBLE AND ITERATIVE – LEARNING IS PART OF THE PROCESS.“ “ 40 THE FUNDAMENTALS OF MEASUREMENT In many evaluations, procedural values and principles are also used as evaluation criteria. Participation, partnership, human rights, gender equality and environmental sustainability are prominent examples. These are all aspects and principles governing the design and implementation of interventions. Some of these include: • Appropriateness: The extent to which inputs and activities are tailored to local needs and development contexts and the requirements of ownership, accountability and cost-effectiveness. How well did the programmes respond to the changing demands of the situation or context? 
• Coverage: The extent to which the entire recipient or beneficiary group had access to the benefits and were given the necessary support. Key questions include: Did the benefits reach the target group as intended, or did too large a portion of the benefits leak to outsiders? Were benefits distributed fairly between gender and age groups and across social and cultural barriers? • Connectedness: The extent to which programme activities considered longer-term and larger contexts, needs and the interconnectedness of social issues. • Coherence: Consistency between aid and development, trade and humanitarian policies or legislation, and the extent to which human rights of beneficiaries were considered. Important questions are: Were policies mutually consistent, did all actors pull in the same direction, were human rights consistently respected? Did the programme adhere to the social, cultural or political context of the country, social sector or geographic region? TYPES OF EVALUATION There are many different types of evaluation. In general, the type of evaluation refers to the purpose and methodology applied or required. The two most used categories of evaluation are formative or summative. Formative evaluation includes several evaluation types: • Needs assessment determines who needs the programme, how great the need is, and what might work to meet the need. • Evaluability assessment determines whether an evaluation is feasible and how stakeholders can help shape its usefulness. • Structured conceptualisation helps stakeholders define the programme or technology, the target population and the possible outcomes. • Implementation evaluation monitors the fidelity of the programme or technology delivery. • Process evaluation investigates the process of delivering the programme or technology, including alternative delivery procedures. Summative evaluation can also be subdivided: • Outcome evaluation investigates whether the programme or technology caused demonstrable effects on specifically defined target outcomes. • Impact evaluation is broader and assesses the overall or net effects, intended or unintended, of the programme or technology. • Cost-effectiveness and cost-benefit analysis address questions of efficiency by standardising outcomes in terms of their monetary cost and value. • Secondary analysis re-examines existing data to address new questions or use methods not previously employed. • Meta-analysis integrates the outcome estimates from multiple studies to arrive at an overall or summary judgement on an evaluation question. Next Generation Consultants - All rights reserved 41 THE FUNDAMENTALS OF MEASUREMENT Because the development sector is so diverse, dynamic and complex, new approaches to M&E are constantly developed, and new terminology and definitions enter the space. Some of the new concepts and definitions in the context of M&E include: • Mixed methods approach: Mixed methods (MM) evaluations seek to integrate social science disciplines with quantitative (QUANT) and qualitative (QUAL) approaches to theory, data collection, data analysis and interpretation. The purpose of using a mixed methods approach is to strengthen data reliability and the validity of the findings and recommendations, and to broaden and deepen the understanding of the processes through which programme outcomes and impacts are achieved, and how these are affected by the context in which the programme is implemented. 
While mixed methods are now widely used in programme evaluation, and evaluation RFPs frequently require their use, many evaluators do not utilise the full potential of the MM approach.
• Shared measurement: Shared measurement is the product as well as the process of taking a shared approach to measurement. In terms of the product, shared measurement is any tool that more than one organisation can use to measure impact. The process of shared measurement entails understanding a sector's shared outcomes, often mapping out its theory of change. It also involves the engagement and collaboration of all stakeholders needed to result in a shared approach and outcomes.
• Collective impact: Practitioners, funders and policymakers have begun to recognise that solving complex social problems on a large scale can happen more effectively when actors work together, rather than through isolated programmes and interventions. Many organisations in the social sector have embraced the concept of collective impact as a new way to achieve large-scale systems change.
• Developmental or adaptive evaluation: Evaluation is about critical thinking. Development is about creative thinking. These two types of thinking are often seen as mutually exclusive, but developmental evaluation is about holding them in balance. Developmental evaluation combines the rigour of evaluation (being evidence-based and objective) with the role of organisational development, i.e. coaching, which is change-orientated and relational.

SELECTING THE EVALUATION TYPE
Selecting the correct evaluation type depends on:
• The objectives and priorities of the project
• The timeframe and budget for completing the evaluation
• The purpose of the project evaluation
• The timeframe for conducting the evaluation (e.g. during or after the project)
• The nature of the project (process-orientated or outcome-orientated)
• How and by whom the results will be used

Evaluation type When to use What is the focus? Why is it useful? Formative process evaluation Throughout programme delivery Process evaluation can be formative, i.e. conducted on new programmes or services to inform delivery, or summative, i.e. conducted at the end of a programme or service Developing a detailed understanding of programme operations How successfully intended beneficiaries are reached How closely delivery is implemented as planned Supports an understanding of how and in what contexts the programme is delivered best Identifies ways to improve service design and programme delivery Summative impact evaluation At defined intervals during programme delivery (can have formative and summative elements) At the end of a programme Generally uses a mixed methods approach Works well with shared or collective impact models and methodologies Focuses on assessing whether the intended changes occurred for service users or recipients Researches the breadth and depth of change for beneficiaries Attributes observed changes to programme activities Can include various analyses, e.g.
an economic analysis that measures a programme’s economic impact Provides evidence to demonstrate programme efficacy Supports learning around how to maximise beneficiary outcomes Analysis can also provide information on the social returns on investment for the funder or investor Outcome, developmental or impact evaluation In the early stages of developing a new social intervention or programme or development model or to scale or replicate programmes To support programme adaptation in fastchanging and complex contexts Works well with mixed methods, shared measurement and collective impact evaluations Iterative process that assesses programme delivery as well as indicators of impact on beneficiaries Rapid and real-time feedback that is linked to how the programme is delivered Provides methodological flexibility to support adaptation and learning Supports social innovation in complex and uncertain contexts 42 THE FUNDAMENTALS OF MEASUREMENT EVALUATION TYPE BY OUTCOME The most difficult part of evaluation is knowing where to begin. There is so much information to gather, but the key is determining what is most useful to know in order to make better decisions and improve performance. This table is a guide to assist with the decision-making process around types of evaluation. Next Generation Consultants - All rights reserved 43 THE FUNDAMENTALS OF MEASUREMENT Outcome, impact or developmental evaluation Formative process evaluation Summative evaluation Stage of impact development The initiative is exploring and in development The Initiative is evolving and being refined The Initiative is stable and well-established What’s happening? The initiative’s core elements are developed, action plans make provision for exploring different strategies and activities There is a degree of uncertainty about what will work, and how New questions, challenges and opportunities emerge The initiative’s core elements are in place and are implementing agreed-upon strategies and activities Outcomes are becoming more predictable The initiative’s context is increasingly well-known and understood The initiative’s activities are well-established Implementers have significant experience and increasing certainty about what works The Initiative is ready for a determination of impact, merit, value and significance Strategic question What needs to happen? How well is it working? What difference did it make? Sample evaluation questions How are relationships among partners developing? What seems to be working well and where is early progress? How should the initiative adapt in response to changing circumstances? How can the initiative enhance what is working well and improve what is not? What effects or changes are beginning to show up in targeted systems, processes or stakeholders? What factors are limiting progress and how can they be managed or addressed? What difference(s) did the initiative make? What about the process has been most effective, for whom and why? What ripple effects did the initiative have on other parts of the community or system? Key evaluation questions are related to the overall effectiveness and appropriateness of the programme. They can help an organisation, investor and/or community to understand how well the programme met its aims and how relevant it was to the local context. Examples of key evaluation questions include: • How far did we get toward our intended outcomes? (How effective were we?) • How fair and appropriate was the implementation of the programme, e.g. 
considering gender, culture or disability? • How relevant were the intended programme outcomes or activities for the target community? How did the outcomes fit with the local context? • What was the impact of the programme on the target community? • How cost-effective (efficient) were the programme activities? Could other activities have produced more results at the same cost? • How likely are the outcomes to be sustained after the programme ends? 44 Characteristics of a successful evaluation: • Clear programme objectives, targets and timeframe. • Participation of project “beneficiaries” in project planning, monitoring and evaluation. • Shared understanding and ownership of project objectives and how these are to be achieved by stakeholders and partners. • Manageable and realistic data collection and analysis – the more complicated the tools and methods employed, the more likely they are to fail. • Harmonised data collection tools and instruments with other systems in place. • Adequate financial and human resources to carry out the required levels of monitoring and evaluation; where technical capacity is not adequate, training and technical assistance need to be part of the programme design. • Relevance and transparency – monitoring and evaluation of programmes must be conducted in a transparent way and data should be locally driven and locally owned. • Appropriate feedback loops to ensure that results inform future planning processes and projects. • Monitoring and evaluation should be culturally appropriate and pass ethical standards established in local and national contexts. THE FUNDAMENTALS OF MEASUREMENT IMPACT ASSESSMENT OR EVALUATION WHAT IT IS The word “impact” is everywhere these days, but not everyone uses or understands it uniformly. Why does this matter? A clear definition is necessary to develop an effective and rewarding grantmaking strategy, as impact definitions drive decisions and ultimately move money. Differing assumptions about the definition of impact can also create communication difficulties between social investors, intermediaries and beneficiaries. Social investors have an obligation to seek clarity and consensus about definitions of impact, as their definitions often carry disproportionate weight – programmes can have more of an incentive to satisfy funders than beneficiaries. For social investors who are focused on historically disempowered groups such as women and girls, it is critical to include those beneficiary voices in the impact definition process, otherwise development efforts run the risk of recreating the same imbalanced power dynamics they want to counteract. Impacts and impact evaluation or assessments are sometimes defined in different ways. Nevertheless, an essential aspect of impact evaluation concerns attribution – linking documentable impacts in a cause-and-effect manner to an intervention. But it is insufficient to simply know that impacts have come about because of an intervention. To be able to apply the findings from an impact evaluation in other settings and/or to other groups of beneficiaries, one needs to know why and how the results came about, as well as the characteristics of those who benefited (or didn’t). This is where meaningful M&E can be helpful – if one identifies in advance, as specifically as possible, the types of information that will be needed and how various forms of M&E can provide input into a successful impact evaluation. 
In addition, the OECD Development Assistance Committee (DAC), one of the most influential bodies concerning development evaluation, has identified five basic evaluation criteria (or questions) for impact assessments: Relevance, effectiveness, efficiency, impact and sustainability. Note that evaluation of impact represents just one of these criteria. It is rarely possible to conduct an impact evaluation focused specifically on attribution without also having undertaken other forms of evaluation to better understand what has taken place. There is little point in documenting that an intervention is making an impact if it is no longer relevant or if there may be more effective or less costly ways of addressing the basic need, or if the intervention and/or its results are not likely to continue or be sustainable.

Looking across the various applications of the word "impact", one sees the words "effect", "change", "differences" and "results". These words reflect the fact that for most people, impact implies a change brought about by some sort of action. Moreover, the effect of change is generally presumed to be positive. However, several observations complicate these presumptions:
• Actions that prevent a particular change, even if they do not change the overall status quo, can still have an impact. Maintaining the status quo is an impact, if the alternative scenario is worse, for instance blowing up a meteor which otherwise would destroy a city.
• Change can occur and be observed independent of a particular action. Measuring outcomes and impacts can happen even without measuring a particular intervention's contribution or lack thereof. The moon can be observed rising, whether or not wolves howl at it.
• Impact sought is subjective, and defined by a person or group for a person or group. Impact definitions are not abstract, objective truths. They are the product of decisions people and organisations make, and often aim to change behaviour or situations for those on the receiving end of an intervention. This is not always problematic, but it must be recognised in order to account for potential bias or disempowerment.

It is important to clarify some of the vocabulary around impact. Many discussions of impact explicitly or implicitly refer to what evaluation professionals call the impact, value or results chain, representing actions and resources along with their expected effects. Those effects are described as outcomes, which lead to impacts. To help frame the discussion going forward, we make the following observations:
• Actions can fail to produce change due to a host of factors. The absence of observed change does not necessarily indicate an ineffective action (see the third observation below).
• Actions can produce unanticipated changes, including negative ones. Impact is not always positive, and negative impact is not always evidence of an ineffective intervention.
• Impact does not mean the same thing for every person or organisation in the sector, whether from a grantmaking perspective and/or social, community or sector development perspective. "What we talk about when we talk about impact" depends on who is talking and who is listening! This confusion can be problematic: For example, in impact investing, uniform guidelines might release more capital, but the noise around impact definition makes it difficult to arrive at such a consensus. For non-profits, juggling conflicting investor definitions requires time and resources that may be limited.
The confusion around impact has given rise to additional common misconceptions, and these misconceptions contribute to much of the gender bias in impact thinking. • There is a perception that impact is always change, and that the change is always positive. Neither of these ideas hold true in every circumstance, particularly for interventions relating to women and girls, where “holding the line” (perhaps preventing the passage of a restrictive law) may represent an enormous accomplishment. • Defining impact as positive change will undervalue the contributions of many effective organisations that work on hard problems or under difficult conditions (i.e. the various stakeholders in an intervention). • Another common misconception is that there are some impacts that simply can’t be measured. 46 THE FUNDAMENTALS OF MEASUREMENT • This stems in part from yet another misconception, which is that impact must be attributed to a particular actor or action. In reality, some impacts – such as a change in attitudes towards women – are very difficult or perhaps impossible to attribute to specific causes, but the impact itself can still be measured, even without a clear causal attribution. This is especially relevant for the transformative social change that many in the women-and-girls space seek; these large- scale, systemic shifts can rarely be attributed to a single group or intervention, but are visible and measurable nonetheless. Defining impact as attributable change can exclude work done as a component of collective efforts, perpetuating the idea that those efforts are outside the realm of the impact-focused investor. WHY IT IS IMPORTANT With these considerations in mind, there is no single right or common definition of impact for every person in every situation. Good definitions are inclusive, but above all they are useful in clarifying an action path. In that spirit, we offer three questions for the impact-focused investor to ask throughout their social or community engagement, investment and development processes and strategies: • What difference do we want to make? By asking this question, investors can take the first step towards defining their own desired impact clearly, openly and deliberately. • Is that difference meaningful to the population we hope to serve? Is our definition of impact aligned with that of others, particularly those we hope to help? Personal engagement with beneficiaries is valuable for many reasons, including the opportunity to hear on-the-ground perspectives on impact. For many investors, the bulk of their investment or grantmaking spend flows through intermediary organisations. Broadly speaking, these organisations can incorporate beneficiary perspectives via (1) a specific focus on women and girls, (2) representation from beneficiary groups in their leadership structures and (3) impact assessments that evaluate impact on women and girls specifically, ideally using an approach that allows for participation from the populations served. This list is by no means exhaustive, but may provide a starting point for the investor looking to bring a bottom-up approach to impact definitions. • How will we know if we are moving closer to making that difference? With this question, an investor can make deliberate decisions to guide the measurement approach in a way that reflects the desired impact: Is attribution important? Are effects on women and girls addressed specifically, with enough flexibility to capture positive, negative or neutral impacts? What is the expected timeframe? 
These three questions offer concrete ways to incorporate impact thinking, specifically as they relate to gender-based investing into social investment or grantmaking decision-making, giving investors more confidence to invest in interventions that benefit women and girls in a range of ways. It is our hope that by outlining different ways of thinking about impact, all actors in the space – researchers, donors, investors, non-profits and beneficiaries – may have clearer and more rewarding conversations that may lead to more money doing more good. HOW IT WORKS If a grantmaking organisation, social investor or funder aims to achieve impact, impact measurement should be an integrated, interdependent part of strategy and day-to-day operations. This is a three-step process: • Clarifying purpose: Identifying the impact goal(s). • Determining and articulating process: Understanding how the impact can be achieved. • Measuring performance: Knowing if, under what circumstances and to what extent change or impact has occurred. Next Generation Consultants - All rights reserved 47 THE FUNDAMENTALS OF MEASUREMENT THREE P-TERMS TO ACHIEVING IMPACT This diagram describes the three Ps to achieving impact: Integrating social outcomes measurement into organisational strategy, narrative and day-to-day operations. Purpose: What are we trying to achieve? Process: How are we going to achieve it? Performance: To what extent have we achieved our purpose and made a difference? The three Ps (purpose, process, performance) are interdependent. Without purpose, it will not be clear what should be measured. Without understanding how the purpose is going to be achieved, it will not be possible to understand whether and why change might have occurred. And without measuring performance, it is not possible to understand whether the purpose has been achieved or if processes need to be amended, replicated or discarded. Purpose is the reason why something exists, is done or created. Purpose matters to people, organisations and communities because it can add clarity, direction and motivation. Most successful social investors are clear about their purpose and work towards it intentionally. Establishing what your organisation is trying to achieve is the first of the three integrated stages outlined above. You will need to ask questions like: Without answers to these questions, it is difficult to ensure that activity is purposeful. It is also difficult to decide what to measure, or determine whether and to what extent objectives are being achieved. It is important to consider purpose at different levels: Society, organisation, programme/initiative and for different stakeholders. Purpose statements can often be found in mission statements, organisational objectives or strategies. • What is the purpose of the policy, programme, initiative or intervention? • What do you want to achieve? • Why does it matter? • Why are you doing what you’re doing? WITHOUT UNDERSTANDING HOW THE PURPOSE IS GOING TO BE ACHIEVED, IT WILL NOT BE POSSIBLE TO UNDERSTAND WHETHER AND WHY CHANGE MIGHT HAVE OCCURRED. “ “ 48 THE FUNDAMENTALS OF MEASUREMENT This table describes key evaluation questions for measuring impact: Overall input • Did it work? Did the intervention produce the intended impacts in the short, medium and long term? • For whom (in what ways and in what circumstances) did the intervention work? • Which unintended impacts (positive and negative) did the intervention produce? • What other impacts can be ascribed to the intervention, e.g. 
the range, width, depth and dimensions of impacts?

Nature and distribution of impacts
• Are impacts likely to be sustainable?
• Did these impacts reach all intended beneficiaries?
• How long is the impact value chain, e.g. the length, breadth or stakeholders impacted?
• Can the extent (depth, width, reach) of the impact be quantified or qualified?

Influence of other factors on impacts
• How did the intervention work in conjunction with other interventions, programmes or services to achieve outcomes?
• What helped or hindered the intervention regarding achieving these impacts?

How it works
• How did the intervention contribute to intended impacts?
• What were the particular features of the intervention that made a difference?
• Which variations were there in implementation?
• To what extent are differences in impact explained by variations in implementation?

Match of intended impacts to needs
• To what extent did the impacts match the needs of the intended beneficiaries?
• To what extent did the impacts meet the intended strategic objectives?

Strategic and clear structures provide a solid foundation for effective and sustainable impact assessment. While the details of the evaluation systems and processes will vary by programme and desired impact, there are guidelines and considerations based on best practices that have broad applications for any impact evaluation.

CONDUCTING IMPACT ASSESSMENTS
Steps to follow and elements to consider before developing a concrete impact assessment system:
1. Identify the problem or issue to be addressed by the programme(s) to determine the intended ultimate impact(s).
2. Research the potential means to reach the impact based on determinants and contributing factors, best practices and successful models, and resources needed for all programme options (including evaluation).
3. Calculate the expected return of the different programme investment options to help determine which option(s) may be best. Expected return = (benefit x likelihood of success) / cost.
4. Identify stakeholders at all levels – from the communities affected to practitioners, influential policymakers and implementing intermediaries – to seek the most responsive input and insight possible.
5. Design programmes and evaluations simultaneously so that programmes can be implemented in a measurable way.
6. Determine staff and financial capacity for conducting regular, long-term impact assessments.

Once a conceptual framework for impact assessment is established, create the necessary conditions and establish a clear work plan that includes infrastructure, processes, timelines and costs to support the measurement system's sustainability:
• Determine underlying systems and behaviour changes necessary to reach the ultimate impact(s), such as root causes, contributing factors and other issues. Some of these changes may be addressed through partnerships or will need to be included in the design and implementation of the programme and evaluation plans.
• Engage diverse stakeholders as partners throughout the process of planning, development, implementation and evaluation of the impact assessment system. Structure clear engagement points and communication channels appropriate to each type of partnership.
• Research which data and systems already exist that can be incorporated into the impact assessment system and establish collection procedures, tools, roles and responsibilities for implementation.
• Identify who will be responsible for evaluation (e.g. a separate team or a collaboration of local, national and global partners), how data will be collected (using current and/or new systems), and how and to whom results will be communicated. • Integrate evaluation with programmes – resource teams, programme management processes and evaluation systems – so that best learnings can emerge and be acted upon. Create an evaluation and programme practice feedback loop that supports responsive innovation and assessment. CONDITIONS TO SUPPORT AN IMPACT ASSESSMENT SYSTEM 50 THE FUNDAMENTALS OF MEASUREMENT • Educate and seek buy-in from internal and external stakeholders on the evaluation or impact process so that knowledge and learning are valued and integrated into the grantmaking processes, programme development processes, evaluation systems and cycles. • Develop an evaluation and analysis turnaround timeline to ensure that reporting and further strategic planning can inform programme implementation in a timely manner. • Establish an evaluation management oversight and governance function to ensure the production of quality evaluation and assessment reports that contain meaningful analyses of impact that speak to the strategic objectives of the organisation and its subsequent development portfolio, as well as supporting investment and development programmes. CONDITIONS TO SUPPORT AN IMPACT ASSESSMENT SYSTEM Impact assessment includes assessment and due diligence of strategic investment decisions (e.g. strategic investment and development strategies), assessment of management, operational and programmatic processes and systems, performance tracking and learning, and regular analyses of results and outcomes of M&E processes. A comprehensive impact assessment system incorporates several ways to measure impact, outcomes and impacts specific to organisations, social development contexts and issues and relevant communities (stakeholders) affected. As a prerequisite, impact assessment and evaluation require: Indicators: Measurement units and standards that can be measured or assessed directly. Indicators are proxies for impact as they measure activities and products or services, but not the actual changes in larger environments. • An impact assessment system should track short-term, intermediate and long-term indicators to clarify immediate plans and future strategies, and determine progress over time. • Indicators can also be assigned a level of control or influence (high to low) that indicates how much effect the organisation can have on reaching it. Outcomes: Ultimate changes an organisation is trying to make, or in a specific social or development context, as well as the intended and unintended side effects of the programmes implemented to affect such outcomes. Impact: The portion of the total outcome resulting directly from the organisation’s activities, programmes and investments, above and beyond what would have happened anyway. Next Generation Consultants - All rights reserved 51 THE FUNDAMENTALS OF MEASUREMENT The relationship between indicators, outcomes and impact is summarised in the following impact value chain depiction. 
THE IMPACT VALUE CHAIN INPUTS: Resources invested in the programme. ACTIVITIES: Activities conducted by the programme. OUTPUTS: Direct results (products and services delivered) that can be measured and attributed to the programme. OUTCOMES: Changes for beneficiaries and in the wider social context that can be measured and attributed to the programme. IMPACT: Goals and objectives achieved by the programme for its beneficiaries, and the impact and return of the programme for the funder, after subtracting what would have happened anyway. (A minimal worked sketch of this relationship appears at the end of this section.) CHALLENGES IN DEVELOPING IMPACT ASSESSMENT SYSTEMS There are many challenges to creating an impact assessment system that accurately evaluates progress toward desired impacts. These challenges may not be a barrier to development so much as they are factors to keep in mind when assessing success and planning for future investment and development strategies. • For innovative and experimental programmes, developing a set of indicators, outcomes and impacts to track may be limited by a lack of previous supporting research and data from similar programmes. Evaluators should plan to be flexible and be willing to refine the impact assessment system in response to results seen in the communities impacted, parallel to the rapid prototyping processes in programme development. • The cost per impact can be high in the early stages of the programme or initiative due to start-up costs, which can skew perception of effectiveness. The cost per impact should be tracked over time and there should be explicit distinctions between once-off and ongoing costs from the beginning. • Information and data analysis can be biased by inconsistent data collection and a desire to demonstrate high impact, so evaluators should clearly define the different systems aspects and ensure that data proportionately represents the demographics and conditions of the communities that are affected. Additionally, an organisation can regularly make the outcomes of its impact assessments public and open to comments and feedback to build greater accountability and verification of the results and impacts. • Programme implementation (monitoring, management, evaluation) can be compromised by siloing these functions. Even when an organisation establishes a separate evaluation team, the overall infrastructure should support coordination and integrated cross-functioning (e.g. evaluators go on site visits with programme officers and all staff or stakeholders are included in analysing, synthesising and interpreting data). The impact assessment system should track internal as well as external impact, so that the organisation and desired impact are aligned for greater effectiveness. TRENDS IN MEASURING IMPACT Three emerging developments have been identified that will influence how the future of impact measurement takes shape. 1. Market convergence: The blurring of boundaries between impact investing and mainstream philanthropic or social investment and grantmaking processes, systems, strategies and subsequent programmes and investment portfolios. 2. Financial quantification: The growing desire to quantify the financial value of the social and/or environmental impact of development, community or social investment programmes and portfolios. 3. External impacts: The need to factor in the external impacts or effects of an activity, such as the impact of an intervention on local, national, regional or rural economies and its subsequent impact on society (e.g. increased economic activity or jobs for unemployed people, women or the youth) into impact measurement and social development practice.
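To make the impact value chain concrete, the short sketch below illustrates the arithmetic implied by the definitions above: impact is the portion of a measured outcome that remains once "what would have happened anyway" (deadweight) is subtracted. This is an illustrative sketch only, not part of this guide's methodology; the figures, the 25% deadweight estimate and the function name are hypothetical.

```python
# Illustrative sketch only: adjusting a measured outcome for "what would have
# happened anyway" (deadweight) to estimate impact, following the logic of
# the impact value chain. All figures and names are hypothetical.

def estimate_impact(outcome_value: float, deadweight_rate: float) -> float:
    """Impact = outcome minus the portion that would have occurred anyway."""
    if not 0.0 <= deadweight_rate <= 1.0:
        raise ValueError("deadweight_rate must be between 0 and 1")
    return outcome_value * (1.0 - deadweight_rate)

# Example: 400 learners improved their pass rates (measured outcome); baseline
# data suggests roughly 25% would have improved without the programme.
outcome = 400
impact = estimate_impact(outcome, deadweight_rate=0.25)
print(f"Measured outcome: {outcome} learners")
print(f"Estimated impact attributable to the programme: {impact:.0f} learners")
```

In practice the deadweight estimate would be drawn from baseline or comparison-group data rather than assumed, and similar adjustments can be made for attribution and drop-off, which are discussed under data quality in the next section.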
SECTION FOUR INFORMATION AND DATA MANAGEMENT 04 Understanding how information is tied together and how each piece of the data puzzle interrelates to inform the big picture enables better decision-making, higher process efficiencies and potentially lower overall programme costs. As organisations try to understand the true impact and return of their social or community interventions, information becomes the key component in enabling executives and programme managers to make informed decisions based on a 360-degree view of the organisation, its programmes and M&E processes. Without an adequate understanding of the importance of an organisation’s data and structures, it is difficult to develop analytical tools that will enable effective decision-making and provide an overall view of what is happening, both in the organisation and with its programmes. Data management is a key element of measurement processes. In the information management cycle, consideration should be given to selecting and collecting data sources, data quality and using a set of indicators that is standardised and value-adding, while optimising the cost as well as sharing data, information and knowledge through reporting and communication practices. INFORMATION MANAGEMENT WHAT IT IS Information management refers to the application of management techniques to collect information, communicate it within and outside an organisation, and process it to enable managers to make quicker and better decisions. Data is the basic building block of knowledge. The cost of collecting data is a major, but not the primary, factor in determining evaluation methods. The key evaluation design parameter is what information is critical. WHY IT IS IMPORTANT Information and data management processes are important in terms of choosing measurement, evaluation or assessment methods. Proper information management assists with better decision-making throughout the measurement value chain. HOW IT WORKS Management, practitioners and evaluation teams must decide which information is worth pursuing, given the difficulties in data collection and the demands of each funded activity. The teams must therefore divide the information strategically. There are three determining criteria when assessing data: • What information is critical to the programme and activity? • What information is useful and enriching to a programme and/or activity? • What information is interesting, but does not reflect the actual outcomes or impact of an intervention? The way an evaluation study is constructed, the way the data is analysed, the way a credible report is produced and how an activity’s results (or lack thereof) are examined should be the indicators for judging whether an evaluation was successful. The management and evaluation team must choose which method best answers the required information questions to be able to justify their choice. DATA SOURCES WHAT IT IS The term “data” refers to raw, unprocessed information, while “information” or “strategic information” usually refers to processed data or data presented in a specific context. Data sources are the resources used to obtain data for M&E activities. WHY IT IS IMPORTANT Collecting data is only meaningful and worthwhile if it is subsequently used for evidence-based decision-making.
To be useful, information must be based on quality data and reliable, relevant sources, and it must be communicated effectively to policymakers and other interested stakeholders. The key to effective data use involves linking the data to the decisions that need to be made and to those making these decisions. HOW IT WORKS M&E data needs to be manageable and timely, reliable and specific to the activities in question, and the results (information) must be well-understood. The M&E plan should include a data collection plan that summarises information about the data sources needed to monitor and/or evaluate the programme. The plan should include information for each data source, such as: • Timing and frequency of data collection • Person or organisation responsible for data collection • Information needed for evaluation indicators • Additional information that will be obtained or required from the data sources Data can come from several levels: client, programme, service environment, population and geographic levels. It is important to use the highest quality data that is obtainable, but this often requires a trade-off with what is feasible to obtain. The highest quality data is usually obtained through the triangulation of data from several sources. It is important to remember that behavioural and motivational factors on the part of the people collecting and analysing the data can affect data quality. Errors or biases common in data collection: • Sampling bias: Occurs when the sample taken to represent the population of interest is not representative. • Non-sampling error: All other kinds of mismeasurement, such as bias, incomplete records, incorrect or incomplete questionnaires, interviewer errors or non-response rates. • Subjective measurement: Occurs when the evaluator influences the data. DATA COLLECTION METHODS WHAT IT IS Data collection tools will vary depending on factors such as the evaluation type, data availability, local context, or available resources and time. Using a mix of different methods enhances the robustness and credibility of an evaluation. Evaluators are encouraged to carefully consider the relevant options to decide on the most cost-effective technique that will provide the most robust evidence for the evaluation. DATA COLLECTION TOOLS AND TECHNIQUES • Case study: A detailed descriptive narrative of individuals, communities, organisations, events, programmes or time periods. They are particularly useful when evaluating complex situations and exploring qualitative impact. • Checklist: A list of items used to validate or inspect that procedures or steps have been followed, or the presence of examined behaviours. • Closed-ended (structured) interview: An interview technique that uses carefully organised questions that only allow a limited range of answers, such as “yes/no,” or expressed by a number on a scale. Replies can easily be numerically coded for statistical analysis. • (Public) Community interviews/meetings: A form of public meeting open to all community members. Interaction is between the participants and the interviewer, who presides over the meeting and asks questions following a prepared interview guide. • Direct observation: A record of what observers see and hear at a specified site, using a detailed observation form. Observation may be of physical surroundings, activities or processes. Observation is a good technique for collecting data on behaviour patterns and physical conditions.
• Focus group discussion: Focused discussion with a small group (usually 8 to 12 people) of participants to record attitudes, perceptions and beliefs pertinent to the issues being examined. A moderator introduces the topic and uses a prepared interview guide to lead the discussion and elicit discussion, opinions and responses. • Key informant interview: An interview with someone who has specialised or specific information about a topic. These interviews are generally conducted in an open-ended or semi- structured fashion. • Laboratory testing: Precise measurement of specific objective phenomena, e.g. infant weight or water quality tests. • Mini-survey: Data collected from interviews with 25 to 50 individuals, usually selected through non-probability sampling techniques. Structured questionnaires with a limited number of closed- ended questions are used to generate quantitative data that can be collected and analysed quickly. • Most significant change (MSC): A participatory monitoring technique based on stories about important or significant changes, rather than indicators. They give a rich picture of the impact of development work and provide the basis for dialogue over key objectives and the value of development programmes. • Open-ended (semi-structured) interview: A questioning technique that allows the interviewer to probe and follow up topics of interest in depth (rather than just “yes/no” questions). • Participant observation: A technique first used by anthropologists; it requires the researcher to spend considerable time with the group being studied (days) and to interact with them as a participant in their community. This method gathers insights that might otherwise be overlooked, but is time-consuming. • Participatory rapid (or rural) appraisal (PRA): This uses community engagement techniques to understand community views on an issue. It is usually done quickly and intensively over two to three weeks. Methods include interviews, focus groups and community mapping. • Questionnaire: A data collection instrument containing a set of questions organised in a systematic way, as well as a set of instructions to the enumerator or interviewer about how to ask the questions (typically used in a survey). • Rapid appraisal (or assessment): A quick cost- effective technique to gather data systematically for decision-making, using qualitative and quantitative methods, such as site visits, observations and sample surveys. This technique shares many of the characteristics of participatory appraisal (such as triangulation and multidisciplinary teams) and recognises that indigenous knowledge is a critical consideration for decision-making. • Self-administered survey: Written surveys completed by the respondent in a group setting or in a separate location. Respondents must be literate (for example, it can be used to survey opinions). 56 Next Generation Consultants - All rights reserved 57 INFORMATION AND DATA MANAGEMENT • Statistical data review: A review of population censuses, research studies and other sources of statistical data. • Survey: Systematic collection of information from a defined population, usually by means of interviews or questionnaires administered to a sample of units in the population (e.g. beneficiaries or adults). • Visual techniques: Participants develop maps, diagrams, calendars, timelines and other visual displays to examine the study topics. 
Participants can be prompted to construct visual responses to questions the interviewers pose, for example by constructing a map of their local area. This technique is especially effective where verbal methods can be problematic due to low-literate or mixed-language target populations, or in situations where the desired information is not easily expressed in words or numbers. • Written document review: A review of documents (secondary data), such as project records and reports, administrative databases, training material, correspondence, legislation or policy documents. Major sources of data and information for project monitoring and evaluation include: • Secondary data: Useful information can be obtained from other research, such as surveys and studies previously conducted or planned at a time consistent with the project’s M&E needs, in-depth assessments and project reports. Secondary data sources include government planning departments, university or research centres, international agencies other projects or programmes working in the area and financial institutions. • Sample surveys: Surveys based on random samples taken from beneficiaries or target audiences are usually the best sources of data on project outcomes and effects. Although surveys are laborious and costly, they provide more objective data than qualitative methods. Many donors expect baseline and end-line surveys if the project is large and alternative data is unavailable. • Project output data: Most projects collect data on their various activities, such as number of people served and number of items distributed. • Qualitative studies: Qualitative methods that are widely used in project design and assessment • Checklists: A systematic review of specific project components can be useful in setting benchmark standards and establishing periodic measures of improvement. • External assessments: Project implementers as well as investors often appoint outside experts to review or evaluate project outputs and outcomes.  Such assessments may be biased by brief exposure to the project and over-reliance on key informants. Nevertheless, this process is less costly and faster than conducting a representative sample survey, and it can provide additional insight, technical expertise and a degree of objectivity that is more credible to stakeholders. • Participatory assessments: The use of beneficiaries in project review or evaluation can be empowering, building local ownership, capacity and project sustainability. However, such assessments can be biased by local politics or dominated by the more powerful voices in the community. Training and managing local beneficiaries can take time, money and expertise, and it necessitates buy-in from all stakeholders. Nevertheless, participatory assessments may be worthwhile as people are likely to accept, internalise and act upon findings and recommendations that they identify themselves. USING A MIX OF DIFFERENT METHODS ENHANCES THE ROBUSTNESS AND CREDIBILITY OF AN EVALUATION. “ “ 58 INFORMATION AND DATA MANAGEMENT ADVANTAGES AND DISADVANTAGES OF DATA COLLECTION METHODS There are advantages and disadvantages to various common data collection methods. Methods Advantages Disadvantages Document review Readily available, often electronically Organisation-specific Well-aimed at target audiences Shows progress or problems over time Shows development of activity (e.g. 
responsiveness to change) over time Illustrates causal linkages Volume can be unwieldy Organisation-specific Does not present context or illustrate individual (or group) impact effectively May overstate Survey questionnaires If well-designed, most rigorously shows relationships, causality and impact Objectively verifiable and replicable Require trained personnel and take much longer than other methods Can be short-circuited, depending on many external variables Literacy may be problematic Rapid appraisal Illustrates visible differences “Quick and dirty” Low-cost Rapid results Requires high level of cultural sensitivity Difficult to attribute direct causality Can undercut participatory nature of activity Focus groups Can be most participatory strategy Minimise extreme views through group interaction Reasonable cost Rapid results Can be objective, valid and verifiable Facilitator bias can affect findings Bullying by an individual in the group can limit the full expression of opinions Language barriers often require interpreters or translators, slowing and filtering impressions and expressions Interview Not much preparation required Strong interpersonal rapport possible Can be objective, valid and verifiable Subject to individual’s availability Depends on evaluator’s interviewing skills to assess individual bias Strongly subjective Direct observation Minimal preparation required Low-cost Rapid results Can be objective, valid and verifiable Can be intimidating to communities Depends heavily on observer skills Present orientation can be biased and problematic Next Generation Consultants - All rights reserved 59 INFORMATION AND DATA MANAGEMENT Data collection issues to consider • Coverage: Will the data cover all the impact elements? • Completeness: Is there a complete set of data for each element of impact? • Accuracy: Have the measurement instruments been tested to ensure data validity and reliability? • Frequency: Is the data collected frequently? • Reporting schedule: Does the available data reflect the periods of activities to be measured? • Accessibility: Is the data needed collectable or retrievable? • Power: Is the sample size big enough to provide a stable estimate or detect change? Practical considerations when planning data collection • Prepare data collection guidelines: This helps to ensure standardisation, consistency and reliability over time and among different people in the data collection process. Double-check that all the data required for indicators is captured through at least one data source. • Pre-test data collection tools: Pretesting helps to detect problematic questions or techniques, verify collection time, identify potential ethical issues and build the competence of data collectors. • Train data collectors: Provide an overview of the data collection system, data collection techniques, tools, ethics and culturally appropriate interpersonal communication skills. Give trainees practical experience in collecting data. • Address ethical concerns: Identify and respond to any concerns expressed by the target population. Ensure that the necessary permission or authorisation has been obtained from local authorities that local customs and attire are respected, and that confidentiality and voluntary participation are maintained. REDUCING DATA COLLECTION COSTS Data collection can be costly. One of the best ways to reduce data collection costs is to reduce the amount of data collected. 
The following questions can help simplify data collection and reduce costs: • Is the information necessary and sufficient? Collect only what is necessary for project management and evaluation. Limit information needs to the stated objectives, indicators and assumptions in the logframe. • Are there reliable secondary data sources? This can reduce the cost of primary data collection. • Is the sample size adequate but not excessive? Determine the sample size that is necessary to estimate or detect change. Consider using stratified and cluster samples. • Can the data collection instruments be simplified? Eliminate extraneous questions from questionnaires and checklists. In addition to saving time and cost, this reduces “survey fatigue” among respondents. ONE OF THE BEST WAYS TO REDUCE DATA COLLECTION COSTS IS TO REDUCE THE AMOUNT OF DATA COLLECTED. “ “ 60 INFORMATION AND DATA MANAGEMENT WHAT IT IS Data quality is a perception or an assessment of data fitness to serve its purpose in a given context. There are two major types of data – quantitative and qualitative. There is a popular misconception that quantitative data is more accurate and scientific than qualitative data. In fact, both are useful, depending on the context. WHY IT IS IMPORTANT Throughout the data collection process, it is essential that data quality be monitored and maintained. Data quality is important to consider when determining the usefulness of various data sources, and the collected data is most useful when it is of the highest quality. Data collection which is planned and systematic produces more accurate and cost-effective results. HOW IT WORKS Assessment of data quality looks for the following: • Objectiveness: Data collection techniques are objective, and the results produced give a reasonably complete picture (relevant data is not omitted and results are in keeping with the realities of outcomes). Underlying assumptions are clearly laid out and supported (these may relate to the treatment of samples or proxies, or any important background information used to build an understanding of impact, or for calculations with results). • Robustness: The data is robust (accurate, consistent, specific, etc.). This may include consideration for double-counting (e.g. a beneficiary showing up multiple times using the same service), and of the margin of error in the data. • Balanced: The data can capture both good (positive) and bad (negative) performance. This is essential to facilitate or ensure a balanced assessment, to identify areas for learning and improvement. • Drop-off: This aspect relates to the fact that over time the importance or significance of impact decreases. Impacts don’t last forever, so the period associated with the impact needs to be estimated. • Displacement: This relates to the fact that with some interventions the positive effect that is seen in a certain group can be offset by the negative effect seen in a different group. For example, a new business in a community may bring about the closure of another business. • Deadweight: Relates to a consideration as to what would have happened anyway, e.g. in the absence of the programmes or activities (reducing the impact of the funding or implementing organisation), as well as the negative consequences of no intervention (decreasing the impact of the organisation or intervention). 
• Attribution: Relates to understanding how much of the change that has been observed is the result of the organisation’s actions or the actions of another organisation, funder or government taking place at the same time. • Unintended consequences: Effects that come about because of the organisation’s or programme’s activities, but are not part of the desired effect or planned outcomes. • Comparability: Data that is derived following consistent standards or practices, making it possible to compare results from different investments. Gathering comparable data can be a complex process. This is particularly true when an investor seeks to compare performance at a later-stage outcome or impact level, as well as across issue areas, sectors, markets and regions. The use of consistent, common standards improves comparability. • Additionality: Data that allows investors to assess the extent to which an investment has generated results that would otherwise not have been realised. DATA QUALITY Next Generation Consultants - All rights reserved 61 INFORMATION AND DATA MANAGEMENT • Universality: Data collection practices that are applied consistently across markets, geographies and sectors. To achieve a truly global development sector, M&E practices must move towards standards and practices that are consistently applied across geographies and sectors. Development becomes more vital when considering broader trends around the convergence of social and community development sectors and assessment methods. ANALYSING DATA INTERPRETING RESULTS Interpreting results is a process of linking the facts or points identified through data analysis to the purposes and values that drove the evaluation. Through this process, the information turns into evidence about the progress, success and achievements of projects and programmes. This process should also result in project learnings, improvements and suggestions for making decisions or planning in the future. To interpret the results of an M&E and impact assessment process, one needs to put the pieces of information together in a way that explains the success, failure, achievements, modifications and movements of the project toward its objectives. Caution should be applied to confirmatory (positive) and contradictory (negative) findings, as well as expected and unexpected findings. USING THE RESULTS The results of a project evaluation can be used to: • Identify ways to improve or shift project activities • Facilitate changes in the project plan • Prepare project reports (e.g. mid-term or final reports) • Inform internal and external stakeholders about the project • Plan for the sustainability of the project • Learn more about the environment in which the project is being or has been carried out • Learn more about the target population • Present the project’s worth and value to stakeholders and the public • Plan other projects and compare projects to plan for their future • Make evidence-based organisational decisions • Demonstrate an organisation’s ability in performing evaluations • Demonstrate an organisation’s concerns to be accountable for implementing its plans, pursuing its goals and measuring its outcomes • Explore various paths to communicate the evaluation results and search for opportunities to present all or part of the results Tips for interpreting evaluation results 1. Review each section of results and ask “So what?” 2. 
Address each project objective and evaluation question by using qualitative and quantitative results, as well as other information obtained during the project. 3. If the results are positive and confirm project achievements, explain how they support the project objectives and their success. 4. If the results are negative and contradict a planned achievement, explain how they fail to meet the project expectations and what should have been done differently. 5. Think about other questions that can be answered with the results. 6. Use these results to draw overall conclusions about the impacts of the project on its internal and external stakeholders. 7. Provide suggestions for • the future of the project • modifications that may be required • how to increase the project’s success or effectiveness • how to decrease the project’s weaknesses or potential risks • how to use the evaluation results 8. Discuss the results with the evaluation group and complete or revise interpretations and suggestions accordingly. 9. Present a summary of results to the other project stakeholders and complete or revise interpretations accordingly. 62 INFORMATION AND DATA MANAGEMENT CONTENTS OF AN EVALUATION REPORT An evaluation report should include the following sections: • Executive summary: Include a short summary of the evaluation process and a complete summary of results, objectives achieved, lessons learned, questions answered and needs fulfilled. • Introduction: Present the background and activities of the project and the purpose of the evaluation. • Evaluation methods and tools: Explain the evaluation plan, approach and tools used to gather information. It should also provide supporting materials, such as a copy of the evaluation plan and the tools that were developed. • Summary of results: Present results of the qualitative and quantitative data analysis. • Interpretation of results: Explain the interpretation of results, including impacts on participants and staff, effectiveness of services, sustainability of project activities, strengths and weaknesses of the project and lessons learned. • Connection to the project objectives: Highlight the value and achievements of the project and the needs or gaps it addressed. • Conclusions: Describe how the project objectives were met overall, how the purpose of the  evaluation was accomplished and how the project evaluation was completed. • Recommendations: Summarise the key points, make suggestions for the project’s future and create an action plan for moving forward. Recommendations can be presented in the following parts: • Refer to the usefulness of the results for the organisation, or for others, in areas such as decision-making, planning and project management. • Refer to the project limitations, the assistance required and resources that can make future project evaluations more credible and efficient. • Describe the changes to be made to the project (or similar projects or evaluations). INDICATORS WHAT IT IS • An indicator is a variable metric that measures one aspect of a programme or project that is directly related to its objectives. • An indicator is a variable whose value changes from the baseline level at the time the programme began to a new value after the programme and its activities have made their impact felt. At that point, the variable or indicator is recalculated. • An indicator is a measurement unit. It measures the value of the change in meaningful units that can be compared to past and future units. 
This is usually expressed as a percentage or a number. • An indicator focuses on a single aspect of a programme or project. This aspect may be an input, an output or an overarching objective, but it should be narrowly defined in a way that captures this one aspect as precisely as possible. WHY IT IS IMPORTANT Indicators provide M&E information that is crucial for decision-making at every level and stage of programme implementation. • Indicators of programme inputs measure the specific resources that go into carrying out a project or programme (e.g. the funds allocated to the health sector annually). • Indicators of outputs measure the immediate results obtained by the programme (e.g. the number of multivitamins distributed or the number of staff trained). Next Generation Consultants - All rights reserved 63 INFORMATION AND DATA MANAGEMENT • Indicators of outcomes measure whether the outcome changed in the desired direction and whether this change signifies programme success (e.g. the contraceptive prevalence rate or the percentage of children 12 to 23 months old who received DTP3 immunisation by 12 months of age). • Indicators of impact measure the depth and range of change for specific stakeholders of an intervention (e.g. an increase of 3% in the pass rates for female students in higher grade maths in Grade 12 over a 36-month period in a specific school). A good indicator should: • Produce the same results when used repeatedly to measure the same condition or event • Measure only the condition or event it is intended to measure • Reflect changes in the state or condition over time • Represent reasonable measurement costs • Be defined in clear and unambiguous terms HOW IT WORKS A reasonable guideline recommends one or two indicators per result, at least one indicator for each activity, but no more than 10 to 15 indicators per area of significant programme focus. Indicators have the following characteristics: Indicators can be quantitative or qualitative Indicators require a metric, unit or standard Indicators require clarification and definitions of terms Indicators must be consistent with international standards Indicators should be independent from one another Indicator values should have certain qualities Indicators should allow for benchmarking and measure change over time 1 2 3 4 5 6 7 INDICATORS PROVIDE M&E INFORMATION THAT IS CRUCIAL FOR DECISION-MAKING AT EVERY LEVEL AND STAGE OF PROGRAMME IMPLEMENTATION. “ “ 64 INFORMATION AND DATA MANAGEMENT Below are more detailed descriptions of these characteristics: 1. Indicators can be quantitative or qualitative. • Quantitative indicators are numeric and are presented as numbers or percentages. • Qualitative indicators are descriptive observations and can be used to supplement the numbers and percentages quantitative indicators provide. They complement quantitative indicators by adding a richness of information about the context in which the programme has been operating. Examples are “availability of a clear, strategic organisational mission statement” and “existence of a multi-year procurement plan for each product offered”. 2. An important part of what constitutes an indicator is the metric, unit or standard – the precise calculation or formula on which the indicator is based. Calculation of the metric establishes the indicator’s objective value at a point in time. Even if the factor itself is subjective or qualitative, such as the attitudes of a target population, the indicator metric calculates its value at a given time objectively. 
• An example of such an indicator could be the percentage of urban facilities scoring 85% to 100% on a quality of care checklist. Because this indicator calls for a percentage, a fraction is required to calculate it. The numerator, or top number of the fraction, would be the number of urban facilities scoring 85% to 100% on a quality of care checklist. The denominator, or bottom number of the fraction, would be the total number of urban facilities checked and scored. For example, if 30 of the 120 urban facilities checked scored 85% to 100%, the indicator value would be 30/120, or 25%. 3. In many cases, indicators need to be accompanied by clarifications or definitions of the terms used. • For instance, let’s look at the indicator “number of antenatal care (ANC) providers trained”. If such an indicator were used by a programme, definitions would need to be included. • “Providers” would need to be defined, perhaps as any clinician providing direct clinical services to clients seeking ANC at a public health facility. For the purposes of this indicator, providers would not include clinicians working in private facilities. • “Trained” would also need to be defined, perhaps as staff who attended every day of a five-day training course and passed the final exam with a score of at least 85%. Another indicator for this programme could be “percentage of facilities with a provider trained in ANC”. Because the indicator is a proportion or fraction, a numerator and a denominator are needed to calculate it. • The numerator would be the number of public facilities with a provider who attended the full five days of the ANC training and who scored at least 85% in the final exam. Note that the numerator must still specify that the facilities are public and that the providers must have attended all five days and passed the exam to be counted. This information does not have to be included in the indicator itself, as long as it is in the definitions that accompany it. • The denominator would be the total number of public facilities that offer ANC services. This assumes that this number is obtainable. If it is not known and it is not possible to gather such information, this percentage cannot be calculated. • It is also necessary to know at which facility each trained provider works. This information could be obtained at the time of the training. If it is not, all facilities would have to be asked if they have any providers who attended the training. 4. Indicators should be consistent with international standards and other reporting requirements. Examples of internationally recognised standardised indicators typically include those developed by the United Nations or included in the Sustainable Development Goals (SDGs), those represented in the national development plan, the Global Reporting Initiative (GRI) reporting standards, the IRIS indicator list, etc. 5. Indicators should be independent, meaning that they are non-directional and can vary in any direction. For instance, an indicator should measure the number of clients receiving counselling rather than an increase in the number of clients receiving counselling. Similarly, the contraceptive prevalence rate should be measured rather than the decrease in contraceptive prevalence. 6. Indicator values should have certain qualities, including being easy to interpret and explain, timely, precise, valid and reliable. They should also be comparable across relevant population groups, geography and other programme factors. 7.
The ability to measure change over time against a starting or reference point gives indicators real value. This is known as benchmarking. In selecting or developing indicators, consider: • Over what time periods do you want to measure? • Are there existing benchmarks (e.g. population data) or do you need to establish the benchmark? • If you need to establish a benchmark, what do you intend to compare or benchmark against (e.g. intervention groups; pre-, during and post-programme measurements; or other standards)? INDICATOR TRAPS Caution must be applied when selecting indicators: • Indicator overload: Indicators do not need to capture everything in a project, but only what is necessary and sufficient for monitoring and evaluation. • Output fixation: Counting myriad activities or outputs is useful for project management, but does not show the project’s impact. For measuring project effects, it is preferable to select a few key output indicators and focus on outcome and impact indicators whenever possible. • Indicator imprecision: Indicators need to be specific so that they can be readily measured. For example, it is better to ask how many children under the age of 5 slept under an insecticide-treated bed net the previous night than to enquire generally whether the household practises protective measures against malaria. • Excessive complexity: Complex information can be time-consuming, expensive and difficult for development practitioners to understand, summarise, analyse and work with. Keep it simple, clear and concise. TIPS FOR CHOOSING INDICATORS Choose indicators that will provide a variety of data types. Indicators can provide different types of data; to get the most accurate picture possible of programme performance, it can be helpful to ensure that you are collecting a variety of data types. Below are two major groupings to consider. It is recommended to choose from each of the groups to select indicators that will best serve an organisation and/or programme. Quantitative versus qualitative: The tension between quantitative and qualitative data is the subject of a timeless debate. Today evaluation experts generally agree that these two types of data support each other, and both are necessary to produce a complete picture of an organisation or project. As the saying goes: “No numbers without stories, no stories without numbers.” Numbers versus percentages: Quantitative data can take the form of whole numbers or ratios. Generally, a mix of the two is necessary to generate meaningful data. For example, an organisation placing high school students in colleges would likely want to look at the number of students who complete its programme rather than the ratio or percentage of participants who complete the programme. Examples of indicators: Quantitative indicators: • Response rates • Number of visits • Number of inquiries • Participants’ level of satisfaction (e.g.
level 1 to 4) • Frequency of visits • Number of resources • Percentages related to the use of products or services • Average age or education • Knowledge test scores or ranks Qualitative indicators: • Types of responses • Types of enquiries • Feedback on effectiveness • Benefits of a programme • Comprehensiveness of materials • Observable changes in attitudes, behaviours, skills, knowledge and habits • Types of problems • Complaints about services • Types of resources • Perceptions of the project, programme and services REPORTING ON IMPACT AND RETURN The most obvious and visible output of an impact measurement system is high-quality, regular impact reporting, resulting in improved transparency and communication. Communication tools can be a meaningful part of the impact outcome process. Impact reporting most often centres on an annual report (e.g. an integrated or sustainability report), though results can inform more frequent newsletters and other pieces of published research or reports. Many organisations also use impact reports to develop standalone brochures or social reports about the achievements of their organisations and the programmes or organisations they fund. WHAT IT IS Impact reporting means communicating the difference an organisation or programme has made to the issue it set out to improve. Impact reporting often takes the form of an impact report, but can also include: • Reports to funders, supporters or investors • Board reports, management information and organisational reviews • Internal communications with staff, volunteers and beneficiaries • Fundraising and communication material, such as websites, brochures and leaflets Good impact reporting is an essential part of impact measurement, as it allows the funder and others to learn from work funded and programmes implemented, and promotes a culture of accountability and transparency. However, many investors struggle with what information to include in their reports (and what to leave out), and how best to present their impact data. WHY IT IS IMPORTANT Reporting impact can help to: • Review your impact against your vision and goals • Create a learning organisation where people focus on results and adapt and improve services • Motivate staff, volunteers and trustees through celebrating achievements • Build trust and credibility with supporters, funders, policymakers and beneficiaries • Share lessons with similar organisations • Inform the practices of the development sector DEVELOPING AN IMPACT REPORT The most important requirements for an impact report are that it should be clear, readily available and appropriately distributed. Clarity is about ensuring that the general reader, as well as relevant professionals and practitioners, can easily understand the impact report. An impact report is a way to communicate the work that was done and the impact achieved, and it should be comprehensible to the widest possible audience. This may involve unpacking any specialist and industry- or sector-specific terminology, definitions and terms, as well as explaining what these results mean where very specific indicators have been used. It is also important to briefly outline any aspects of the social development sector that a general reader might not know. Making the report available means telling people you have published an impact report and where they can get a copy.
The most obvious channel for this is probably a website where the report should be available to download via a clear and simple link, not more than a few clicks from the homepage. Consideration should be given to making printed copies available, e.g. at company or organisational service centres, branches or regional offices, investor events, conferences and sector-specific events. Beyond general availability, there are particular audiences for an impact report and it is important to check that the report is distributed to them. These include: • Shareholders and primary investors (e.g. the trustees and executive management): The impact report allows them to see the positive benefits their money has helped to generate. • Other relevant stakeholders, such as policymakers and government bodies: The impact report can provide important insights into the social issue, problem or context that is being addressed, how it is being tackled and how the response or interventions work. These can inform and shape the government’s position and response. • Other sector organisations: Sharing results with other sector organisations facilitates the comparison of development and investment approaches and techniques; moving toward the establishment of benchmarks and the promotion of common understanding and good or best practice. The impact report can be an important contribution to communication and learning on this front. • Beneficiaries or intermediaries: The results can be a powerful way to see and understand investment and measurement process, and engage on constraints, challenges and results. The results can inspire beneficiaries and intermediaries, as well as celebrate success. An impact report answers these questions: So what? What difference did this programme make? Impact reporting is important to programme managers because it: • Illustrates accountability • Improves visibility of programmes (local or national) • Is a repository of anecdotes and stories for other communication pieces, such as media releases or annual reports • Helps build greater understanding of programmes or interventions for all stakeholders Impact reporting provides a way to: • Illustrate the significance of the investment and development effort • Show accountability • Demonstrate a return on investment • Foster a better public understanding of the whole picture of research, investment and development and extension of social sectors and development issues or contexts • Obtain future funding • Increase awareness of all programmes 68 INFORMATION AND DATA MANAGEMENT WHAT MAKES A GOOD IMPACT REPORT? In lay terms, an impact report is a summary of the social, environmental, geographic or economic outcomes of development efforts. It states accomplishments and benefits to society and communities. A good impact report illustrates change in at least one of the following areas: • Individual behaviour and practices • Economic value or efficiency • Social value and contribution to sustainable development WHAT SHOULD YOU REPORT? • Need: What is the problem that you are trying to address? • Activities: What are you doing to address this? • Outcomes: What are the results of these activities? • Evidence: How do you know you have made a difference? • Lessons learned: How will you change your work for the better? HOW SHOULD YOU REPORT IMPACT? 
There are six principles to keep in mind: Clarity: “The reader can quickly and easily understand the organisation or intervention through a coherent narrative that connects organisational objectives and aims, plans, activities and results.” Clarity can be as overarching as how you structure your report, or about details like avoiding jargon and replacing long lists of statistics and explanations with a simple infographic. Accessibility: “Relevant information can be found by anyone who looks for it, in a range of formats suitable for different stakeholders.” Considering your audience is key – think about what they need and want to know, as well as what you want them to know. Transparency: “Reporting is full, open and honest.” Some of the best impact reports reflect on an organisation’s successes as well as its shortcomings. Accountability: “Reporting connects with stakeholders, funders, intermediaries, partners and beneficiaries to tell them what they need to know and provide reassurance.” Impact reporting is all about being accountable for your work. Your report should reflect this and you should be upfront about your commitment and motivations. Verifiability: “Claims about impact are backed up appropriately.” Others should be allowed to review and confirm the impact. This can range from informal to formal stakeholder feedback to an external audit, providing assurance over the claimed impact. Proportionality: “The level and detail of reporting reflects the size and complexity of the organisation and intermediaries, and the complexity of the changes they’re trying to bring about.” Brevity, where possible, is always welcome. Some topics are more suited to detail than others, and it is worth thinking about how topics can be summarised in places for those looking for a quick overview. Next Generation Consultants - All rights reserved 69 MAKING THE REPORT AVAILABLE MEANS TELLING PEOPLE YOU HAVE PUBLISHED AN IMPACT REPORT AND WHERE THEY CAN GET A COPY.“ “ INFORMATION AND DATA MANAGEMENT WHAT SHOULD BE COMMUNICATED ABOUT IMPACT? There are six principles that define what should be communicated about impact: CLEAR PURPOSE Why do we exist? What issue are we ultimately trying to tackle? What overall impact do we want to have? What change do we seek? What impact do our key stakeholders want us to have? DEFINED AIMS What are our specific short- and long-term aims? How does achieving these aims help us achieve our overall purpose, intent and impact? COHERENT ACTIVITIES What activities do we carry out to achieve our aims? What resources do we use to make these activities happen? What are the outputs of these activities? How do our activities help us achieve our aims and create change? Are our activities part of a coherent plan? DEMONSTRATED RESULTS What outcomes or impact are we achieving against our aims? What impact are we achieving against the overall change we seek? EVIDENCE How do we know what we are achieving? Do we have relevant, proportionate evidence of our outcomes and impact? Are we sharing evidence to back up the claims we make? LESSONS LEARNED What are we learning about our work? How are we communicating (internally and externally) what we learn? How are we improving and changing from what we learn? What has happened that we didn’t expect (positive and negative)? Are we allocating resources to best effect? 70 REPORTING MECHANISMS Different stakeholders will require different information pieces or reports. 
This table provides insight into the types of information and the appropriate format of reports that can be developed to share and communicate the outcome of measurement.
Target group | Stage of project cycle | Appropriate format
Board | Monitoring, evaluation and impact reports | Written reports with an executive summary and a presentation
Management team | Interim reports, based on monitoring and evaluation analysis | Written report to be discussed at management meetings
Staff and programme managers | Interim reports, based on monitoring and evaluation results | Written reports and presentations presented by programme managers and evaluation teams, followed by in-depth discussion of relevant recommendations
Beneficiaries | Outcome reports | Presentations backed up by summarised documents, using tables, charts and visuals; this is particularly important if the organisation or project is contemplating major changes that will impact beneficiaries
Intermediaries and other donors | Outcome reports | Summarised and written reports with case studies and evidence of impact
Other stakeholders, such as media, customers, academia or the wider development sector | Outcome reports | Journal articles, media releases, seminars, conferences, websites – summarised versions of evidence of impact
SECTION FIVE RETURN ON INVESTMENT 05 As more investors (corporations, foundations and others) view grants as forms of investment in communities to create and stimulate positive social change, there will be an increased focus on maximising the benefits from strategic investments. There is a demand for more sophisticated tools that can evaluate how investments have been or can be used to achieve positive social impact, both ex-ante (in selecting which investments could be most promising) and ex-post (in evaluating social return). Return on investment therefore gives investors the means to measure their grantmaking, but also to make the case for why and how they should invest in their own organisations. RETURN FOR INVESTORS RETURN ON INVESTMENT As much as programme impact on intended beneficiaries is important, so is impact for the funder or investor. As primary stakeholders in the grantmaking cycle, funders are impacted by programme outcomes. This impact is generally referred to as return on investment (ROI). As important as it is to measure the impact of an intervention on the intended beneficiaries, it is equally important to understand and assess the impact for the investor. WHAT IT IS Every day our actions and activities as social investors, practitioners and developers create and/or destroy value as they change the world around us. Although the value we create goes far beyond what can be captured in financial terms, this is to a large extent the only type of value that is currently measured and accounted for. Things that can be bought and sold take on a greater significance and many other important things are left out. Decisions made like this may not be as good as they could be, as they are based on incomplete information about full impacts. Return on investment (ROI) is a framework for measuring and accounting for this much broader concept of value. It seeks to reduce inequality and environmental degradation, advance social justice and economic empowerment, alleviate and eradicate poverty, and improve wellbeing by incorporating social, environmental and economic costs and benefits.
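As a rough illustration of how social, environmental and economic costs and benefits can be brought into a single value-for-money view, the sketch below adds up monetised benefits, subtracts negative outcomes and compares the result to the investment, producing the kind of benefit-to-cost ratio described in the paragraphs that follow. It is a simplified, assumed example: all figures are hypothetical, and, as noted below, ROI itself does not have to be monetised in this way.

```python
# Illustrative sketch only: combining monetised social, environmental and
# economic benefits against total investment to express value as a ratio.
# All figures are hypothetical; ROI itself need not be monetised.

benefits = {
    "social": 180_000,        # e.g. valued improvement in learner outcomes
    "environmental": 40_000,  # e.g. valued reduction in water use
    "economic": 80_000,       # e.g. additional local income generated
}
negative_outcomes = 20_000    # e.g. displacement of an existing service
investment = 100_000          # total funds invested in the programme

net_benefit = sum(benefits.values()) - negative_outcomes
ratio = net_benefit / investment
print(f"Net benefit: R{net_benefit:,} for an investment of R{investment:,}")
print(f"Benefit-to-cost ratio: {ratio:.1f}:1")
```

A forecast version of the same calculation would simply use projected rather than observed values, which is one reason forecast analyses are useful at the planning stage.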
The methodology used to determine return on investment (ROI) should not be confused with the SROI measurement standard. The major difference is that ROI is not necessarily monetised. Social return on investment (SROI) measures change in ways that are relevant to the people or organisations that experience or contribute to it. It tells the story of how change is created by measuring social, environmental and economic outcomes and uses monetary values to represent them. This enables a ratio of benefits to costs to be calculated. For example, a ratio of 3:1 indicates that an investment of R1 delivers R3 of social value. ROI is about value, rather than money. Money is simply a common unit and as such is a useful and widely accepted way of conveying value. In the same way that a business plan contains much more information than the financial projections, ROI is much more than just a number. It is a story about change, on which to base decisions, that includes case studies and qualitative, quantitative and financial information. In essence, it quantifies and qualifies a range of impacts as returns. Next Generation Consultants - All rights reserved 73 RETURN ON INVESTMENT HOW IT WORKS An ROI analysis can take different forms. It can encompass the social value generated by an entire organisation, or focus on a single aspect of the organisation’s work or funds invested. There are also several ways to organise the “doing” of an ROI assessment. It can be carried out largely as an in-house exercise or it can be led by an external researcher. There are two types of ROI: • Evaluative, which is conducted retrospectively and based on actual outcomes that have already taken place. • Forecast, which predicts how much social value will be created if the activities meet their intended outcomes. Forecast ROIs are especially useful in the planning stages of an activity. They can help show how investment can maximise impact and are also useful for identifying what should be measured once the project is up and running. A lack of good outcomes data is one of the main challenges when doing an ROI for the first time. To enable an evaluative ROI to be carried out, one will need data on outcomes and a forecast ROI will provide the basis for a framework to capture outcomes. It is often preferable to start using ROI by forecasting what the social value may be, rather than evaluating what it was, as this ensures that the right data collection systems are in place to perform a full analysis in the future. The level of detail required will depend on the purpose of the ROI. A short analysis for internal purposes will be less time-consuming than a full report for an external audience that meets the requirements for verification. THE PRINCIPLES OF ROI ROI was developed from social accounting and cost-benefit analysis and is based on seven principles. These principles underpin how ROI should be applied: Involve stakeholders Understand what changes Value the things that matter Only include what is material Verify the result Be transparent Do not overclaim Like any research methodology, ROI requires judgement throughout the analysis, and there is no substitute for the practitioner’s judgement. The concept of materiality needs to be understood and applied. Materiality is a concept that is borrowed from accounting. In accounting terms, information is material if it has the potential to affect the readers’, decision-makers’ or stakeholders’ decisions. 
A piece of information is material if leaving it out of the ROI assessment would misrepresent the organisation's activities or actions. For transparency and disclosure purposes, judgements about what is material should be documented to show why information has been included or excluded.

THE STAGES IN ROI
1. Establishing scope and identifying key stakeholders: It is important to have clear boundaries about what an ROI analysis will cover, who will be involved in the process, and how.
2. Mapping outcomes: Engaging and involving stakeholders in developing an impact map and theory of change will more clearly indicate the relationship between inputs, outputs, outcomes and impact.
3. Evidencing outcomes and giving them a value: This stage involves finding data to show whether outcomes have happened and then valuing them.
4. Establishing impact: Having collected evidence on outcomes and valued them, those aspects of change that would have happened anyway or are a result of other factors are eliminated from consideration.
5. Calculating the ROI: This stage involves adding up all the benefits, subtracting any negatives and comparing the result to the investment. This is also where the sensitivity of the results can be tested.
6. Reporting, using and embedding: Easily forgotten, this vital last step involves sharing findings with stakeholders and responding to them, embedding good outcomes and impact measurement processes, and verifying the impact.

WHY IT IS IMPORTANT
An ROI analysis can fulfil a range of purposes. It can be used as a tool for strategic planning and improving, for communicating impact and attracting investment, or for making investment decisions. It can help guide the choices that managers face when deciding where they should spend time and money. Most importantly, it provides evidence of impact in the form of return for investors.

ROI can improve services by:
• Facilitating strategic discussions and helping to understand and maximise the social value an activity or programme creates.
• Helping to target appropriate resources and manage unexpected outcomes, positive and negative.
• Demonstrating the importance of working with other organisations and people that have a contribution to make in creating change.
• Identifying common ground between what an organisation wants to achieve and what its stakeholders want to achieve, helping to maximise social value.
• Creating a formal dialogue with stakeholders that enables them to hold the programme or organisation to account and involves them meaningfully in programme design.

ROI can help make organisations more sustainable by:
• Raising their profile.
• Improving the case for further funding.
• Providing context on the relationship between investment and return.
• Aligning and integrating organisational objectives with programme outcomes and impact.

ROI is less useful when:
• A strategic planning process has already been undertaken and is already being implemented.
• Stakeholders are not interested in the results.
• It is undertaken only to prove the value of a service and there is no opportunity for changing the way things are done because of the analysis.

Comparing return between different organisations: Organisations work with different stakeholders and will have made different judgements when analysing their return. It is not appropriate to compare return on investment ratios alone.
In the same way that investors need more than financial return information to make investment decisions, social investors will need to read all the information produced as part of an ROI analysis. An organisation should compare changes in its own return on investment over time and examine the reasons for those changes. Organisations should also endeavour to educate beneficiaries and intermediaries on the importance of putting the ROI analysis in the context of the overall programme analysis.

RETURN ON INVESTMENT: ASPECTS OF RETURN
For funders, ROI needs to be quantified and qualified. This already happens during strategy setting, when strategic objectives are determined – what change do we want to facilitate and why? The real issue is how the funder will benefit from an investment and the subsequent development programmes or portfolios. Once the strategic objective is clear, funders can map their expected returns. The reason why they invest in specific programmes, portfolios, beneficiaries and intermediaries is then identified, measured externally and reported internally. What distinguishes ROI analysis from impact analysis is that the impact is directed at the funder or investor. Below are aspects of return that can be considered as ROI impacts.

Consider these questions in relation to an ROI analysis:
1. What is the purpose of the ROI?
2. Who is it for?
3. What is the background?
4. What resources do you have?
5. Who will undertake the ROI?
6. What activities will you focus on?
7. What period of delivery will your analysis cover?
8. Is the analysis a forecast, a comparison against a forecast or an evaluation?

Goal: To support employee recruitment, retention and productivity
Questions to determine return:
• Does our reputation as a good corporate citizen or funder of social programmes help attract and retain employees?
• Do our community programmes help attract and retain employees? Do potential employees who believe we have a strong reputation in the community choose to work for us? Are they more committed than those who are not aware of our reputation in the community?
• Do employees who participate in our programmes feel more committed to the company? Have morale and on-the-job performance improved as a result of social or community programmes?
• Do employees who participate in our social or community programmes also develop their skills and competencies? Has on-the-job performance improved and has their leadership potential been enhanced?
Indicators to measure return:
• Measure employee awareness of social or community programmes.
• Measure employee participation (e.g. volunteer time, programme support and contributions).
• Measure employee support (e.g. testimonials regarding their experiences and development).
• Track employee attitudes and satisfaction with the company.
• Track retention and absenteeism rates for those most aware compared to those least aware of social or community programmes and contributions.
• Measure recruitment and retention rates of individuals who participated in company-sponsored workforce community development programmes.
• Calculate the development of employee skills and competencies through volunteerism (this can also factor in the costs of alternative training compared to the costs of running the programme).
• Conduct pre- and post-event surveys that track attitudes and behaviour before and after a major social or community initiative or programme.
Goal: To support sales targets
Questions to determine return:
• Does our reputation as a good corporate citizen help increase our sales? Do potential consumers or customers who believe we have a strong reputation in the community buy from us more often? Are they more loyal than those who are not aware of our reputation in the community?
• Do our community programmes help attract sales? Do potential community consumers or customers who are aware of our programmes buy from us more often? Are they more loyal than those who are not aware of our programmes?
• Have our relationships with key stakeholders helped to influence buying decisions?
• Have our community programmes provided more access to markets or customers?
Indicators to measure return:
• Measure customer awareness of community programmes.
• Determine market penetration of giving or development programmes or exposure.
• Measure attitudes of customers toward the company as a corporate citizen.
• Measure awareness of purchasing behaviour, measures of customer satisfaction, etc.
• Calculate feature space in media and compare to advertising costs for that space.
• Track sales from a new store in a low-income neighbourhood and compare to other retail facilities.
• Collect testimonials from internal stakeholders on key relationships and business leads.
• Collect testimonials from customers on the role of the company's status as a good corporate citizen (or on impressions of community programmes).
• Define with internal stakeholders the percentage of sales attributable to social or community programmes.
• Track the return of donating to a non-profit organisation that becomes a new customer.
• Conduct preliminary and post-event surveys tracking attitudes and buying behaviour.

Goal: To support corporate reputation
Questions to determine return:
• How do our programme(s) enhance our corporate reputation? Do they:
- Improve how key stakeholders view the company as a corporate citizen?
- Help to increase the level of trust key stakeholders have in our company?
- Enhance how stakeholders view our brand and the quality of our products and services?
- Enhance how stakeholders see the quality of our management and operations?
- Increase the number of investors or shareholders?
• How are our social or community programme(s) building awareness of our corporate brand?
Indicators to measure return:
• Measure the attitudes towards the company of key stakeholders who are aware of the social or community programmes; compare these to the attitudes and perceptions of stakeholders who are not aware; track attitude changes over time.
• Benchmark standing, perceptions, attitudes and knowledge against competitors in the industry.
• Define the stated level of trust for the company among key stakeholders; identify the aspects of the business they do and do not trust.
• Conduct pre- and post-surveys of awareness of brand reputation (before and after programmes).
• Track media exposure of the company's social or community programmes.
• Track awards and recognition received.

Goal: To support the licence to operate
Questions to determine return:
• What is the level of support for us in the community? How do our communities feel about us? How strong are our relationships with key stakeholders?
• What actions would key stakeholders take on our behalf? Would they:
- Support us at a public hearing?
- Speak favourably about us to a reporter?
- Advise a friend to buy from us?
- Advise a friend to seek employment with our company?
- Say that they trust us and our decisions?
Indicators to measure return:
• Compare approval rates (tenders or licences), with and without community involvement programmes, to similar projects.
• Calculate revenue gained by a business starting a project earlier than anticipated due to community support (no criticism).
• Track the regulatory process (e.g. ease of obtaining approvals, licences and tenders, length of hearings, number of interventions, the lack of negative pushback, and protests and boycotts).
• Track the number of public protests, permit interventions and negative comments at public hearings or in the media compared to the industry (tie these to costs avoided).

Goal: To support the licence to operate (continued)
Questions to determine return:
• What actions did our programmes encourage?
- Did they positively influence a permitting and approval or licence decision?
- Did they help us avoid costly delays?
- Did they help us avoid or mitigate a crisis?
- Did they help us to keep our operations going?
Indicators to measure return:
• Look at the success of legislative initiatives and public support for operations and facilities, citing cases connected to community programmes and relationships.
• Measure support from key stakeholders (e.g. testimonials regarding their experiences with the company, letters of support and public commentary).

Goal: To support compliance and legal requirements
Questions to determine return:
• Do the community programmes support any compliance or legal programmes that the organisation has to abide by or report on?
Indicators to measure return:
• Consider the BEE requirements from a skills development, procurement or SED perspective: measure inputs invested and the number of beneficiaries affected.
• Consider industry charter requirements: measure inputs invested and the number of beneficiaries affected.
• Consider legal, mandatory, industry- or country-specific requirements and standards: do the programmes assist with licence approvals? Do the programmes assist with reporting requirements?

Goal: To support ESG reporting requirements
Questions to determine return:
• What value or capital has been generated through funding community or social programmes and can be reported on?
• What negative operational impacts have been avoided or mitigated?
• How have reporting requirements enhanced, supported or influenced capital providers' or shareholders' opinions?
Indicators to measure return:
• Economic value – employment: number of primary jobs created annually; number of jobs by income group; quality of jobs (secure, sustainable, decent wages); investments in skills to combat unemployment and reduce poverty.
• Equality/transformation: % women employed; % women-owned enterprises; % women educated.
• Value chain: creation, support and development of innovative social enterprises, social products, services or processes; support given to suppliers; eradication of unsafe, unfair and forced labour practices within supply chains.
• Infrastructure: benefits of infrastructure (roads, clinics, power stations, water, schools) to communities.
• Environment: carbon sequestration, recycling, waste management, green energy, green buildings, education, awareness, rehabilitation, etc. in communities.
• Labour: increased employment; increased skills development.
• Social: positive impact on neighbouring communities; enhanced government relations; enhanced stakeholder relationships.
CALCULATING RETURN ON INVESTMENT
There is no standard way to calculate return on investment – this is a new practice in grantmaking, and new methodologies are continually being developed and introduced globally. Most simply put, the ROI process involves:
• Talking with stakeholders to identify what social value means to them
• Understanding how that value is created through a set of activities
• Finding appropriate indicators or ways of knowing that change has taken place
• Putting financial proxies on indicators that do not lend themselves to monetisation
• Comparing the financial value of the social change created to the financial cost of producing these changes

HOW DOES IT WORK?
The process is straightforward – just add up the economic value of all the benefits generated by a particular programme and compare the sum to the total cost of the programme. This involves the following steps (see the worked sketch below):
1. Measure the total cost: Add up the costs of the programme. Include the value of any in-kind donations, such as rent-free office space or volunteers' time.
2. Enumerate and measure the outcomes: List all the demonstrated or planned impacts of the programme on individuals and institutions.
3. Value the outcomes: Convert the outcomes to an amount by using direct or indirect economic calculations.
4. Compare the benefits and costs: The results may be stated as an amount of benefit per currency of cost or as a percentage return similar to a financial investment.

The actual analysis may be quite complex. Producing a comprehensive and credible ROI analysis can present a number of challenges for the analyst. These include:
• Identifying the net cost of a programme: Not only must the complete cost of the programme be included, but any cost that would be incurred in the absence of the programme must be subtracted to generate the true added cost of the programme, compared to alternatives.
• Measuring the impact of the programme accurately: Sometimes data on a wide range of outcomes must be collected and analysed. Valid comparison data must also be available to reflect the net effects of the programme.
• Finding methods to value outcomes: Sometimes a great deal of ingenuity and analysis is required to estimate the monetary value of some outcomes. For example, in analysing a programme for troubled youth, an economist might need several steps to estimate the economic value of reducing truancy. The economist might first consult educational research that shows how lowering truancy increases high school graduation rates. Economic research on the value of a high school education would then be used to estimate the monetary value of reduced truancy.

The following are examples for consideration.

SOCIAL RETURN ON INVESTMENT
Social return on investment (SROI) is a method for measuring values that are not traditionally reflected in financial statements, including social, economic and environmental factors, which can identify how effectively an organisation uses its capital and other resources to create value for the community. While a traditional cost-benefit analysis is used to compare different investments or projects, SROI is used to evaluate the general progress of certain developments, showing the financial as well as the social impact of the corporation.
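To make the four steps above concrete, here is a minimal worked sketch in Python with purely hypothetical figures. It assumes a programme whose outcomes have already been given monetary values; it illustrates the arithmetic only and is not a prescribed method.

# A minimal sketch of the four-step ROI calculation described above.
# All figures and categories are hypothetical illustrations.

# Step 1: Measure the total cost, including in-kind donations.
costs = {
    "direct programme costs": 800_000,            # Rand
    "rent-free office space (in kind)": 120_000,
    "volunteer time (in kind)": 80_000,
}
total_cost = sum(costs.values())

# Steps 2 and 3: Enumerate the outcomes and value them in monetary terms.
valued_outcomes = {
    "increased participant earnings": 1_500_000,
    "reduced social grant dependency": 400_000,
    "improved school attendance (proxy value)": 100_000,
}
total_benefit = sum(valued_outcomes.values())

# Step 4: Compare the benefits and costs.
ratio = total_benefit / total_cost
print(f"Total cost: R{total_cost:,}")
print(f"Total valued benefit: R{total_benefit:,}")
print(f"Return: R{ratio:.2f} of value per R1 invested")

In this illustration the programme returns R2.00 of valued benefit for every R1 invested; a real analysis would also subtract any costs that would have been incurred in the absence of the programme, as noted above.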
SROI is useful to corporations because it can improve programme management through better planning and evaluation, increase investors' understanding of their impacts and allow better communication about the value of the organisation's work (internally and to external stakeholders). Philanthropists, venture capitalists, social investors, donors, foundations and non-profits may use SROI to monetise the social impact in financial terms. A general formula used to calculate SROI:

SROI = (social impact value – initial investment amount) / initial investment amount × 100%

Assigning a monetary value to the social impact can present challenges, and various methodologies have been developed to help quantify impact. While the approach varies depending on the programme that is evaluated, four main elements are needed to measure SROI:
• Inputs – resources invested in the activity (e.g. the cost of running a job readiness programme)
• Outputs – the direct and tangible products from the activity (e.g. the number of people trained)
• Outcomes – the changes to people resulting from the activity (e.g. new jobs, better income, improved quality of life for the individuals; increased taxes and reduced support by the government)
• Impact – the outcome less an estimate of what would have happened anyway (e.g. if 20 people got new jobs but 5 of them would have anyway, the impact is based on the 15 people who got jobs as a result of the job readiness programme)

While SROI is one way to measure a programme's impact, it does not prove causality and cannot replace measuring actual outcomes. The strength of an SROI model increases as the organisation gains confidence in its outcomes through best-practice research or self-evaluation studies. This model is not comprehensive – there are many unquantifiable social benefits, such as increased self-confidence or family stability.

PRESENT VERSUS FUTURE VALUE
In most cases, SROI is used to assess value that has already been generated. This is called an evaluative SROI analysis. A recent development is the use of SROI to forecast how much social value a project or organisation could generate if it meets its intended objectives. The information from a forecasted SROI can be used to feed into strategic planning, helping to show how an investment can generate the most social value. In order to validate the findings of a forecasted SROI, an evaluative analysis needs to be carried out once the project or organisation is up and running. One of the advantages of completing a forecasted SROI is that you will have identified the outcomes data you need to collect and can put in place mechanisms for data collection from the outset. The process is not too different if you do a forecasted SROI analysis. The key difference is in the data collection phase, where instead of collecting actual outcomes data you forecast (usually with help from others) what you would expect the outcomes to be.

Outcomes can have longevity even if the organisations supporting them are no longer involved. For this reason, we often project value into the future. In doing so, three factors need to be considered:
1. Discount rate
2. Benefit period
3. Drop-off rate

To calculate the SROI ratio, we need to compare the present value of benefits to the present value of the investment made to generate those benefits. Before we can do this, we need to understand a concept called “time value of money”. This means that, in general, R1 now is worth more than it will be in a year's time. This is something most of us are familiar with: we expect our employers to adjust our salaries each year for inflation to compensate for how the value of money changes over time. The discount rate you use should reflect the uncertainty (or risk) of achieving the estimated benefits, as well as the uncertainty of your assumptions.
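The formula and the four SROI elements above, together with the discount rate, benefit period and drop-off rate introduced here and explained further below, can be combined in a small worked sketch. All figures, rates and the value proxy are hypothetical illustrations, not recommended values.

# A simplified, hypothetical SROI sketch combining the elements above:
# deadweight-adjusted impact, a financial proxy per outcome, discounting over
# a benefit period, and a drop-off rate. Figures and rates are illustrative only.

investment = 10_000               # Rand invested now
value_per_job_per_year = 2_000    # illustrative financial proxy for one person employed for a year

jobs_observed = 20                # outcomes: people who found work
jobs_deadweight = 5               # would have found work anyway
impact_jobs = jobs_observed - jobs_deadweight  # 15 jobs attributable to the programme

discount_rate = 0.05              # reflects uncertainty of achieving the benefits
benefit_period_years = 3          # how long the benefits are assumed to last
drop_off_rate = 0.20              # share of the benefit assumed to fall away each year

present_value = 0.0
annual_benefit = impact_jobs * value_per_job_per_year
for year in range(1, benefit_period_years + 1):
    # Reduce the benefit for drop-off, then discount it back to today's value.
    benefit = annual_benefit * (1 - drop_off_rate) ** (year - 1)
    present_value += benefit / (1 + discount_rate) ** year

sroi_ratio = present_value / investment
sroi_percent = (present_value - investment) / investment * 100  # the formula quoted above
print(f"Present value of benefits: R{present_value:,.0f}")
print(f"SROI ratio: {sroi_ratio:.2f} : 1 ({sroi_percent:.0f}%)")

The deadweight adjustment reduces the 20 observed jobs to an attributable impact of 15, and the discounted, drop-off-adjusted benefits are compared with the investment to give both the ratio and the percentage form of the formula quoted above.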
How does this apply to SROI? If the benefits we aim to achieve in a project take two years to occur and we want to know how an investment of R10 000 given to us now will compare with the benefits achieved over that two-year period, we need to discount the future value of those benefits. In doing this, we are able to see what the value of the benefits created over two years would be worth now, and then compare this amount to the investment. For some benefits, such as environmental benefits, discounting may not be appropriate, as the value of the outcome is not likely to decrease in the future.

We also need to decide on the benefit period. Be as realistic as possible about assuming a period over which your model will account for accrued benefits. The period should be long enough to comprise most of the benefits your activities will generate, but not so long as to overestimate your impacts. The longer the period, the more likely it is that other interventions will contribute to the impact, such as another training course that leads to a promotion.

The final consideration when projecting into the future relates to drop-off. The concept of drop-off recognises that the benefits will not endure for all stakeholders over the entire benefit period. For example, a training programme with ex-offenders may help 30% of participants to get a job by the end of the first year. Some of these participants will remain in their jobs, and so the benefit of the intervention continues over future years. However, some will also fall back out of work. Drop-off adjusts the projected future benefits to take into consideration cases where the benefit does not endure.

SECTION SIX SETTING UP THE MEASUREMENT SYSTEM

A well-functioning M&E system is a critical part of good project or programme management and accountability. Timely and reliable M&E provides information to:
• Support project or programme implementation with accurate, evidence-based reporting that informs management and decision-making to guide and improve project or programme performance.
• Contribute to organisational learning and knowledge-sharing by reflecting on and sharing experiences and lessons so that the full benefit of what is done and how it is done can be gained.
• Uphold accountability and compliance by demonstrating whether the work has been carried out as agreed and in compliance with established standards and other investment requirements.
• Provide opportunities for stakeholder feedback, especially from beneficiaries, to provide input into and perceptions of the work, modelling openness to criticism and willingness to learn from experiences and to adapt to changing needs.

FROM THINKING TO DOING
SPECIAL NOTE
Not all programmes and projects need to conduct all types of M&E activities that may be part of the overarching organisational M&E system. However, all programmes and projects are expected to participate in basic levels of M&E, including assessing needs and monitoring inputs and outputs once implementation begins.
Expectations to conduct additional levels of M&E vary by the nature, size, scope and maturity of the programme or project, as well as by organisational competencies, capabilities and resources. Firstly, programmes need to use their resources wisely, so the extent and costs of M&E activities should be commensurate with their size, reach and cost. In short, M&E should never compromise or overtake programme implementation. Secondly, not all M&E activities are appropriate for all programmes or for all the development stages at which programmes happen to be at any given time. Evaluation logic suggests a staged approach. That is, most programmes that conduct outcome evaluations should have implemented some level of process evaluation prior to this more rigorous assessment. Also, input and output monitoring data are essential for informing process evaluation, and outcome monitoring data is a prerequisite to outcome evaluations.

As the diagram below (also referred to as the M&E pipeline) reflects, there are varying expectations for M&E among different programmes and projects. The framework suggests:
• ALL projects: input and output monitoring
• MOST projects: process evaluation
• SOME projects: outcome monitoring or evaluation
• FEW projects: impact assessment

• All programmes and portfolios (national, subnational and portfolio- or sector-based) and projects should conduct basic programme input and output monitoring for the purpose of good programme management and for selecting a few indicators to report to key stakeholders to whom the programme is accountable.
• Most programmes and projects should periodically conduct some basic process evaluations. This component often includes implementation assessments, quality assessments, basic operations research, case studies and cost analyses.
• Only some programmes (usually the larger national or community or social sector programmes) will be able to conduct outcome monitoring and rigorous outcome evaluations; not only because of the additional time, expertise and resources these methods require, but also because they are only relevant to the more established programmes (outcome monitoring) or programmes for which there is insufficient evidence that they work (outcome evaluation), as they are new or innovative or simply have never been evaluated.
• Only in a few situations would impact evaluation be warranted, in which an attempt is made to attribute long-term effects (impact and return) to a specific programme. These are usually done at national, flagship or portfolio level, as they require large population sizes and considerable resources. Monitoring the unlinked distal impacts (impact monitoring) can feasibly be done through surveillance systems and repeated population-based biological and behavioural surveys.

All programmes and practitioners should be aware of national and subnational data and know how it is relevant to their programmes. Comparing local programme results with national and subnational data provides a basis for determining programme effectiveness. Such data also allows for determining the overall success or collective effectiveness of all programmes on a national level. At this stage, triangulation of multiple data sources is important. Long-term effects should be interpreted in the context of results from process and outcome evaluations and from existing survey data and output monitoring.
THE MEASUREMENT PROCESS
Measurement is an iterative process of forecasting, reviewing and evaluating the impact of programmes or interventions on beneficiaries and other stakeholders. There are multiple methods and tools to measure impact – the consensus is that any impact measuring process contains numerous elements. In this section, we begin to develop an M&E system.

QUESTIONS TO DETERMINE THE SCOPE OF M&E PROCESSES
• Level of assessment: Will you be assessing the impact or changes of an individual project, a range of programmes (in a portfolio), or at an organisational or even sector level?
• Forecast or report: Will you be using a specific measurement tool to forecast the expected impact of projects or programmes that must still be undertaken, or will you use it to evaluate projects or programmes that were conducted, funded or implemented in the past or are still being implemented? Or are you doing it to influence the design, implementation and funding of future programmes? Combining the two is also an option. It is recommended that you determine your point of departure before setting up your measurement requirements and systems.
• Audience: Which stakeholder groups are you measuring – only those affected by the impact? Whom are you measuring for? Who is the audience of your M&E reports, i.e. who will be viewing results and reading the reports? Will the results be shared with the board, employees, aspiring grantees, other potential donors or intermediaries, recipients of programme benefits or even the public?
• Purpose: What is the purpose of the M&E measurement or assessment? M&E processes can serve a number of purposes. Being clear about the outcome or objective of the M&E process will determine the range or scope (depth and reach) of the assessments. Determining the objectives or intent beforehand can help to make choices about how rigorous the assessments will need to be. General purposes of measurement include:
- Selection: To determine which projects to fund (forecast)
- Knowledge: Knowing whether an intervention is actually creating change, how much change is evident, as well as who was affected by the change, will define and clarify where the most value was created and for whom, i.e. it supports the strategic objectives of the organisation or an intervention
- Benchmark and improve: Assessing and comparing outcome, impact and return, and analysing the information resulting from an M&E process, can help to further improve and maximise impact and return across strategies, operations, programmes and portfolios
- Prove: Measurement practices provide evidence (proof) of impact and indicate accountability (of all stakeholders) for all the resources utilised, as well as the ROI of those resources applied and invested
- Communicate: To create and increase awareness and raise support for the vision, mission and strategic objectives of the organisation and its subsequently funded programmes
- Attract funds: Successful programmes and interventions, i.e. high-impact or high-return programmes, will attract new donors, investors or funders
SETTING UP AN M&E SYSTEM
STAGES IN THE DEVELOPMENT OF A PROGRAMME M&E PROCESS

Scoping the M&E framework:
• Identify information requirements
• Determine the extent of stakeholder participation
• Identify possible and preferred approaches and methodologies
• Review resource parameters (financial and human)
• Confirm purpose and parameters (depth, width and reach) of M&E processes
• Agree on roles and responsibilities
• Consider reporting and communication requirements
• Document and distribute M&E processes to all stakeholders

Evaluation questions:
• Plan stakeholder engagement
• Develop programme-specific assessment issues, questions and processes
• Confirm information requirements of various key stakeholders
• Develop assessment and evaluation questions
• Facilitate stakeholder participation and input to M&E processes
• Scope the number and range of questions against available data and information sources
• Present and discuss questions with all stakeholders for confirmation of M&E processes
• Finalise evaluation questions

Develop monitoring and evaluation plans:
• Identify the focus and outcomes of M&E processes
• Develop indicators and targets where appropriate
• Identify data collection sources, processes and tools
• Determine responsibilities and timeframes for M&E processes
• Determine the overall evaluation approach and methodologies
• Identify evaluation and research questions
• Identify the focus (anticipated outcomes) and evaluation insights required (programme or focus area)
• Review monitoring and evaluation plans

Data collection and management:
• Develop data collection plans
• Develop data management systems
• Consider the approach to data collection, analysis and synthesis
• Consider approaches to making evaluative judgements, reaching evaluative conclusions and making recommendations
• Consider the basis for the identification of recommendations, conclusions and lessons learned
• Provide guidance for reporting and dissemination strategies of M&E results and reports

Planning for implementation:
• Confirm programme management arrangements
• Develop a work plan to implement M&E processes
• Develop a plan to review the M&E framework

THE M&E PROCESS
The diagram below shows the M&E process in more detail:
• Identify the purpose and use of the information
• Clarify the programme design
• Clarify the impact
• Identify what information is required
• Plan data management
• Plan data analysis
• Plan reporting and utilisation
• Develop measurement (M&E) plans
• Develop guidelines to manage the measurement cycle

STEP 1: IDENTIFY THE PURPOSE AND USE OF THE INFORMATION
The first step in developing an M&E system is to discuss who will use the information that is collected.
There will likely be many potential users, including:
• Community members participating (or not) in the programme
• Programme implementation staff or operational and management staff (not directly involved)
• Donors, investors, development agencies and intermediaries (any partners)
• Government and government departments (local, provincial and national)
• Other local stakeholders (community investment partners and beneficiary or recipient groups, as well as sector specialists and role players)
• Other external stakeholders – the media, regulators, sector specialists, academia and the public at large

You need to define why these groups want or need the information and how they will use it. Doing this early will ensure that the information you collect is relevant and useful. Common reasons organisations want and need information include:
• Tracking the progress of programme activities (outcomes and impacts)
• Understanding the effects of programme activities on programme participants, beneficiaries and others
• Understanding their programme participants and other stakeholder groups better, and sharing and using better, more informed and relevant information
• Testing the theory of change to see if what is being done is leading to the planned or anticipated changes
• Justifying their programme activities and expenditure to communities
• Assimilating, triangulating and synthesising information to learn, report and communicate better

M&E information is used in different ways. It is good to clearly define how you and other stakeholders will use the information you collect so that you do not gather information that will not be used. Typically, funders and other stakeholders use M&E information to:
• Share with others what the programme has achieved and make more informed future investment and development decisions
• Decide whether to keep, change or stop programmes, investments, activities, etc.
• Improve the work they have been doing to ensure greater impact and return
• Secure ongoing funding, ensure future innovation, bring new partners on board, scale or replicate successful programmes, etc.

M&E information is usually collected and shared in a variety of ways, including:
• Research reports and case studies
• Progress or management reports and opinion pieces
• Annual reports (e.g. foundation, sustainability and integrated reports)
• Mid-term evaluation reports or end-of-programme or exit evaluation reports
• Social media and printed or published subject-specific or academic research

These activities all require meticulous data collection at different points during programme implementation. The specific information requirements should be identified in the initial stages of setting up an M&E system.

EXAMPLES OF KEY STAKEHOLDERS AND INFORMATIONAL NEEDS
1. Communities (beneficiaries) provided with information are able to better understand, participate in and own a project or programme.
2. Donors, investors and partners typically require information to ensure compliance and accountability.
3. Project or programme managers use information for decision-making, strategic planning and accountability.
4. Project and programme staff use information for project and programme implementation and to understand or inform management decisions.
5. The board and trustees may require information for donor accountability, long-term strategic planning, knowledge sharing, organisational learning and advocacy.
6. Implementing partners (intermediaries) use information for coordination and collaboration, as well as for knowledge and resource sharing.
7. Government and local authorities may require information to ensure that legal and regulatory requirements are met, and it can help build political understanding and support.

STEP 2: CLARIFY THE PROGRAMME DESIGN
After you know why you want to collect information, who will use it and how, you need to understand the programme you are collecting information about. It is very important that the programme manager and supporting and implementing staff (practitioners and intermediaries) review the programme design before conducting any M&E, and regularly throughout the life of the programme, to ensure that it reflects the current reality so that you collect relevant information. Having a clear programme design (based on a clearly developed and explained theory of change and logic framework) ensures that:
• All stakeholders, especially implementing resources and organisations, understand the targets, objectives and anticipated outcomes
• The monitoring information that is collected is relevant to the specific programme objectives and anticipated outcomes and impact
• The evaluation processes are relevant, objective, effective and efficient in measuring strategic objectives, anticipated outcomes, impact and return

As explained previously, the two tools used to design or revise programmes are theory of change (ToC) models and programme logic frameworks. For effective monitoring and evaluation, a programme design should:
• Outline a clear programme logic or theory of change that indicates the connections between programme activities, outcomes and short- and long-term changes or impacts
• Focus on people and changes in their behaviour, lives or circumstances (rather than just changes regarding things)
• Identify assumptions or conditions for change to occur
• Be easily understandable by all programme stakeholders and programme participants

A theory of change (ToC) describes how we believe change happens in relation to our programmes. It explains the connections between the current situation, small changes in the behaviour of different stakeholders and, eventually, the long-term changes we hope to see for our beneficiaries, as well as how we think change happens over time. The ToC also identifies key assumptions. Assumptions are beliefs about the underlying causes of the current situation, about the connections between changes and about the context or environment in which the change is happening. This knowledge is important, as these assumptions often determine a programme's success. We must test them in our monitoring and evaluation assessment processes to see if they hold true in reality.

A programme logic model or framework is a related tool you can use to help define the way you expect change to happen, and what activities or interventions may contribute to that change. A programme logic model is a visual representation of the programme theory, often presented in a diagram but sometimes in a table. The logic framework shows a series of expected results that indicate a pathway of change. To be most effective and relevant to all stakeholders, both the ToC and the programme logic should be people-centred – based primarily on changes in people's behaviour, lives or circumstances.
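As an illustration of these two tools, the sketch below shows one way a programme logic and its assumptions could be captured as a simple data structure, so that each element, and each assumption the M&E system must test, is explicit. The programme, field names and entries are hypothetical examples, not a prescribed format.

from dataclasses import dataclass, field
from typing import List

# A minimal, illustrative sketch of a programme logic model as a data structure.

@dataclass
class LogicModel:
    inputs: List[str]        # resources invested
    activities: List[str]    # what the programme does
    outputs: List[str]       # direct, countable products of activities
    outcomes: List[str]      # changes in people's behaviour, lives or circumstances
    impact: str              # the long-term change the programme contributes to
    assumptions: List[str] = field(default_factory=list)  # beliefs to test through M&E

job_readiness = LogicModel(
    inputs=["Funding", "Trainers", "Training venue"],
    activities=["Deliver employment skills training"],
    outputs=["Number of people trained"],
    outcomes=["Participants find and keep employment", "Household income improves"],
    impact="Reduced unemployment and poverty in the target community",
    assumptions=[
        "Training provides relevant, employer-appropriate skills",
        "There are enough employers to provide opportunities for participants",
    ],
)

# Each assumption becomes something the M&E system should test explicitly.
for assumption in job_readiness.assumptions:
    print("Test in M&E:", assumption)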
STEP 3: CLARIFY THE IMPACT

REVIEW YOUR PROGRAMME LOGIC OR DESIGN TO IDENTIFY YOUR ANTICIPATED IMPACT
• Think about what success would look like at the end of your programme to test if your assumptions are correct.
• Ensure that your impact reflects the unique contribution this programme is making.

REVIEW YOUR STAKEHOLDERS
• Review and revise (if needed) your stakeholder identification, prioritisation and analysis to identify all those you need to influence or impact to reach your desired outcomes.
• Ensure that stakeholders include those who will be directly involved in the desired changes and outcomes (e.g. direct participants in the target communities) and those who will help you achieve those changes (representatives of government, service providers, intermediaries, programme partners, etc.).
• If you have a lot of stakeholders, group them into similar clusters (per type and/or kind of influence or impact you would like to have), e.g. primary, secondary or tertiary stakeholders, or direct and indirect stakeholders.

REVIEW, REFINE OR DEFINE BEHAVIOUR CHANGE (SHORT- AND LONG-TERM OUTCOMES)
• For each stakeholder group, review what they would be doing differently by the end of your programme because of their participation in the activities. Did their behaviour change because of your influence? Are the changes you originally thought of still relevant? Have some stakeholders benefited more than others? Are all stakeholders' needs met equally, or is there a bias, for example towards girls or women?
• Ensure that these stakeholder changes relate to your impact.

CONSIDER PROGRAMME ACTIVITIES AND OUTPUTS
• Review the activities in your programme design and directly link them to programme outputs and outcomes.
• Ensure that programme outputs relate directly to programme outcomes and impact, as well as to the relevant programme objectives and subsequent indicators.
• Add any new activities as needed to bring about the desired behavioural changes of your targeted and other relevant stakeholder groups.

STEP 4: IDENTIFY WHAT INFORMATION IS REQUIRED

TYPES OF EVALUATION QUESTIONS
Evaluation questions help guide the M&E process and system, and relate to the whole programme outcome or impact, its design and effectiveness. Developing clear evaluation questions early in your M&E system development process can ensure that you collect information that is relevant and useful. It is important to develop questions that are most relevant to your programme objectives and development context, and that relate to your overall strategic purpose, mandate and objectives. Remember what you need the information for and who will use it to help guide you. Note that these are not questions you ask directly during data collection; rather, they guide what information you want to collect. There are three main kinds of questions that are valuable to ask in a programme M&E process:
1. Key evaluation outcome questions
2. Questions to help monitor or evaluate and assess programme progress and activities
3. Questions to test assumptions, programme logic or theory of change

You may also be asked to report on certain numbers and details about your programmes or specific programme activities, such as how many men, women, boys and girls participate, whether people have a disability, or whether they live in rural, remote or urban settings.
In research and evaluation, these kinds of details are often called disaggregated data; disaggregating simply means breaking down data sets within broader information processes based on additional categories or detail. You will need to keep these disaggregation requirements in mind when you design your monitoring questions and data collection tools, and when you collect and analyse your M&E information.

QUESTIONS TO TEST ASSUMPTIONS
Monitoring, evaluation and impact assessments should be used to test your most critical assumptions about the programmes you fund – those things we believe will occur and that, if they do not, will put the programme's success at risk. Questions that test assumptions relate to the programme logic or theory of change of your programme – HOW you think change will happen and whether the programme design and implementation match the real-world situation or specific development context in which it is working or applied. If we have a programme that runs employment skills training and the desired outcome is participants finding employment, we may be assuming things like “The training provides relevant, employer-appropriate skills to participants” and “There are enough employers to provide opportunities for training participants”. If we collect information about these assumptions, we can adjust programme activities to respond better to the real development situation and context in which our programme is working. Examples of questions that test these assumptions:
• How relevant is the training to the needs of local employers?
• Are there enough employers willing to offer training opportunities for participants?
• Do employers and participants feel the training is of high enough quality to prepare people for employment?

FACTORS THAT AFFECT THE QUALITY OF M&E INFORMATION
• Accuracy and validity: Does the information show the true situation?
• Relevance: Is the information relevant to user interests?
• Timeliness: Is the information available in time to make necessary decisions?
• Credibility: Is the information believable?
• Attribution: Are the results due to the project or something else?
• Significance: Is the information important or relevant?
• Representativeness: Does the information represent only the target group or the wider population as well?
• Spatial: Monitoring sites are selected for comfort and ease of access.
• Project: The assessor is drawn to sites where information and contacts are readily available and may have been assessed before.
• Person: Key informants tend to be those who are in a high position and have the ability to communicate.
• Season: Assessments are conducted during periods of pleasant weather, or areas cut off by bad weather are neglected in analysis and many typical problems go unnoticed.
• Diplomatic: Selectivity in projects shown to the assessor for diplomatic reasons.
• Professional: Assessors are too specialised and miss links between processes.
• Conflict: Assessors go only to areas of ceasefire and relative safety.
• Political: Informants present information that is skewed towards their political agenda; assessors look for information that fits their political agenda.
• Cultural: Incorrect assumptions are based on one's own cultural norms; assessors do not understand the cultural practices of the affected populations.
• Class and ethnicity: Needs and resources of different groups are not included in the assessment.
• Interviewer or investigator: Tendency to concentrate on information that confirms preconceived notions and hypotheses, causing one to seek consistency too early and overlook evidence inconsistent with earlier findings; partiality to the opinions of elite key informants. • Key informant: Biases of key informants carried into assessment results. • Gender: Male monitors may only speak to men; young men may be omitted. • Mandate or specialty: Agencies assess areas of their competency without an interdisciplinary or interagency approach. • Time of day or schedule bias: The assessment is conducted at a time of day when certain segments of the population may be over- or under-represented. • Sampling: Respondents are not representative of the population. REVIEW AND DEVELOP TARGETS AND INDICATORS You also need to develop some key indicators and performance targets to help track progress towards your desired outcomes. These targets and indicators are directly related – the target sets the specific amount you are aiming for and the indicator measures how much of it was achieved. For example, your target might be “50 young people have started new businesses” and the indicator to match would be “the number of young people who have started businesses”. Targets help make outcomes more specific and include something to aim for in terms of how much and by when. For example, if your outcome is “Children are gaining new skills and knowledge”, your target might be “60% of children by year three of the programme”. Setting targets for your outcomes can be useful when they are carefully selected and meaningful to programme participants as well as grantmaking staff and development practitioners. Indicators are a measure of something happening that contributes to your outcomes. For example, if your outcome is “Children are regularly attending school”, an indicator of this might be “Number of children attending school more than three days a week”. While indicators can be useful measures of programme progress and success, they need to be realistic to collect and tell you something meaningful about your programme. For example, if your outcome is “Children are gaining new skills and knowledge”, an indicator such as “Percentage of children with increased knowledge in science” would not tell you much, as this focuses only on one aspect of knowledge gained. Indicators that contain a percentage can also complicate accurate information collection, unless you have full control over the participants you are measuring. In this example for school children, it might be difficult to get accurate numbers of children from schools, so reporting about this indicator could be difficult. It is important to select indicators about which information can be easily accessed. Certain targets and indicators may be a requirement from investors, related to specific strategic intent. For example, an intervention can be directed only at girls, people with learning disabilities, previously disadvantaged individuals or people within or below a specific income category. It is important to review these to check if they are realistic and useful. Do you think your programme will realistically meet these targets? Can you gather accurate information about these targets and indicators? Are these targets important to the success of your broader programme? If you answer no to any of the questions, it is important to renegotiate the validity of the indicator to ensure that you are reporting on realistic targets and indicators. 
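To connect indicators, targets and the disaggregation requirements discussed earlier, the following minimal sketch counts hypothetical indicator records against a target and breaks them down by reporting category. The indicator, target and records are illustrative only.

# A minimal, illustrative sketch of recording indicator data against a target,
# with simple disaggregation by gender and setting. All values are hypothetical.
from collections import Counter

indicator = "Number of young people who have started businesses"
target = 50  # e.g. "50 young people have started new businesses" by year three

# Each record notes one participant who meets the indicator, with disaggregation fields.
records = [
    {"gender": "female", "setting": "rural"},
    {"gender": "male", "setting": "urban"},
    {"gender": "female", "setting": "urban"},
]

achieved = len(records)
print(f"{indicator}: {achieved} of {target} ({achieved / target:.0%} of target)")

# Disaggregate the same data by the categories reporting may require.
for category in ("gender", "setting"):
    print(category, dict(Counter(r[category] for r in records)))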
STEP 5: PLAN DATA MANAGEMENT
Data management refers to the processes and systems for how a project or programme will systematically and reliably store, manage and access M&E data. It is a critical part of the M&E system, linking data collection with its analysis and use. Poorly managed data wastes time, money and resources; lost or incorrectly recorded data affects not only the quality and reliability of the data, but also the time and resources invested in its analysis and use. Data management should be timely and secure, and in a format that is practical and user-friendly. It should be designed according to the project or programme needs, size and complexity. Typically, project or programme data management is part of an organisation's, project's or programme's larger data management system and should adhere to established policies and requirements. The following are seven key considerations for planning a project or programme data management system:

1. Data format: The format in which data is recorded, stored and eventually reported is an important aspect of overall data management. Generated data comes in many forms, but is primarily:
• Numerical (e.g. spreadsheets or database sets)
• Descriptive (e.g. narrative reports, checklists or forms)
• Visual (e.g. pictures, video, graphs, maps or diagrams)
• Audio (e.g. interview recordings)

2. Data organisation: A project or programme needs to organise its information into logical, easily understood categories to increase its access and use. Data organisation can depend on a variety of factors and should be tailored to the users' needs. Data is typically organised by one or a combination of the following classifications:
• Chronologically (e.g. month, quarter or year)
• By location (e.g. per site or location – rural, urban or region)
• By content or focus area (e.g. different objectives of a project or programme)
• By format (e.g. project reports, investor and intermediary reports or technical documents)

3. Data availability: Data should be available to its intended users and secure from unauthorised use (discussed below). Key considerations for data availability include:
• Access: How permission is granted and controlled to access data (e.g. shared computer drives, folders or intranets). This includes the classification of data for security purposes (e.g. confidential, public, internal or departmental).
• Searches: How data can be searched and found (e.g. keywords).
• Archival: How data is stored and retrieved for future use (e.g. weekly, monthly or quarterly).
• Dissemination: How data is shared with others (e.g. per stakeholder group).

4. Data security and legalities: Projects or programmes need to identify security considerations for confidential data, as well as the legal requirements of government, investors, intermediaries, beneficiaries and other partners. Data should be protected from non-authorised users. This can range from a lock on a filing cabinet to computer virus and firewall software. Data storage and retrieval should also conform to privacy clauses and regulations for auditing purposes.

5. Information technology (IT): The use of computer technology to systematise the recording, storage and use of data is especially useful for projects or programmes with considerable volumes of data, or as part of a larger programme for which data needs to be collected and analysed from multiple smaller projects or programmes.
IT systems can help to reorganise and combine data from various sources, highlighting patterns and trends for analysis and to guide decision-making. They are also effective for data and information sharing with multiple stakeholders in different locations. The use of IT systems should be balanced with the associated cost of computers and software, the resources needed to maintain and safeguard the system, and the capacity among intended users. Examples of IT systems for data management in M&E include:
• Handheld personal digital assistants (PDAs) to record survey findings
• Excel spreadsheets to store, organise and analyse data
• Microsoft Access to create user-friendly databases to enter and analyse data

6. Data quality control: It is important to identify procedures for checking and cleaning data, and how to treat missing data. In data management, unreliable data can result from poor typing or input, duplication of entries, inconsistent collection methodologies, and accidental deletion and loss of data. These problems are particularly common with quantitative data collection for statistical analysis. Another important aspect of data quality is version control. This is how documents can be tracked for changes over time. Naming a document as “final” does not help if it is revised afterwards. Version numbers (e.g. 1.0, 1.1, 2.0 or 2.1) can help, but using dates is also recommended.

7. Responsibility and accountability of data management: It is important to identify the individuals or teams responsible for developing and/or maintaining the data management system, assisting team members with its use and enforcing policies and regulations. It is important to identify who authorises the release of confidential data, or access to it.

STEP 6: PLAN DATA ANALYSIS
Data analysis is the process of converting collected (raw) data into usable information. This is a critical step in the M&E planning process, because it shapes the information that is reported and its potential use. It is a continuous process throughout the project or programme cycle to make sense of gathered data and inform ongoing and future programming. Such analysis can occur when data is initially collected, and certainly when data is explained in data reporting (discussed in the next step). Data analysis involves looking for trends, clusters or other relationships between different types of data, assessing performance against plans and targets, forming conclusions, anticipating problems and identifying solutions and best practices for decision-making and organisational learning. Reliable and timely analysis is essential for data credibility and utilisation.

DATA PREPARATION
Data preparation, often called data reduction or organisation, involves getting the data into a more usable form for analysis. Data should be prepared per its intended use, usually informed by the logframe's indicators. Typically, this involves cleaning, editing, coding and organising raw quantitative and qualitative data, as well as cross-checking the data for accuracy and consistency.

DATA ANALYSIS (FINDINGS AND CONCLUSIONS)
Data analysis can be descriptive or interpretive. Descriptive analysis involves describing key findings: the conditions, states and circumstances uncovered from the data. Interpretive analysis helps to provide meaning, explanation or causal relationships from the findings.
DATA VALIDATION

It is important to determine if and how subsequent analysis will occur. This may be necessary to verify findings, especially high-profile or controversial findings and conclusions. It may involve identifying additional primary and/or secondary sources to further triangulate the analysis, or making comparisons with other related research studies. For instance, additional interviews or focus group discussions may be needed to further clarify (validate) a finding. Subsequent research can also be used to follow up research topics emerging from the analysis for project or programme extension, additional funding or to inform the larger development community.

DATA PRESENTATION

Data presentation seeks to present data effectively so that it highlights key findings and conclusions. A useful question to answer when presenting data is "So what?". What does all this data mean or tell us – why is it important? Try to narrow down your answer to the key conclusions that explain the story the data presents and why it is significant. Other key reminders in data presentation include:
• Make sure that the analysis or finding you are trying to highlight is sufficiently demonstrated.
• Ensure that data presentation is as clear and simple as accuracy allows, so that users can easily understand it.
• Keep your audience in mind, so that data presentation can be tailored to the appropriate level or format (e.g. summary, verbal or written).
• Avoid or limit technical jargon or detail.

RECOMMENDATIONS AND ACTION PLANNING

Recommendations and action planning happen where data is used as evidence or justification for proposed actions. This is closely interrelated with the utilisation of reported information, but it is presented here because the process of identifying recommendations usually coincides with analysing findings and conclusions. It is important that there is a clear causality or rationale for the proposed actions, linking evidence to recommendations. It is also important to ensure that recommendations are specific, which will help in data reporting and utilisation. It is useful to express recommendations as specific action points that uphold the SMART criteria (specific, measurable, achievable, relevant and time-bound) and are targeted at the specific stakeholders who will take them forward. It is also useful to appoint one stakeholder who will follow up with all others to ensure that actions have been taken.

STEP 7: PLAN REPORTING AND UTILISATION

Having defined the project's or programme's informational needs and how data will be collected, managed and analysed, the next step is to plan how the data will be reported as information and put to good use. Reporting is the most visible part of the M&E system, where collected and analysed data is presented as information for key stakeholders to use. Reporting is a critical part of M&E because no matter how well data may be collected and analysed, if it is not well presented, it cannot be well used, which can be a considerable waste of valuable time, resources and personnel. Sadly, there are numerous examples where valuable data has proved valueless because it was poorly reported.
IDENTIFY THE SPECIFIC REPORTING NEEDS AND AUDIENCE

Reports should be prepared for a specific purpose or audience. This informs the appropriate content, format and timing for the report. For example, do users need information for ongoing project or programme implementation, strategic planning, compliance with investor requirements, evaluation of impact and/or organisational learning for future projects and programmes? It is best to identify reporting and other informational needs early in the M&E planning process, especially reporting requirements.

INTERNAL AND EXTERNAL REPORTING

INTERNAL REPORTING
• Primary audience: the project or programme team and the organisation in which it operates
• Primary purpose: to inform ongoing project management and decision-making (monitoring reporting)
• Frequency: on a regular basis, according to project monitoring needs
• Content: comprehensive, providing information that can be extracted for various external needs
• Format: typically determined by the project team according to what will best serve the project or programme needs and the organisational culture

EXTERNAL REPORTING
• Primary audience: stakeholders outside the immediate team or organisation, e.g. intermediaries, beneficiaries, partner organisations, government or development agencies
• Primary purpose: typically accountability, credibility, soliciting support, recognition, awareness, celebrating accomplishments and highlighting any challenges and how they are addressed
• Frequency: less often, in the form of periodic assessments (evaluations)
• Content: precise, typically abstracted from internal reports and focused on communication points (requirements) specific to the target audience
• Format: often determined by external requirements or the preferences of intended audiences

DETERMINE THE REPORTING FREQUENCY

It is critical to identify realistic reporting deadlines. They should be feasible in relation to the time, resources and capacity necessary to produce and distribute reports, including data collection, analysis and feedback. Some key points to keep in mind when planning the reporting frequency:
1. Reporting frequency should be based on the informational needs of the intended audience, timed so that it can inform key project or programme planning, decision-making and accountability events.
2. Reporting frequency will also be influenced by the complexity and cost of data collection. For instance, it is much easier and more affordable to report on a process indicator for the number of workshop participants than on an outcome indicator that measures behavioural change in a random sample household survey (which entails more time and resources).
3. Data may be collected regularly, but not everything needs to be reported to everyone all the time. For example:
• A security officer might want daily situational monitoring reports in a conflict setting.
• A field officer may need weekly reports on process indicators around activities to monitor project or programme implementation.
• A project or programme manager may want monthly reports on outputs and services to check whether they are on track.
• Project or programme management may want quarterly reports on outcome indicators of longer-term change.
• An evaluation team may want baseline and end-line reports on impact indicators at the project start and end.
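As a purely illustrative sketch of the point above about tailoring reporting frequency to the audience, the Python snippet below aggregates the same underlying monitoring data at weekly, monthly and quarterly intervals using pandas; the activity figures and column names are invented for the example.

import pandas as pd

# Hypothetical daily activity log: date and number of services delivered that day.
log = pd.DataFrame({
    "date": pd.date_range("2024-01-01", periods=180, freq="D"),
    "services_delivered": 25,
}).set_index("date")

# The same data, rolled up for different audiences.
weekly = log.resample("W").sum()     # e.g. for a field officer
monthly = log.resample("MS").sum()   # e.g. for a project or programme manager
quarterly = log.resample("QS").sum() # e.g. for programme management

print(monthly.head())

Collecting once and aggregating at several frequencies keeps the reporting burden low while still serving each audience's needs.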
DETERMINE SPECIFIC REPORTING FORMATS

Once the reporting audience (who), purpose (why) and timing (when) have been identified, it is important to determine the key reporting formats that are most appropriate for the intended user(s). These can vary from written documents to video presentations on the internet. Sometimes the reporting format must adhere to strict requirements, while at other times there can be more flexibility. Typical reporting formats include:
• Project management reports
• Evaluation reports
• Programme updates, mid-year and annual reports
• Operational updates
• Situation reports
• Activity and event reports
• Memos, pictures or videos
• Brochures, pamphlets, handouts or posters
• Newsletters, bulletins, publications, reference articles or leadership articles
• Media releases, public presentations, success stories or case studies

THE PROJECT OR PROGRAMME MANAGEMENT REPORT

Attention should be given to the project or programme management report because it typically forms the basis for internal information that will, in turn, provide information for external reporting. Other reporting formats may occur more frequently, e.g. for specific activities, or less frequently, such as evaluation reports, but the project or programme management report is usually the primary reporting mechanism for compiling information from various reports for project or programme management and for providing information for other reports for accountability.

Project or programme management reports should be produced regularly enough to monitor project or programme progress and identify any challenges or delays with sufficient time to respond adequately. Most organisations undertake management reporting monthly or quarterly; there are pros and cons to both. Monthly reporting allows a more regular overview of activities, which can be useful, particularly in a fast-changing context, such as during an emergency operation. However, more frequent data collection and analysis can be challenging if monitoring resources are limited. Quarterly reports allow more time between reports, with less focus on activities and more on change in the form of outputs and even outcomes.

A PROJECT OR PROGRAMME MANAGEMENT REPORT OUTLINE

1. Project or programme information: Summary of key project or programme information, e.g. name, dates, manager or codes.
2. Executive summary: Overall summary of the report, capturing the project status and highlighting key accomplishments, challenges and planned actions. This could also include indicators for people reached, volunteers, number of sites, etc.
3. Financial status: Concise overview of the project's or programme's financial status, based on monthly financial reports for the reporting quarter.
4. Situation or context analysis (positive and negative factors): Identify and discuss any factors that affect the project's or programme's operating context and implementation (e.g. a change in security or a government policy), as well as related actions to be taken.
5. Analysis of implementation: Critical section of analysis based on the objectives as stated in the project or programme logframe and data recorded in the project or programme indicator table.
6. Stakeholder participation and complaints: Summary of key stakeholders' participation and any complaints that have been filed.
7. Partnership agreements and other key actors: Lists any project or programme partners and agreements (e.g.
project or programme agreement, or a memorandum of understanding), and related comments.
8. Cross-cutting issues: Summary of activities undertaken or results achieved that relate to any cross-cutting issues (e.g. gender equality or environmental sustainability).
9. Project or programme staffing (human resources): Lists new personnel or other changes in project or programme staffing. Should also indicate whether management support is needed to resolve any issues.
10. Exit or sustainability strategy summary: Update on the progress of the sustainability strategy to ensure that the project or programme objectives will be able to continue after handover to local stakeholders.
11. Project management evaluation reporting status: Concise update on the project's or programme's key planning, monitoring, evaluation and reporting activities.
12. Key lessons: Highlights key lessons and how they can be applied to this or other similar projects, programmes or situations in future.
13. Annexure: Any other supplementary information.

STEP 8: DEVELOP MEASUREMENT (M&E) PLANS

Developing an M&E plan, tools and guideline documents is the necessary foundation for building an M&E system. The M&E plan covers objectives, indicators, data sources, plans for data collection, analysis, reporting, use and budget. It clearly outlines who should collect, analyse and report on certain data. It all looks great on paper, but systems are needed to operationalise every aspect. To implement M&E, the system elements that need to be addressed are human resources, information systems, capacity-building, decision-making processes and finances. From an organisational development perspective, the system elements are related and work together to create the whole. As M&E is an organisation-wide effort, the system is also organisation-wide.

Every project, intervention or investment portfolio and focus area should have a monitoring and evaluation (M&E) plan. This is the fundamental document that details the various programme objectives and the interventions developed to achieve them. It also describes the procedures that will be implemented to determine whether the objectives are met. It indicates how the expected results of a programme relate to its goals and objectives, and describes the data or information required, how this information will be collected and analysed, how it will be used, the resources that will be needed to obtain the information, and how the programme will be accountable and report back to stakeholders on the impact made or the change that was facilitated.

Effective measurement plans:
• State how a programme will measure its achievements and provide accountability or evidence of impact.
• Document consensus and provide transparency.
• Guide the implementation of M&E activities in a standardised and coordinated way.
• Preserve institutional memory.
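To illustrate how the elements of such a plan might be captured in a single working document, the hedged sketch below structures a hypothetical plan as plain Python data; every programme name, indicator, value and frequency shown is an invented placeholder rather than a requirement of this guide.

# A minimal, illustrative skeleton of an M&E plan captured as structured data.
# All names, indicators and figures are hypothetical placeholders.
me_plan = {
    "programme": "Youth Skills Programme (example)",
    "goal": "Improved employability of unemployed youth in the target district",
    "objectives": [
        {
            "statement": "Train 500 young people in accredited technical skills by year 2",
            "indicators": [
                {
                    "name": "Number of youth completing accredited training",
                    "baseline": 0,
                    "target": 500,
                    "data_source": "Training provider attendance and certification records",
                    "collection_frequency": "quarterly",
                    "responsible": "Programme M&E officer",
                }
            ],
        }
    ],
    "evaluation_schedule": ["baseline", "mid-term review", "end-line impact assessment"],
    "reporting": {"internal": "monthly", "external": "annually"},
}

# A simple completeness check: every indicator should carry the fields the plan requires.
required = {"name", "baseline", "target", "data_source", "collection_frequency", "responsible"}
for objective in me_plan["objectives"]:
    for indicator in objective["indicators"]:
        missing = required - set(indicator.keys())
        if missing:
            print(f"Indicator '{indicator.get('name')}' is missing: {sorted(missing)}")

Keeping the plan in a structured form like this makes it easier to review for completeness and to update as the programme evolves.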
Measurement plans should be created during the design phase of a programme and can be organised in various ways. Typically, these plans include:
• The underlying assumptions on which the achievement of programme goals depends
• The anticipated relationships between inputs, activities, outputs and outcomes
• Well-defined conceptual measures and definitions, along with baseline values
• The monitoring, evaluation and impact assessment schedule
• A list of data sources to be used
• Cost estimates for the M&E activities
• A list of the partnerships and collaborations that will help achieve the desired results
• A plan for the dissemination and utilisation of the information and knowledge gained

COMPONENTS OF MEASUREMENT (M&E) PLANS

The components of an M&E plan – an introduction; the programme description and framework; a detailed description of plan indicators; a data collection plan; a monitoring plan; an evaluation plan; a plan for utilising the information gained; and a mechanism to update the plan – are described in more detail below.

1. The introduction to the M&E plan should include:
• Information about the purpose of the programme, the specific M&E activities that are needed and why they are important.
• A development history that provides information about the motivations and expectations of the various internal and external stakeholders, and the extent of their interest, commitment and participation.

2. The programme description should include:
• A problem statement that identifies the specific problem to be addressed (a concise statement that provides information about the situation, the background, the context that needs changing, who the situation affects, and the situation's causes, magnitude and impact on society).
• The programme goal and objectives: The goal is a broad statement about a desired long-term outcome of the programme (e.g. improvement in the reproductive health of adolescents or a reduction in unwanted pregnancies in X population). Objectives are statements of desired and specific measurable programme results (e.g. to reduce the total fertility rate to 4.0 births by year X or to increase contraceptive prevalence over the life of the programme). The objectives listed in the programme description should be "SMART":
S – Specific: Is the desired outcome clearly specified?
M – Measurable: Can the achievement of the objective be quantified and measured?
A – Appropriate: Is the objective appropriately related to the programme's goal?
R – Realistic: Can the objective realistically be achieved with the available resources?
T – Timely: Over what period will the objective be (realistically) achieved?
• Descriptions of specific interventions to be implemented and their duration, geographic scope and target population.
• The list of resources required: financial, human and those related to the execution of the programme, e.g. the infrastructure (office space, equipment, staff, rental and supplies).
• The conceptual framework, which is a graphic depiction of the factors thought to influence the problem of interest and how these factors relate to one another – a detailed description of the intervention; how and why the design of the intervention will be successful; evidence that it has worked; evidence of needs assessments, risks, threats to be managed, etc.
• The logical framework or results framework that links the goal and objectives to the outcome of the intervention.

3. One of the most critical steps in designing an M&E system is selecting appropriate indicators.
The M&E plan should include descriptions of the indicators that will be used to monitor programme implementation and achievement of the goals and objectives. Indicators are clues, signs or markers that measure one aspect of a programme and show how close a programme is to its desired path and outcomes. They are used to provide benchmarks for demonstrating the achievements of a programme. 4. The data collection plan should include diagrams depicting the systems used for data collection, processing, analysis and reporting. The strength of these systems determines the validity of the information obtained. Data sources are sources of information used to collect the data needed to determine the indicators. Potential errors in data collection, or in the data sources themselves, must be carefully considered when determining the usefulness of data sources. 5. The monitoring plan describes: • Specific programme components that will be monitored, such as provider or intermediary performance or the utilisation of resources or activities conducted (e.g. on site, through completion of timesheets or attendance records). • How the monitoring will be conducted. • The indicators that will be used to measure results. Because monitoring is concerned with the status of ongoing activities, output indicators, also known as process indicators, are used, for example: How many children visit a child health clinic in one month? How many of these children are vaccinated during these visits? 6. The evaluation plan provides the specific research design and methodological approaches to be used to identify whether outcomes changes can be attributed to the programme. For instance, if a programme wants to test whether quality of patient care can be improved by training providers, the evaluation plan would identify a research design that could be used to measure the impact of such an intervention. One way this could be investigated would be through a quasi-experimental design in which providers in one facility are given a pre-test, followed by training and a post-test. For comparison purposes, a similar group of providers from another facility would be given the same pre- and post-test, without the intervening training. Then the test results would be compared to determine the impact of the training. 7. How the information gathered will be stored, disseminated and used should be defined at the planning stage of the project, and described in the M&E plan. This will help ensure that findings from M&E efforts are not wasted because they are not shared or used in subsequent management decisions. Aspects to be considered may include: • Dissemination channels which could include written reports, media releases and stories, case studies in mass media, specialised publications and speaking events. 
• The capacities needed to implement the efforts described in the measurement plan should be included in the document, e.g. specific subject knowledge or development expertise required, or whether impartial independent evaluators are required, especially in low-resourced organisations or conflict situations.
• The activities described in M&E plans should be conducted legally, ethically and with respect for those involved in and affected by them.
• M&E plans should convey technically accurate information and should be realistic, prudent, diplomatic and frugal.
• The various users of this information should be clearly defined, and the reports should be written with specific audiences in mind. M&E plans should serve the information needs of the intended users in practical ways. These users can range from those assessing national programme performance at the highest central levels to those allocating resources at the district or local level.

8. A mechanism for reviewing and updating the measurement plan should also be included. This is because changes in the programme can and will affect the original plans for monitoring as well as evaluation.

STEP 9: GUIDELINES FOR MANAGING THE MEASUREMENT CYCLE

The next element of measurement is the continuous forecasting, monitoring and evaluation of results by using indicators or metrics that explain whether the expected and/or intended change has happened, and by how much. The summary below can assist with the plan-do-assess-review cycle of managing and measuring impact.
GUIDELINES FOR MANAGING THE MEASUREMENT PROCESS

Phase: Plan
• Set goals – Articulate the desired impact of the investments or interventions. Establish a clear investment thesis and theory of change (ToC) to form the basis of strategic planning and ongoing decision-making and to serve as a reference point for investment programme performance.
• Develop a framework and set metrics – Determine metrics or indicators to be used to assess the performance of the investments or programmes. Develop an effective measurement framework that integrates metrics and indicators, and outlines how specific data is captured and used; utilise metrics that align with existing standards.

Phase: Do
• Collect and store data – Capture and store data in a timely and organised fashion. Ensure that the information technology, tools, resources, human capital and methods used to obtain and track data function properly.
• Validate data – Validate data to ensure sufficient quality. Verify that data is complete and transparent by cross-checking calculations and assumptions against known data sources, where applicable.

Phase: Assess
• Analyse data – Distil insights from the data collected. Review and analyse data to understand how investments are progressing against goals, objectives and targets.

Phase: Review
• Report data – Share progress with key stakeholders. Distribute impact data coherently, credibly and reliably to effectively inform decisions by all stakeholders.
• Make data-driven investment management decisions – Identify and implement mechanisms to strengthen the rigour of the investment process and outcomes. Assess stakeholder feedback on reported data and address recommendations to make changes to the investment thesis or ToC and logic model framework.

GENERIC M&E FRAMEWORK WITH ILLUSTRATIVE DATA

• Assessment and planning (programme development and research phase): situation analysis, response analysis, stakeholder needs analysis, collaboration plans.
• Inputs (resources) – programme-based data: staff, funds, materials, facilities, supplies.
• Activities (intervention services) – programme-based data: training, services, products, treatments, interventions.
• Outputs (immediate effects) – programme-based data: number of staff trained, number of clients served, number of tests conducted, percentage of workshops.
• Outcomes (intermediate effects) – population-based and behavioural data: behaviour (beneficiary), capacity (intermediary), risk behaviour, service use, clinical outcomes, quality of life, social norms.
• Impacts (long-term effects) – population-based data and social research studies: HIV prevalence, HIV incidence, STI incidence, Aids morbidity, Aids mortality, economic impact.

SECTION SEVEN
THE MEASUREMENT PROCESS

THE EVALUATION PROCESS

The evaluation process has three core stages:
• Planning the evaluation
• Conducting the evaluation
• Using the results

Planning an evaluation should always occur during programme design. Once the programme has been implemented, the development of evaluation findings should be timed to inform programme decisions and drive continuous improvement in programme delivery. For existing programmes, programme managers should endeavour to review whether evaluation arrangements are in place and, if not, make arrangements to ensure that an evaluation plan is developed and implemented.

PROGRAMME STAGES
• Identification and appraisal – Identify the problem (cause and effect); establish the case for involvement, funding and action, and identify and appraise solutions
• Select preferred approach – Funder approval
• Planning – Implementation plan for the selected approach
• Approval to proceed – Funder approval
• Implementation and delivery – Are we getting it right?
• Monitoring and evaluation Planning evaluation • Appointment of evaluation manager and development of evaluation plan Include details of evaluation planning in submission for programme approval Ensure appropriate data collection to support evaluation Conduct the evaluation and use the results • Analyse data and collate findings • Ensure findings are disseminated and used EVALUATION PROCESS To deliver findings that inform decision-making and inform continuous improvements of programmes, every evaluation process should: • be underpinned by a clear, considered evaluation plan • involve clear, transparent reporting that outlines methods, assumptions and key findings • result in findings that can inform decisions regarding programme delivery Next Generation Consultants - All rights reserved 107 THE MEASUREMENT PROCESS PLANNING THE EVALUATION Planning an evaluation is probably the most important step in the evaluation process. Different types of evaluation provide different kinds of information and support different decisions. This makes it important to plan upfront which questions need to be answered, as well as how and when. Evaluation planning should begin while the programme is being designed. Evaluations that are planned simultaneously with the plan for programme implementation will be more likely to result in meaningful findings than those that are planned after the programme has been implemented. Every evaluation plan should identify: • Why – the background and rationale for the programme and the evaluation, and who will use the results • What – the scope and objectives of the evaluation and questions for the evaluation to answer • Who – those responsible for managing and carrying out the evaluation • How – the methods of gathering and analysing data to answer the questions, strategies to manage risk and the plan for dissemination, disclosure and use of results • When – the timing of evaluation in the programme cycle, key milestones and deliverables • Resources – the people, materials, infrastructure and logistics needed for the evaluation There are key steps involved in the development of an evaluation plan. While each step is important, they do not necessarily have to be performed in the order listed. Some steps may need to be brought forward or revisited to ensure that the evaluation plan effectively identifies all the factors listed above. STEP 1: IDENTIFY STAKEHOLDERS Engaging key stakeholders early in the evaluation process can help inform an appropriate evaluation design and improve the usefulness and acceptability of the findings. Stakeholders are also much more likely to engage with and support the evaluation if they are involved in the process from the beginning. In general, stakeholders fall into three groups: • Those involved in running the programme, e.g. programme managers and staff, funding agencies and delivery bodies. • Those served or affected by the programme, e.g. programme participants (and their families), individuals who purchase services or products delivered by the programme and the general public. • Those who are interested in the programme and would use the evaluation results, e.g. senior public sector managers and government ministers, as well as community, industry or advocacy groups. Considering the needs of key stakeholders will influence decisions on the evaluation objectives, research questions and evaluation design. For example, stakeholders can inform: • How the evaluation can be timed to feed into decision-making. 
• Ways to increase the effectiveness of evaluation findings, including data requirements, presentation of the results, and mechanisms for and timing of dissemination. • How robust the findings need to be to support the intended end use and what level of scrutiny the findings will undergo. Stakeholder engagement and management will be more challenging where multiple agencies or proponents work together to deliver a programme. In these cases, identification and ongoing management of stakeholder expectations will be critical to ensure that evaluation objectives are relevant and that findings from an evaluation can be used to inform decision-making. 108 THE MEASUREMENT PROCESS STEP 2: UNDERSTAND PROGRAMME OBJECTIVES AND INTENDED OUTCOMES Before designing an evaluation, it is important to develop a complete understanding of how the programme works (or is intended to work), what it is trying to achieve (in terms of measurable objectives), and why (the underlying policy problem). This is often referred to as programme theory, programme logic or service logic. This information should have already been determined as part of the programme design, but needs to be carried across into the evaluation plan. To get a complete understanding of the programme, the evaluator and programme manager should identify: • Need: Why the programme is required • Objectives: What the programme aims to achieve and why • Inputs: Resources needed to operate the programme (labour, materials, etc.) • Activities: Processes, tools, events, technology and actions integral to programme implementation • Outputs: Direct products of programme activities, such as types of services to be delivered • Short-term outcomes, such as changes in awareness, knowledge, skills and attitude • Medium-term outcomes, such as behaviour changes • Long-term outcomes, such as wider economic, environmental and social impacts Where a programme involves multiple delivery strategies, the inputs and actions of each strategy should be identified separately. This will help to identify whether there are improvements that can be made to individual elements to increase the success of the programme as a whole. It will also be necessary to identify economic, social or political factors that might present challenges for the programme and potentially result in realised impacts that differ from intended impacts. STEP 3: DECIDE ON EVALUATION OBJECTIVES AND KEY EVALUATION QUESTIONS The purpose of an evaluation could be to examine one of the following elements of a programme, or a combination thereof: • The relevance and appropriateness of the programme’s objectives and activities in addressing recognised needs and priorities. • Programme processes governing design, implementation and delivery. • The effectiveness of the programme in achieving outcomes. • The efficiency of the programme in delivering outputs and outcomes. The specific objective/s of an evaluation will vary from programme to programme and will depend on the type of programme being delivered, the existing body of evidence about the programme, and the requirements for stakeholders to assess the programme and make improvements. Clear establishment of the objectives (or goals) of an evaluation are vital to ensure that the evaluation produces the right information for stakeholders and decision-makers. 
For example, an assessment of efficiency may provide information on whether the services are cost-effective, but this might be irrelevant if the services are not meeting the needs of clients, stakeholders or the broader communities (relevance, appropriateness and effectiveness). For this reason, many evaluations will involve assessment of multiple elements of a programme (such as effectiveness and efficiency) to ensure that a more complete picture of the programme is provided to stakeholders and decision-makers. There will always need to be a balance between the desired level of information and the resources required to produce evaluation findings.

Once the objective has been established, specific questions need to be formulated for the evaluation to address. The questions that an evaluation will need to investigate will depend on the scale of the programme and its intended impact. For programmes with multiple delivery strategies, each strategy may need to be evaluated, individually as well as collectively. Examining individual elements of a multifaceted programme can help to answer the following kinds of questions:
• Which programme initiatives are providing the greatest impact?
• Are there elements of programme delivery that are more or less effective than others in generating desired outcomes?
• Is greater impact achieved when specific strategies are combined in a package of initiatives?
• What are the contextual settings in which mechanisms are triggered to achieve desired outcomes?

EVALUATION QUESTIONS

The programme elements (objectives; inputs, activities and outputs; short-, medium- and long-term outcomes) give rise to different kinds of evaluation questions:

Relevance and appropriateness
• Does the identified need still exist and is the programme relevant to the current need?
• Does the programme align with government objectives and goals?
• Does the programme align with community needs?
• Are the programme activities and outputs consistent with the identified objectives?

Efficiency
• How can the programme be more efficient in achieving its objectives?
• Is the current mix of programmes efficient?
• Do the benefits of the programme exceed the cost of delivery?
• Could others provide the services more efficiently?

Process (design, implementation and delivery)
• Was the programme delivered as planned? If not, why?
• How can the programme (delivery, outcomes, reach, etc.) be improved?

Effectiveness
• What impacts resulted from the programme?
• Were there any unintended consequences of an initiative, and if so, why?
• How were different stakeholders impacted?

Sustainability
• Are the delivery mechanisms sustainable?
• Are the beneficiaries and recipients sustainable after the programme?
• Are there other means of making the programme sustainable?

STEP 4: CHOOSE APPROPRIATE EVALUATION METHODS

After deciding on the objectives of the evaluation and the key questions the evaluation will need to address, the evaluator must consider:
• Methods: Which methods will be used to answer the evaluation questions? The choice of methods will depend on what information is needed, the programme characteristics, and organisational capacity and resources.
• Data collection and analysis: What information is required, and from whom and how can the information best be obtained? There are several methods that can be used to source or collect data, and the evaluation plan should outline what data is required and how it will be sourced.
Where possible, evaluations should incorporate qualitative and quantitative data collection methods. The plan should also identify whether there are any cultural or ethical considerations or privacy concerns that need to be taken into account in the collection and use of data, and strategies to deal with limitations or deficiencies in data collection methods. There are different methods for collecting and analysing data, and various issues that need to be considered when selecting an appropriate method.
• Reporting: How will the information be used? Analysis and reporting requirements will depend on the objectives of the evaluation and the intended audience. Consideration should be given to the extent of analysis required to develop valid findings and the best mechanism for communicating findings (such as the format, language and structure of reporting).

While the specific elements of evaluation design may be refined during the evaluation, these issues should be considered while the programme is planned, as they will impact resourcing, timing and consultation requirements.

STEP 5: SPECIFY CRITERIA FOR MEASURING SUCCESS

The evaluation plan should specify explicit criteria for determining the success of a programme, to ensure that objective conclusions can be drawn. Appropriate criteria may depend on the questions that are asked, the methods chosen for the evaluation and the objectives of the programme. The criteria could overlap with performance measures or key performance indicators (such as the percentage of people changing behaviour as a result of the programme), or could be specific to the evaluation method (such as a positive benefit-cost ratio for a cost-benefit analysis).

Examples of success criteria by evaluation question:
• Design, implementation and delivery – Criteria measure the extent to which the programme delivered its activities and outputs in line with the implementation plan. Example: the number of people assisted was within a specified percentage of a target.
• Effectiveness – Criteria measure the quantifiable extent of the effect of the programme (the outcomes achieved) as a result of programme activities and outputs. Examples: at least a specified percentage of participants changed behaviour as a result of the programme; the programme resulted in a statistically significant improvement in behaviour.
• Efficiency – Criteria measure how resources are used to produce outputs for the purposes of achieving desired outcomes. Examples: the programme has a positive cost-benefit ratio; the cost of a unit of activity is in line with or lower than the national average of a specified benchmark.
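To show how such criteria can be tested in practice, the hedged Python sketch below checks an effectiveness criterion (a statistically significant improvement in a behaviour, here via a two-proportion z-test from the statsmodels package, assumed to be available) and an efficiency criterion (a positive benefit-cost ratio, interpreted as benefits exceeding costs). All figures are invented for illustration, and the test shown is only one of several defensible choices.

from statsmodels.stats.proportion import proportions_ztest

# Hypothetical survey results: respondents reporting the desired behaviour
# at baseline and at end-line (figures are illustrative only).
baseline_positive, baseline_n = 120, 400   # 30% at baseline
endline_positive, endline_n = 190, 380     # 50% at end-line

# Effectiveness criterion: statistically significant improvement in behaviour.
stat, p_value = proportions_ztest(
    count=[endline_positive, baseline_positive],
    nobs=[endline_n, baseline_n],
    alternative="larger",   # test whether the end-line proportion is larger
)
print(f"Behaviour change: {baseline_positive/baseline_n:.0%} -> {endline_positive/endline_n:.0%}, p = {p_value:.4f}")
print("Effectiveness criterion met" if p_value < 0.05 else "Effectiveness criterion not met")

# Efficiency criterion: benefit-cost ratio (here interpreted as benefits exceeding costs).
total_benefits = 1_500_000   # illustrative monetised benefits
total_costs = 1_100_000      # illustrative programme costs
bcr = total_benefits / total_costs
print(f"Benefit-cost ratio: {bcr:.2f} ({'criterion met' if bcr > 1 else 'criterion not met'})")

In practice, the threshold values (significance level, target percentages, benchmark unit costs) should be agreed with stakeholders in the evaluation plan before the data is collected.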
STEP 6: CONSIDER WHO SHOULD MANAGE AND CONDUCT THE EVALUATION

The evaluation team could be internal to the programme, external, or a combination of both. The expertise and size of the evaluation team will depend on the objectives of the evaluation, the evaluation design and the scale of the evaluation required. If the evaluation is outsourced, the terms of reference must cover:
• The rationale for the evaluation
• Key evaluation questions that need to be investigated
• The type of evaluation required (e.g. process evaluation or cost-benefit analysis)
• Scope of the evaluation
• Key stakeholders
• Milestones, deliverables and modes of reporting
• Expectations for dissemination, disclosure and use of results

STEP 7: CONDUCT A RISK ASSESSMENT AND DEVELOP MITIGATION STRATEGIES

Consider potential risks to the effectiveness of the evaluation and identify strategies to prevent or minimise them. The development of contingency plans will improve the likelihood of evaluation managers being able to quickly implement appropriate responses to emerging problems and ensure that the evaluation stays on track to meet its objectives.

Risks and risk management strategies:
• Evaluation support – Ensure that the need for evaluation is clearly communicated to senior officers. Ensure that the evaluation findings will be valid and timed to inform decision-making. Maintain regular contact with stakeholders to ensure that their needs are met.
• Timing of activities – Regular monitoring of evaluation plan milestones by evaluators. Allocate sufficient time between evaluation milestones to allow a small amount of project creep. Ensure stakeholders are informed of the time involved in carrying out different evaluation approaches.
• Reliable data – Identify and clearly communicate data requirements at the earliest possible stage during programme planning to enable data collection during programme implementation. Attempt to access similar data from an alternative source. Modify the research design if it is not possible to source appropriate data.
• Funding availability – Include the evaluation budget as a separate item in the programme budget. Ensure funding proposals are aligned with established funder (grantmaking) processes. Ensure that the need for the evaluation has been appropriately communicated and that findings from the evaluation can be used to inform decision-making.

CONDUCTING THE EVALUATION

It is important to note that, even when an evaluation is meticulously planned, not all processes will go smoothly. Evaluators can take a number of steps to help ensure successful implementation of the evaluation:
• Ensure clear and regular communication between the programme evaluator, programme manager and key stakeholders throughout the evaluation process.
• Identify early on whether there are conflicting stakeholder interests and determine strategies to minimise tension about the findings of the evaluation among stakeholders.
• Ensure flexibility in programme implementation and coordination between programme and evaluation objectives.

If the evaluation is not progressing smoothly, it is important to identify why and to set it right as soon as possible. This demands good project management and strategic skills from the programme manager and the evaluator, and a willingness to call a halt to the evaluation and determine what is wrong. If the evaluation occurs over a staged process rather than being a once-off exercise, ongoing risk management should take place. Risks need to be actively reviewed and managed throughout the evaluation process to take account of changing circumstances that may impact the success of the evaluation. This reiterates the need for risk management strategies established as part of the evaluation plan.

USING THE RESULTS

The findings of the evaluation should be used for decision-making, accountability and improving existing programmes.
Decisions about improvements can be made at any time during the evaluation for continuous improvement, or they can be used to improve the way in which future programmes are designed. Evaluation findings can also be used to inform decisions about the way the programme is managed. Different audiences will have different expectations when it comes to being informed about evaluation findings. The approach for reporting on and disseminating evaluation findings will need to be adapted to suit the audience. For many evaluations, an evaluation report will be the main means of communication with stakeholders and decision-makers about the programme. A formal evaluation report should include the following: • An executive summary and a list of findings, judgements and/or recommendations • A brief description of the evaluation objectives, method, participants and limitations • A brief description of the programme background, description, management, participants, objectives and method • A section on evaluation findings (including results and their sources) • A section on the conclusions drawn from the evaluation • A conclusive summary, describing what the evaluation brought to light and who should know about it (this could be an executive summary). For some audiences, effective communication of learnings may require communication of findings by more contemporary means. This could include presentations and information sessions, or making use of video, audio and written stories disseminated via a wide range of media. PRACTICAL CONSIDERATIONS There are practical considerations that need to be taken into account when planning and undertaking an evaluation. These include the need for: • clarification of roles and responsibilities • specific timing requirements for an evaluation and the impact on resourcing • reflection on the success of an evaluation in meeting objectives and the potential for improvement • a strategic approach when determining appropriate resourcing for an evaluation Next Generation Consultants - All rights reserved 113 THE MEASUREMENT PROCESS CLARIFYING ROLES AND RESPONSIBILITIES For any evaluation, there needs to be clear roles and responsibilities. The roles required for an evaluation will vary depending on the nature of the programme and the evaluation requirements. Where the evaluation manager is not the programme manager, the two parties need to work closely together to ensure that the programme planning and implementation support effective evaluation, and that the evaluation findings are timed and relevant to be able to influence decision-making. To enable this, the selection of an evaluation manager should occur during programme development, to allow the evaluation manager to work with the programme manager to ensure that programme planning and evaluation planning occur simultaneously. 
Where the programme involves multiple delivery partners, consideration should be given to formalising roles and responsibilities through MOUs, agreements, contracts or committees, which set out: • the objectives of the arrangement, including desired outcomes and timeframes • the roles and responsibilities of the agencies, including their capacity to contribute • the details of the activity, including specifications of services or projects to be undertaken • resources to be applied by the agencies and related budgetary issues • the approach to identifying and sharing risks and opportunities • agreed dispute resolution arrangements Roles and responsibilities in programme evaluation Role Potential responsibilities Funding providers Ensure that there is sufficient rationale for programme development and implementation. Encourage programme developers to consider evaluation requirements during programme design and provide support to encourage and enable the effective evaluation of programmes. Over time, ensure that there is sufficient evidence to support understanding of a programme’s effectiveness and efficiency. Programme developer Ensure that the programme has appropriate and clear objectives, targets and performance criteria. Appoint an evaluation manager. Ensure adequate data collection to support any planned evaluation. Work with the evaluation manager to ensure that the evaluation is timed to enable the incorporation of evaluation findings into programme decisions and that the evaluation will produce findings that are relevant to decision-making. 114 Roles and responsibilities in programme evaluation Role Potential responsibilities Evaluation manager Preparation of terms of reference for evaluation, including clarifying objectives, scope and key stakeholders. Work with the programme manager to ensure that the evaluation is timed to enable the incorporation of evaluation findings in programme decisions and that the evaluation will produce findings that are relevant to decision-making. Ensure stakeholder involvement throughout the evaluation process. Ensure that evaluation stays on track, meets its objectives, is on time and is delivered within budget. Quality assurance. Dissemination of evaluation findings. Evaluator Conducts the evaluation. Prepares evaluation reports. THE MEASUREMENT PROCESS On a broader scale, investors and grantmakers need to ensure appropriate support and resources to enable effective programme evaluation. It is ultimately the responsibility of funders and investors to encourage a culture that articulates and supports the need for programme evaluation. EVALUATION TIMEFRAMES The time taken to undertake an evaluation will vary depending on the type of evaluation approach, the programme’s complexity and the strength of evidence required for the evaluation. A simple process evaluation could take only a few days. Complex programme evaluations could take several years, depending on the time required to detect changes in short-, medium- and long-term outcomes. The process of designing a prospective impact evaluation and collecting a baseline from scratch can often take a year or more. Once the programme starts, the intervention needs sufficient exposure to affect outcomes. Depending on the programme, that can take between one and five years, or even more. Collecting one or more follow-up surveys, doing the analysis and disseminating evaluation findings will also involve substantial effort over several months. 
Altogether, a complete impact evaluation cycle from start to finish can take three to four years of intensive work and engagement, with adequate financial and technical resources required each step of the way. It is important that stakeholders are realistic about the cost, complexity and time involved in carrying out different evaluation approaches. For evaluations that extend over a longer period, it is important to have predetermined reporting points as part of the evaluation plan. These points provide an opportunity to disseminate information on how the evaluation process is conducted on an ongoing basis, considering these elements: • Planning: Has an appropriate evaluation plan been developed that includes clear consideration of objectives, design, methods, outputs, governance arrangements and risk management strategies? • Competency: Do the personnel assigned to the evaluation have sufficient capability (in terms of knowledge, skills, abilities and experience) to undertake the required tasks? Next Generation Consultants - All rights reserved 115 THE MEASUREMENT PROCESS • Evaluation scope and design: Is the evaluation scope and design appropriate, given the programme and evaluation objectives, as well as the resourcing and timing constraints? • Data and analysis: Was the data collected sufficiently reliable for the intended use? If not, what could improve data collection in the future? Have data and processing techniques been reviewed for accuracy and reliability? Have weaknesses in data collection or analysis been identified and, where possible, corrected? • Findings and conclusions: Is there sufficient supporting evidence to underpin the evaluation findings? Are the conclusions and recommendations supported by the findings? Have the assumptions and limitations of the evaluation been made transparent? • Quality assurance: Were appropriate internal controls developed and adhered to in order to ensure the accuracy, reliability and applicability of the findings? A useful strategy for ongoing improvement in evaluation practices can be to subject the evaluation process and findings to a peer review, undertaken by a qualified individual (or team) not involved in the evaluation. The purpose of undertaking a peer review should be to determine if the evaluators made the most of the available data and used appropriate methods (given external factors, unforeseen events and constraints), and to identify actions that could be taken to improve future evaluations. SCALE OF THE EVALUATION A strategic approach should be adopted when deciding on the scope, governance arrangements and resource requirements of an evaluation. As the scale, complexity and risk associated with a programme increase, the evaluation of the programme must become more extensive. In general, the level of resources devoted to the evaluation should be proportional to those devoted to the programme and the magnitude of the expected impacts. More resources should be committed to evaluations where the programme is expensive, complex, large-scale or high-risk. Fewer resources will be allocated for an evaluation of a programme that is low-spend and low-impact. Resourcing of evaluations should ensure that evaluations are done to an acceptable standard and that the evaluation represents value for money. 
There are cases where a high level of resources might be required for the evaluation, even though total expenditure on the programme could be considered low: • If the evaluation findings will be used to inform decisions on whether to roll the programme out to a wider area or group (such as with a pilot or trial). • If the evaluation findings are to be generalised (used as evidence of other programmes’ effectiveness). In general, the programme characteristics and scale are important factors to consider in decisions on: • evaluation objectives and design • evaluator appointment • stakeholder consultation requirements A STRATEGIC APPROACH SHOULD BE ADOPTED WHEN DECIDING ON THE SCOPE, GOVERNANCE ARRANGEMENTS AND RESOURCE REQUIREMENTS OF AN EVALUATION. “ “ 116 THE MEASUREMENT PROCESS Scaling evaluation Programme characteristics Evaluation design Evaluator Stakeholder consultation Low risk Simple programme design Low resource requirements Ongoing programme Single delivery body Low potential for behavioural impacts Qualitative assessment of implementation success and programme efficiency built into programme reporting could be sufficient. Additional assessment should be undertaken where it is cost-effective. The evaluation can be conducted by an internal evaluator, seeking advice and assistance from experts if required. Consultation may simply involve project managers, but could also include individuals or institutions directly impacted by the programme. High risk Complex programme design High resource requirements Pilot programme Multiple delivery bodies High potential for behavioural impact Comprehensive evaluation design that assesses the programme’s implementation, efficiency and effectiveness from a societal perspective. The evaluation should be managed by an external party independent of the agencies involved in programme delivery. Extensive consultation with the stakeholders listed above. Evaluation experts on the appropriateness of evaluation design, methods and assumptions. Subject experts on the fit-for-purpose nature of the evaluation approach and potential findings. The interdependence of evaluation and programme characteristics reinforces the need for developing an evaluation plan while the programme is designed, rather than after the programme has been implemented, so that the programme budget can take into account the resources required for an effective evaluation. IN GENERAL, THE LEVEL OF RESOURCES DEVOTED TO THE EVALUATION SHOULD BE PROPORTIONAL TO THOSE DEVOTED TO THE PROGRAMME AND THE MAGNITUDE OF THE EXPECTED IMPACTS. “ “ Next Generation Consultants - All rights reserved 117 THE MEASUREMENT PROCESS HOW TO DEVELOP IT Step 1: Decide on the process Who will be involved with which roles and responsibilities, and which sources of information will be used? Inadequate practice: Brainstorm with post-it notes or repackage planning documents. Adequate practice: Systematic process drawing on previous planning, research and evaluation. Better practice: Ensure that processes are in place for direct input from beneficiaries in terms of what the project or programme is trying to achieve, and how. Step 2: Do a situation analysis Identify the problem or needs that will be addressed, their underlying causes and which strengths and opportunities could be leveraged Inadequate practice: Only focus on problems. Adequate practice: Include all relevant problems and needs, identify existing resources and opportunities, and then analyse. 
Match with foundation strategies, focus areas, criteria and priorities. Better practice: Conduct a Socio-economic baseline study of the region where the intervention is planned. Conduct stakeholder engagement with the identified beneficiaries or intermediaries. Consult sector research reports and local government or departmental development plans. Step 3: Identify intended outcomes and impacts In broad terms, not limiting at this stage to what can be easily measured Inadequate practice: Narrowly define impacts in terms of what can be readily measured. Adequate practice: Describe intended outcomes and impacts in broad terms and address standards of performance and measurement plans separately. Better practice: Consider international standards and units of measurement. Consider existing practices and performance management standards. Conduct baseline studies to determine validity, usability and comparability of measurement standards. PRACTICE NOTES FOR DEVELOPING A THEORY OF CHANGE AND LOGIC MODEL 118 THE MEASUREMENT PROCESS Step 4: Identify change theories In broad terms, how it is understood that a change come about Inadequate practice: Identify none or only one; only address one or two levels of change, e.g. individuals, organisations or the eco-system. Adequate practice: Be explicit about different change theories at different levels and in different contexts. Match to the foundation’s priority focus areas and funding criteria. Better practice: Identify the interconnections between change theories at the different levels, e.g. programme, portfolio, organisational or sector. HOW TO DEVELOP IT Step 5: Identify action theories Identify action theories about what will be done to activate each change theory Inadequate practice: List activities. Adequate practice: Identify activities to achieve specific outcomes at various levels and stages in the causal chain. Better practice: Identify different activities relevant in different contexts. Step 6: Address sustainability How it is understood that the achievements of the project will be maintained (It might not involve continuing the project activities.) Inadequate practice: Not addressed. Adequate practice: Strategy for sustainability is explicit and plausible. Better practice: Strategy for sustainability is explicit, shared with all stakeholders and supported by an exit plan. Step 7: Address scaling How it is understood that the scale of activities and impact will be increased Inadequate practice: Not addressed or not logically represented. Adequate practice: Strategy for scaling is explicit and plausible. Better practice: Strategy for scaling is explicit and supported by additional indicators, e.g. forecasting (replication). Next Generation Consultants - All rights reserved 119 THE MEASUREMENT PROCESS Step 8: Identify possible impacts Consider dimensions of impact – short-, medium- or long-term, social, economic, environmental, positive or negative, intended or unintended, etc. Inadequate practice: Possible negative impacts or possible unintended positive impacts not addressed. Adequate practice: Possible significant negative impacts are identified and monitored as part of risk management. Possible significant unintended positive impacts are identified and included. Better practice: Where appropriate, strategies are put in place to reduce the risk of identified possible negative impacts. Ongoing scanning for additional possible significant unintended impacts. 
Step 9: Review and revise the theory of change and logic model At periodic intervals and when necessary Inadequate practice: Either set-and-forget or constant tinkering, with little benefit. Adequate practice: Develop a “good enough” version and iteratively review and revise to address important issues. Better practice: Review and update theory of change in conjunction with monitoring, evaluation and impact reports. Step 10: Represent the theory of change and logic model Use one or more diagrams and narrative Inadequate practice: Use a linear logic framework model of inputs, activities, outputs, outcomes and impacts. Use one diagram for all purposes; develop an idiosyncratic diagram. Adequate practice: Use an outcomes hierarchy diagram to show the change theories and explain the action theories in an accompanying narrative. Use different but related versions for different purposes, especially in terms of levels of detail. Develop a diagram that explicitly draws on theories of change for that portfolio. Better practice: Use a triple-column outcomes hierarchy that shows how activities and other factors jointly produce a chain of results (logic framework model). Use a diagram that is integrated with nested diagrams for related projects, programmes and the overall foundation theory of change and logic framework model. 120 THE MEASUREMENT PROCESS HOW TO USE IT Use the theory of change and logic framework model Plan and integrate research, monitoring and evaluation and impact assessment Inadequate practice: Use only to identify indicators and specific causal relations to be tested. Adequate practice: Use in all discussions of findings to shape and improve thinking about how the programme or project works. Better practice: Use in association with monitoring, evaluation and impact assessment reports that are updated regularly. EVALUATION STAGES Stages of evaluation Outcomes of evaluation Examples of evaluation questions Pre-project stage Assess needs and assets of target population or community. Specify goals and objectives of planned services and activities. Describe how planned services and activities will lead to goals. Identify which community resources will be needed and how they can be obtained. Determine the match between project plans and community priorities. Obtain input from stakeholders. Develop an overall evaluation strategy. COMMUNITY NEEDS ASSESSMENT CONDUCTED DURING THE APPLICATION AND DUE DILIGENCE PHASE OF A NEW PROGRAMME To obtain a context for future evaluation and consider the demographics in the community and access and opportunities regarding services and programmes. To understand, consider and focus on the social, economic, environmental, cultural and political situation in the community and the stakeholder groups affected by the intervention. To identify existing community action groups and understand the history of their efforts. To identify existing formal, informal and potential leaders. To identify community needs and service gaps. To identify community strengths and opportunities. To understand community stakeholders – beneficiaries and recipients to improve, build and secure project credibility in the community. To create momentum for project activities by getting community input. To determine the appropriateness of project goals and provide baseline data for later evaluations. 
Next Generation Consultants - All rights reserved 121 THE MEASUREMENT PROCESS Stages of evaluation Outcomes of evaluation Examples of evaluation questions ORGANISATIONAL ASSESSMENTS CONDUCTED BEFORE FUNDING AN ORGANISATION To contextualise and understand project implementation and management aspects. To consider the internal resources of the implementing organisation, such as leadership styles, staff characteristics (e.g. training, experience and cultural competence), organisational culture, mission, partner associations, financial stability and industry credibility. To understand how the organisation and the envisaged programme will interact and how it will affect the effectiveness of the programme (e.g. to consider critical competencies and skills of the implementing organisation, to understand the research conducted to develop the programme, to understand the engagement conducted to implement the programme, to understand the development context and sector in which the programme will be executed). To understand which resources (e.g. funding, staffing, organisational and/or institutional support and infrastructure), specialist expertise and opportunities exist and are available to the project and to the evaluation To understand to what extent opportunities to participate in the evaluation process are available for people who have a stake in the project’s outcome. SPECIFIC QUESTIONS TO INCLUDE COMMUNITY NEEDS ASSESSMENT Determine the conditions on the ground by conducting a Socio-economic survey or baseline study. Consider economic, social and environmental benchmarks that speak to the proposed focus areas (e.g. concerning education, research the number of schools and teachers, teacher-to-learner ratio, general educational statistics, people with matric, people with tertiary qualifications, labour statistics, conditions in schools, number of graduates per subject). 122 Stages of evaluation Outcomes of evaluation Examples of evaluation questions Compare the findings of the Socio-economic survey or baseline study to community needs through stakeholder engagement. Identify communities that represent the proposed programme (e.g. teachers, learners, principals, department of education) and test needs and possible solutions. Identify intermediary organisations that could potentially be used to implement proposed programmes (e.g. NGOs or specialist organisations working in education, the kind of programmes they developed, the success rate, their challenges and sustainability). Identify what kind of skills reside in the foundation to be able to design programmes, manage programmes, or assess and evaluate programmes. ORGANISATIONAL ASSESSMENT How credible the organisation is (test for sustainability, relevance, effectiveness, efficiency, recognition and awards). How credible the programme design is (test for relevance, feasibility, viability, cost-benefit, sustainability and innovation). How credible the implementing team is (test for specialist skills, subject expertise, research and evaluation competencies, number of programme staff and infrastructure requirements). Design or planning stage Determine underlying programme assumptions. Develop a theory of change and programme logic model. Develop a system to obtain and present information to stakeholders. Assess feasibility of procedures given actual staff and funds. NEW PROGRAMMES To understand, from multiple perspectives, what is happening with the project. How is it implemented and how or why have decisions been made along the way? 
In short, to answer the following questions: To what extent does the project look and act like the one that was planned? Are the differences between planned and actual implementation based on what made sense for the beneficiaries and goals of the project? How is the project working now and what additional changes may be necessary? THE MEASUREMENT PROCESS Next Generation Consultants - All rights reserved 123 THE MEASUREMENT PROCESS Stages of evaluation Outcomes of evaluation Examples of evaluation questions Assess data that can be gathered from routine project activities. Develop a data collection system if it will answer desired questions. Collect baseline data on key outcome and implementation areas. SPECIFIC PROGRAMME EVALUATION QUESTIONS Which characteristics of the project implementation process have facilitated or hindered project goals? (Include all relevant stakeholders in this discussion, such as beneficiaries, participants, residents, consumers, staff, administrators, board members, other development agencies, intermediaries and policymakers.) Which initial strategies or activities of the project are implemented? Which are not? Why or why not? How can those strategies or activities not successfully implemented be adapted to the realities of the project? Is the project reaching its intended audience? Why or why not? What changes must be made to reach intended audiences more effectively? What lessons have been learned about the planned programme design? How should these lessons be applied to continually revise the original project plan? Do the changes in programme design reflect these lessons or other unrelated factors (e.g. personalities or organisational dynamics)? How can we better connect programme design changes to documented implementation lessons? Project implementation and modification stages: MONITORING Assess organisational processes or environmental factors that inhibit or promote project success. Describe project and assess reasons for changes from original implementation plan. Analyse feedback from staff and participants about successes and failures, and use this information to modify the project. Provide information on shortterm outcomes for stakeholders and decision-makers. ESTABLISHED PROGRAMMES: PROGRAMME EFFECTIVENESS AND RELEVANCE QUESTIONS TO UNDERSTAND Which project operations work? Which aren’t working? Why or why not? Which project logistics (e.g. facilities, scheduling of events, location, group size, transport arrangements or catering arrangements) appear to be most appropriate and useful for meeting the needs of communities and beneficiaries? What strategies have been successful in encouraging community and beneficiary participation and involvement? Which have been unsuccessful? 124 THE MEASUREMENT PROCESS Stages of evaluation Outcomes of evaluation Examples of evaluation questions Use short-term outcome data to improve the project. Describe how short-term outcomes affect long-term outcomes. Continue to collect data on short- and long-term outcomes. Assess assumptions about how and why programme works, and modify as needed. How do the project components interact and fit together to form a coherent whole? Which project components are the most important to project success? How effective is the organisational structure in supporting project implementation? What changes need to be made? TO IMPROVE THE DESIGN AND PERFORMANCE OF AN ONGOING PROGRAMME – A FORMATIVE EVALUATION What are the programme’s strengths and weaknesses? 
What kinds of implementation problems have emerged and how are they addressed? What is the progress towards achieving the desired outputs and outcomes? Are the activities planned sufficient (in quantity and quality) to achieve the outputs? Are the selected indicators pertinent and specific enough to measure the outputs? Do they need to be revised? Has it been feasible to collect data on selected indicators? Have the indicators been used and have they been useful for monitoring? Why are some implemented activities working better than others? What is happening that was not expected? How are programme staff, intermediaries and beneficiaries interacting? What are implementers’ and target groups’ perceptions of the programme? What do they like or dislike? What would they like to change? How are funds used compared to the initial expectations? Where can efficiencies be realised? How is the external environment affecting internal operations of the programme? Are the originally identified assumptions still valid? Does the programme include strategies to reduce the impact of identified risks? What new ideas are emerging that can be tried and tested? Next Generation Consultants - All rights reserved 125 THE MEASUREMENT PROCESS Stages of evaluation Outcomes of evaluation Examples of evaluation questions Project management, maintenance and sustainability stages: EVALUATION Share the findings with the community and other projects. Inform alternative funding sources (other funders) about accomplishments. Continue to use the evaluation to improve the project and monitor outcomes. Continue to share information with multiple stakeholders. Assess long-term impact and implementation lessons, and describe how and why the programme works. PILOTING FUTURE PROGRAMMES To inform and pilot new ideas and determine if these ideas make sense and are achievable. SPECIFIC QUESTIONS FOR THIS EVALUATION What is unique about this project? Which project strengths can we build on to meet unmet needs? Where are the gaps in services and programme activities? How can the project be modified or expanded to meet needs that still exist? Can the project be effectively replicated? What are the critical implementation elements? How might contextual factors impact replication? TO MAKE AN OVERALL JUDGEMENT ABOUT THE EFFECTIVENESS OF A COMPLETED PROGRAMME, OFTEN TO ENSURE ACCOUNTABILITY – A SUMMATIVE EVALUATION Did the programme work? Did it contribute towards the stated goals and outcomes? Were the desired outputs achieved? Was implementation in compliance with funding mandates? Were funds used appropriately for the intended purposes? Should the programme be continued or terminated? Expanded? Replicated? Replication, repeat funding stages and/or policy stages: IMPACT ASSESSMENT Assess project fit with other communities. Determine critical elements of the project that are necessary for success. Highlight specific contextual factors that inhibited or facilitated project success. As appropriate, develop strategies for sharing information with policymakers to make relevant policy changes. TO GENERATE KNOWLEDGE ABOUT GOOD PRACTICES What is the assumed logic or theories through which it is expected that inputs and activities will produce outputs, which will result in outcomes that will ultimately change the status of the target population or situation? What types of interventions are successful, and under what conditions? How can outputs or outcomes best be measured? Which lessons were learned? 
Which policy options are available because of programme activities? What is the total impact or return of the intervention for all stakeholders? 126126 SECTION EIGHT GLOSSARY OF TERMS 08 128 GLOSSARY OF TERMS Term Definition Additionality The impact of a programme or intervention that reflects any other or additional impact over and above the change that would have taken place had the programme not been implemented. The difference between the situation with the programme’s impact and what would have happened anyway reveals the programme’s additionality. Assessment Synonym for evaluation, but often used to refer to a technique (e.g. practical assessment) or a mini-study. Attrition Attrition occurs when some units drop out from the sample between one round of data collection and another, for example when people move and can’t be located. Attrition is a case of “unit non-response”. Attrition can create bias in the impact estimate. Analysis Using data collected during research to arrive at results which can be used to present a picture of a project’s impact and outcomes. The analysis should provide insight into the basic principles on which the project has operated. Analysis should include data from a range of sources with appropriate weighting given to each source, depending on the reliability of data. Accountability The process by which balanced and respectful relationships with diverse stakeholders are developed, enabling them to hold a party to account for its commitments, decisions and impact. Accountability is based on ensuring transparency, feedback and participation, through monitoring, evaluation, accountability and learning. Activities A specific action or set of tasks undertaken by programme staff and/ or partners to reach one or more objectives – sometimes called actions, activities, interventions, responses or strategic actions. Assumptions Beliefs about the underlying causes of the current situation, about the connections between changes and about the context or environment in which the change is happening. While there are many definitions for the terms below, the following descriptions reflect the practice of monitoring, evaluation and impact assessment. Term Definition Audit An independent, objective quality assurance activity – sometimes referred to as due diligence – designed to add value and improve an organisation’s operations. It helps an organisation accomplish its objectives by bringing a systematic, disciplined approach to assess and improve the effectiveness of risk management, control and governance processes. Baseline data Initial collection of information to serve as a basis for comparison with information that is gathered subsequently. A baseline is an account of the initial situation or data of a project or programme that can be used to compare changes and impacts over time. Benchmark A reference point or standard against which performance or achievements can be assessed. Beneficiaries The individuals, groups or organisations, whether targeted or not, that benefit directly or indirectly from an intervention, project or programme. Community A local community is a group of people who share a common place of residence and a set of institutions; the word also refers to collections of people who have something else in common (e.g. a national or donor community). Coverage The extent to which a programme or intervention is implemented in the right places (geographic coverage) and is reaching its intended target population (individual coverage). 
Cost-benefit analysis Estimates the total expected benefits of a programme, compared with its total expected cost. It seeks to quantify all the costs and benefits of a programme in monetary terms and assesses whether benefits outweigh costs. Counterfactual What the outcome would have been for participants if they had not participated in the programme. The counterfactual cannot be observed and must therefore be estimated using a comparison group. (An illustrative worked example of how additionality, the counterfactual and cost-benefit analysis fit together is given at the end of this glossary.) Data The raw detailed information gathered for research, learning, evaluation, monitoring and assessment purposes. Data analysis The process of turning raw, detailed information into a synthesised understanding of patterns and trends that are useful for learning about programmes and investment portfolios. Effectiveness The extent to which a programme or intervention has achieved its objectives under normal conditions in a real-life setting. Efficacy The extent to which an intervention produces the expected results under ideal conditions in a controlled environment. Efficiency A measure of how economically inputs (resources such as funds, expertise, time or skills) are converted into results. Empowerment People’s capacity to make choices. In practical terms, it describes a process in which feelings of powerlessness are turned into actions that can achieve changes in social and physical environments. It is a central idea in community development. Evaluation The rigorous, scientifically based collection and analysis of information about programme or intervention activities, characteristics and outcomes that determine the merit or worth of the programme or intervention. Evaluation studies provide credible information to improve programmes or interventions, identifying lessons learned and informing decisions about future resource allocation. Goal A broad statement of a desired, usually longer-term, outcome of a programme or intervention. Goals express general programme or intervention intentions and help guide the development of a programme or intervention. Each goal has a set of related, specific objectives that, if met, will collectively permit the achievement of the stated goal. Impact Long-term, sustainable changes in the conditions of people and/or the state of the environment that structurally reduce poverty, improve human wellbeing and protect and conserve natural resources. Impact value chain A representation of how an organisation or programme achieves its impact by linking the programme to its activities, and the activities to outputs and outcomes. Indicators Criteria or performance measures (units or standards) against which changes can be assessed. Indicators are used to assess or understand whether a programme or portfolio is moving in the right direction towards the results, objectives or targets. Inputs Resources applied to implement activities. Intervention A specific activity or set of activities intended to bring about change in some aspect(s) of the status of the target population (e.g. HIV risk reduction or improving service delivery). Logframe/Logic model A logframe or logic model is a tool for improving the planning, implementation, management, monitoring and evaluation of projects. The logframe is a way of structuring the main elements in a project and highlighting the logical links between them. Methodology The study of methods (the tools of research).
Monitoring Monitoring is the systematic assessment of a programme’s performance over time. It involves the ongoing collection and review of data from multiple sources, and is focused on programme management and implementation aspects. M&E plan A multi-year implementation strategy to collect, analyse and use data needed for programme or project management and accountability purposes. The plan describes a) the data needs linked to a specific programme or project; b) the M&E activities that must be undertaken to satisfy the data needs and the specific data collection procedures and tools; c) the standardised indicators that need to be collected for routine monitoring and regular reporting; d) the components of the M&E system that must be implemented and the roles and responsibilities of different organisations or individuals in their implementation; e) how data will be used for programme or project management and accountability purposes. The plan also indicates resource requirement estimates and outlines a strategy for resource mobilisation. Needs assessments Determine whether existing services are meeting needs, where there are gaps in services and where there are available resources. These are often conducted prior to initiating an evaluation or in response to evaluation findings. Objective A statement of a desired programme or intervention result that meets the criteria of being specific, measurable, achievable, realistic and time-based (SMART). Outcome evaluation A type of evaluation that determines if, and how much, intervention activities or services achieved their intended outcomes. An outcome evaluation attempts to attribute observed changes to the intervention. An outcome evaluation is methodologically rigorous and generally requires a comparative element in its design, such as a control or comparison group, although it is possible to use statistical techniques when control or comparison groups are not available (e.g. to evaluate a national programme). Outcomes The observable positive or negative changes (because of or resulting from an intervention) in the actions and behaviour of people who have been influenced (directly or indirectly, partially or totally) by outputs. These are the observable results and subsequent changes that occur as the result of an intervention. Outputs The processes, products, goods and services that the programme produces through the activities it conducts. Participation (participatory evaluation) Participation refers to involvement of stakeholders in the project, e.g. funders, staff, project participants, local community or local government. A participatory evaluation is one where all these groups have a say in the evaluation process. This may involve planning, carrying out research or deciding how the evaluation is acted upon. The process can increase local people’s involvement in and ownership of the project. The tools used in participatory evaluation will be the same as those used in qualitative research. The space created for open and honest discussion among a range of stakeholders is important. Participants Members of the community concerned about or affected by the project, towards whom the project interventions are directed, who are actively involved in project development, implementation, monitoring and evaluation.
Performance The degree to which an intervention or organisation operates according to specific criteria, standards or guidelines, or achieves results in accordance with stated goals, objectives or plans. Programme An overarching national or geographically specific response to a social issue. A programme generally includes a set of interventions marshalled to attain specific global, regional, country or subnational objectives, involving multiple activities that may cut across sectors, themes and/or geographic areas. Process evaluation A type of evaluation that focuses on programme or intervention implementation, including (but not limited to) access to services, whether services reach the intended population, how services are delivered, client satisfaction and perceptions about needs and services, and management practices. In addition, a process evaluation might provide an understanding of cultural, socio-political, legal and economic contexts that affect implementation of the programme or intervention. Programme evaluation A study that intends to control a social problem or improve a social service. The intended benefits of the programme are primarily or exclusively for the study participants or their community (i.e. the population from which the study participants were sampled); data collected is needed to assess and/or improve the programme or service, and/or the health of the study participants or their community. Knowledge that is generated does not typically extend beyond the population or programme from which data is collected. Programme records Programme documentation (e.g. activity reports and logs) and client records which compile information about programme inputs (e.g. resources used in the programme) and programme outputs (i.e. results of the programme activities). Examples include budget and expenditure records, logs of commodities purchased and distributed, client records which compile information about the time, place, type and amount of services delivered, and about the clients receiving the services. Programme logic model A way of thinking about programmes that includes a theory of change indicating a series of expected results from activities to outcomes. These results are often shown in a diagram, indicating a pathway of change. It can be applied at the project, programme, portfolio or strategic level. Quantitative data Information that can be measured by numbers and quantities and aims to answer the “what” of a programme. The data can consist of numbers and/or words, but in either case they are based on standard questions or measures so that they can be counted. Qualitative data Information that tells us about stories, experiences, opinions and the quality of things, aiming to answer the “how” and “why” of a programme. The data consists of words and observations, things that can be seen, thought and experienced. Qualitative research Qualitative methods are drawn largely from the fields of sociology and anthropology and rely on observation and in-depth study, largely through interviews with key respondents. Reasoning is achieved through building an overall picture by putting together information from different sources. Quantitative research Quantitative methods are based on a more positivist, empirical tradition. Research depends on precise measurements generally achieved through highly structured and controlled means of collecting information.
Reasoning and interpretation are mainly carried out using statistical techniques to test predetermined hypotheses about how key variables might be related. Reflection A process where teams consider and reflect on the lessons of their work – analysing their achievements and why things went well, their challenges and why things did not go so well, and how they would change things to improve. They identify what they will keep doing, what they will stop doing and how or what they will change. Research The investigation or search for knowledge. There are two forms of dominant research methodology: qualitative and quantitative. Relevance The extent to which the objectives, outputs or outcomes of an intervention are consistent with beneficiaries’ requirements, organisations’ policies, country needs, and/or global priorities. Reliability Consistency or dependability of data collected through the repeated use of a scientific instrument or a data collection procedure used under the same conditions. Results The outputs, outcomes or impacts (intended or unintended, positive and/or negative) of an intervention. Return An awareness of the social impact an organisation is achieving that is fed back to a capital provider. While the return has no financial value, the knowledge it gives the socially-motivated capital provider/investor/funder that their capital is actively and effectively driving impact can act as a form of compensation. Returns may also be prospective: an impact or social investment may propose itself to investors because it offers a high social return (while presenting a comparatively low financial return). Results-based management A management strategy focusing on performance and achievement of outputs, outcomes and impacts. Respondent A person who replies to something, especially one supplying information for a survey or questionnaire. Stakeholder A person, group or entity with a direct or indirect role and interest in the goals or objectives and implementation of a programme or intervention and/or its evaluation. Next Generation Consultants - All rights reserved 133 GLOSSARY OF TERMS 134 Term Definition Surveillance The ongoing, systematic collection, analysis, interpretation and dissemination of data regarding a social issue. Surveillance data can for instance help predict future trends and target needed prevention and treatment programmes. Situation assessment Research that collects information needed to plan a project or programme. It can identify things such as the context, major issues, resources and current activities in a community. It can also be referred to as a situation analysis or needs assessment. Target The objective a programme or intervention is working towards, expressed as a measurable value; the desired value for an indicator at a point in time. Target group A specific group of people who are to benefit from the result of the intervention. Theory of change (ToC) A narrative and/or diagram that clarifies what should happen at different levels in policies, practices, programmes ideas and beliefs in short-term, medium-term and long-term timeframes, to contribute to an overall strategic objective and/or programme goal. It is a map for the programme journey and a prediction about the complex web of activity that is required to bring about change. Triangulation Using at least three information sources to make overall findings more robust and increase the validity of the information and analysis. 
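A purely illustrative worked example of how the terms additionality, counterfactual and cost-benefit analysis (defined above) fit together. All figures, and the symbol Y used for the outcome of interest, are hypothetical and introduced here only for illustration:

\[
\text{Additionality} = Y_{\text{with programme}} - Y_{\text{counterfactual}}
\]
\[
\text{e.g. } 65\% \text{ of participants employed} - 50\% \text{ estimated for the comparison group} = 15 \text{ percentage points}
\]
\[
\text{Benefit-cost ratio} = \frac{\text{total expected benefits}}{\text{total expected costs}} = \frac{R4\ \text{million}}{R2\ \text{million}} = 2
\]

Because the counterfactual cannot be observed, the 50% in this sketch would in practice be estimated from a comparison group, as the definitions above note, and the benefit and cost figures would come from expressing the programme’s costs and benefits in monetary terms.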
SECTION NINE
TOOLS
TOOL 1 – DEVELOPING STRATEGY
This template can be used to develop organisational strategy, portfolio strategies or programme strategies.
INPUTS The resources or WHAT you have to carry out the programme, e.g. time, money, skills or technology Insert resources here
OUTPUTS WHAT you do – the specific activities you will undertake, such as meetings, media releases, training or direct services WHO will participate or be reached – for each activity, the people who will be impacted or who will benefit Insert activities here Insert stakeholders here
OUTCOMES WHAT happens or changes as a result of what you do (over time) Insert short-term change here Insert medium-term change here Insert long-term change here
OBJECTIVES Insert objectives here Insert assumptions here Insert targets here
TOOL 2 – DEVELOPING A THEORY OF CHANGE
This template can be used to develop a theory of change for organisational strategy, portfolio strategy or programme strategy. What is the problem you are trying to solve? Who are your key audiences? What is your entry point for reaching your audience(s)? What steps are needed to bring about change? What is the measurable effect of your work? What are the wider benefits of your work? What is the long-term change you see as your goal? What are your assumptions?
QUESTIONS TO ASK ANSWERS TO BE PROVIDED Programme name What is the problem or issue you are trying to solve? Who is the key audience, primary stakeholder or beneficiary group? What is your entry point for reaching the stakeholders? What is the long-term change you see as your goal? What are the desired outputs, outcomes and impacts? What are your assumptions? What are the priorities? Consider your organisational vision, mission, values, mandates, resources or local dynamics. What are the factors that could influence the results, such as budgets, availability, policies, macro-economic conditions, capacity or skills? Which strategies will you employ or consider to effect the change? What possible solutions could there be? Who else is working on the issue?
QUESTIONS TO ASK ANSWERS TO BE PROVIDED Programme name Problem statement Programme summary Programme goals Programme rationale and assumptions Programme resources Programme inputs Programme activities What are the tangible products of our activities? Programme outputs What are the evidenced outcomes as a result of our programme? Programme outcomes What are the evidenced impacts as a result of the programme? Programme impacts What are the documented impact and returns of the programme? Programme returns What is the documented return on investment for the funder?
TOOL 3 – DEVELOPING A LOGIC MODEL FRAMEWORK
This template can be used to map the logical framework for organisational strategy, portfolio strategies or programme strategies. Inputs Activities Outputs Outcomes Impacts Returns
TOOL 4 – DUE DILIGENCE OF ORGANISATIONS
This template can be used during the due diligence or assessment process of an intermediary with a view to funding a specific programme.
Programme detail Organisation name Organisational contact details Portfolio allocation Budget requirement Documentation checklist Trust deed List of trustees and identification documentation List of current donors and investors Organogram (operational teams and volunteers) Financial statements NPO/DSD registration documents Programme financial documentation (budget and expenses) Programme checklist Programme theory of change Programme logic framework model Programme management or implementation plan Programme stakeholder engagement or management plan TOOLS 140 Organisational checklist GOVERNANCE What is the oversight structure and what is the evidence of its effective operation in recent times? Is there an effectively operating audit, risk or remuneration committee? How often are meetings held? Are minutes taken and distributed? Is there evidence of actions being followed through? Does the organisation employ an external auditor? Is there a transparent and competitive process for the selection of an external auditor and members of the board or audit committee? Does the organisation have a legal department? How is compliance with laws and regulations ensured? FRAUD, BRIBERY AND CORRUPTION Is there evidence of formal policies on fraud, bribery and corruption? Is there regular communication and training on staff responsibilities in relation to reporting fraud, bribery and corruption? Is there a corporate level risk framework and associated policy? Is there a risk register that is regularly reviewed? Who reviews it and how often? Is there a network of risk owners responsible for day-to-day risk management? ETHICS What connections (if any) are there between senior members of the organisation and the government or politically exposed persons? Is there a published conflict of interest policy? How are potential conflicts of interest registered and monitored? Is there a published policy on gifts and hospitality? TOOLS Next Generation Consultants - All rights reserved 141 Organisational checklist RISKS Are there any open source materials that highlight concerns or negative reputational risks? Are there any issues linked to the organisation which might be particularly controversial or pose reputational risks for the funder and how might these be tempered? Are there any recurring issues that are continually brought up at board meetings? Evidence of minutes? Is the lifestyle of senior members of the organisation commensurate with their declared salary levels? INTERNAL CONTROLS Are there any observable weaknesses in internal controls? Are there documented policies and procedures? Is there evidence that these are being followed? Is there adequate segregation of duties? Ability to deliver checklist CAPACITY What is the capacity and capability of the organisation to deliver the portfolio of projects (value and complexity) as well as the specific project under review? STAFF COMPETENCY What is the capacity and capability of the senior management team in the organisation? What is the capacity and capability of the staff directly involved with managing the finances of the organisation? Can the organisation absorb the increased volume of activity associated with this grant? What is the capacity and capability of the staff directly involved with the programme? TOOLS 142 Ability to deliver checklist What additional capacity will be required to undertake this programme? How will this be secured, and how quickly? Are there any concerns about the implementation timetable? 
Are senior management positions characterised by high levels of staff turnover? How are people recruited? Is there an open and transparent recruitment process? What mechanisms are available to deal with poor performance? Do managers exercise adequate supervision to ensure that officers to whom they have delegated responsibility are exercising adequate control? Are job descriptions and relevant CVs available for all senior posts? Is there effective leadership? How is it demonstrated? Is there a formal pay scale and who agrees and reviews it? If the organisation works with children (up to 18 years old) or vulnerable adults, does it have adequate policies and procedures to keep these people safe? Programme management checklist Has the organisation implemented a foundation-funded project before? Has the organisation implemented this type of project in the past? What is the risk assessment for this programme? Have significant areas of risk been identified, and how will these be mitigated? What systems are in place to ensure regular monitoring and evaluation of the programme? How is programme risk managed and monitored? TOOLS Next Generation Consultants - All rights reserved 143 Programme outcomes checklist What are the key assumptions behind the programme theory and logic? What evidence exists to support these assumptions? What will be the key challenges in achieving the programme outcomes, impact and return? What are the constraints of the programme? What are the skills and capacity challenges? What are the financial challenges? Are the costings complete, e.g. are there allowances for extra overheads and other costs as the programme is implemented? Are there other “holes” in the budget? What is the scientific evidence of the development model? What are the major strengths or weaknesses of the development model or the programme design? Are there other similar programmes – how does this programme compare regarding design, management, implementation and evaluation? CONCLUSIONS RECOMMENDATIONS ANY OTHER FINDINGS TOOLS 144 TOOL 5 – DESIGNING AN M&E FRAMEWORK FOR PROGRAMMES AND PORTFOLIOS This template can be used to design an M&E framework for programmes. Programme detail Programme objectives Programme outcomes What do you want to evaluate? What is the purpose of the evaluation? What type of evaluation do you want to use? Formative or process evaluation – MONITORING? Summative or outcome evaluation – EVALUATION? End of programme – IMPACT EVALUATION? What data do you need to answer your questions? How do you gather the information? How will you analyse the information? How will you use and share the results? TOOL 6 – DESIGNING AN EVALUATION PLAN FOR A PROGRAMME This worksheet can be used to scope the programme assessment in the programme design and management processes. Programme information Programme name Programme period Programme partners Purpose of M&E Why is this M&E done? What decisions will be made with the information collected? TOOLS Next Generation Consultants - All rights reserved 145 Programme information Programme name Primary users and uses of M&E information Who will analyse, reflect on and make decisions with the information collected? How will the M&E information be used? Scope of M&E What will be included and addressed in this plan and what will not be? Stakeholder participation How will programme participants be involved in M&E activities? How will specific groups, such as children or people with disabilities, be involved? How will any other stakeholders or partners participate? 
Methodology Baseline Is there already a baseline? What is the plan for collecting baseline data? How will baseline data be used? Key evaluation questions Programme-specific: These are high-level questions that help guide the M&E process and provide answers that relate to the purpose defined above. The questions relate to the whole programme and what you (and other stakeholders) would like to know about it. Data collection methods – quantitative and qualitative What methods will you use to collect data? Will you use qualitative methods (e.g. focus groups, stories or observations)? Will you use quantitative methods (e.g. attendance records)? What will you use each for, and why? How will you ensure that the data collected is valid, reliable and comparable? How will you monitor for unintended outcomes, e.g. factors you do not have indicators for? Similarly, how will you monitor for negative outcomes? Cross-cutting themes How will you monitor for cross-cutting themes, such as gender, disability inclusion, disaster preparedness, human rights, equality or race? Data analysis and storage Who will be responsible for analysing the data, and how? How and where will data (soft and hard copies) be stored and secured? Reporting and data use What reports will be produced and how will reports and other data be used? How will information be shared with staff, communities and other stakeholders? Reflections and evaluations What reflections and evaluations are planned for the programme? When are they planned? Will they be internal or external, and for which purposes? Resources Which resources will be needed to develop and conduct M&E activities, input, analyse and store data, build staff capacity and disseminate findings?
TOOL 7 – DATA COLLECTION
This template can be used to consider the data management process for monitoring, evaluation and impact assessment. Programme objective Purpose of the programme Decide why you want to do M&E (benefits) The purpose of M&E for my programme: Decide on the guiding principles The guiding principles for M&E for my programme: Which programme are you going to assess, and when? Which programme or project? Monitoring and/or mid- or end-term evaluation or impact assessment? When? The following are stakeholders in my M&E process Internal stakeholders External stakeholders Others Stakeholders for the design of key questions or issues Stakeholders for the design of a detailed framework, e.g. indicators, data or collection methods Stakeholders for implementation, e.g. who is collecting the data and how Stakeholders for analysis Stakeholders for communication of findings Decide on the key issues and questions you need to investigate Key issue 1 Key issue 2 Key issue 3 Organisational capacity and group processes – how well are they working together in relation to e.g. needed resources, leadership, management, cost-effectiveness or sustainability? Joint working – how well are they working with others in relation to e.g. partnerships; movement-building alliances, coalitions, political allies or disseminating learning? Relevance – how relevant are your projects to different sections of your community? Contribution – what contribution have you made to outcomes and impacts compared to other actors or factors? Clarify aims, objectives and change pathway (impact chain) Objectives or impacts The desired impacts you want to have, e.g.
social, economic, short-term or environmental, or on people’s lives, e.g. improved wellbeing, a fairer and more inclusive community or reduced carbon emissions. Objectives or outcomes The changes you need to make to achieve your aims or impacts, e.g. increased pass rates, increased personal efficiency, income, effectiveness, more sustainable behaviours, increased community capacity, supportive and fair government policies. Objectives or returns The changes you need to make to achieve return on investment, e.g. increased publicity, increased employee or management involvement, improved strategy or alignment. Identify what information you need to collect. Key to how you will achieve this change are: Sustainability indicators Organisational capacity and group process indicators Joint working indicators Relevance indicators Effectiveness indicators Impact indicators Contribution or attribution indicators Open questions We will include the following open-ended questions to track e.g. unintended changes, understand why and how change happens (including your contribution and the plausibility of your change assumptions), and/or understand people’s experiences of change. Decide how you will collect your information Issue 1: Question/indicator Issue 2: Question/indicator Issue 3: Question/indicator We will assess our contribution to the observed changes in the following ways What When How and who We will analyse the information in the following ways What When How and who We will communicate the information in the following ways What When How and who The key audiences we will communicate our findings to will be e.g. community groups, donors or policymakers Audience 1 Audience 2 Audience 3 We will tailor and present the information for different stakeholders and audiences through e.g. graphs and pie charts to simplify the data Audience 1 Audience 2 Audience 3 Ethics and data protection We will gain informed consent from research respondents or participants in the following ways We will ensure the anonymity of research participants by
TOOL 8 – EVALUATION REPORT
This template can be used by intermediaries and must be completed during the evaluation phase that is associated with the implementation of a programme. The template can be expanded on and interpreted in the context of a specific intervention.
CONTENT OF THE EVALUATION REPORT
A standard evaluation report starts with a cover page, a table of contents, a list of abbreviations and acronyms, an executive summary and a matrix of findings, evidence and recommendations. The evaluation report should also contain the following main chapters:
EXECUTIVE SUMMARY
1. Introduction (including a geographic map and pictures)
2. Evaluation findings (supported by evidence)
3. Conclusions
4. Recommendations
5. Lessons learned
THE ANNEXURES TO THE REPORT SHOULD INCLUDE THE FOLLOWING CHAPTERS
1. Terms of reference of the evaluation
2. Evaluation tools
3. Desk review list
4. List of people contacted during the evaluation (anonymised and gender-disaggregated)
The main body of the report should not exceed 25 to 30 pages, depending on the scope of the evaluation exercise (annexures excluded). Annexures should be kept to a minimum (no longer than 15 pages). Only annexures that serve to demonstrate or clarify an issue related to a major finding should be included.
Information should only be included in the report if it significantly affects the analysis and clarifies issues. Rather than repeating, references should be made to annexures or other parts of the report. Sources of information should be referenced in a consistent manner. STRUCTURE OF THE EVALUATION REPORT EXECUTIVE SUMMARY The executive summary should be concise (no more than four pages) and include: 1. Introduction and background 2. Short description of the project evaluated, including its objectives 3. Major findings of the evaluation 4. Main conclusions 5. Major recommendations – there should be a clear illustration of how the recommendations build on the conclusions, which in turn build on the findings 6. Major lessons TOOLS Next Generation Consultants - All rights reserved 151 The executive summary should not be a repetition of the report’s text, but should be drafted in a crisp and clear manner. The objective is to convey the most important information about the evaluation to different audiences. EVALUATION PURPOSE AND EVALUATION QUESTIONS This should set out the overarching purpose of the evaluation and how the findings are expected to inform decisions. This section also describes the evaluation questions (which should be limited to just a few key questions). It can also identify key audiences for the evaluation. PROJECT BACKGROUND Enough information should be provided to give sufficient context. In the executive summary, this section can receive less emphasis than it might in the overall report so that more attention can be paid to the evaluation purpose, design, limitations and findings. In the main report, it should describe the problem the project addresses, and the project logic theory about why it will facilitate better outcomes. This could include a logical framework for the project and the development hypothesis, or causal logic, or theory of change of the project or the programme of which the project is a part. EVALUATION QUESTIONS, DESIGN, METHODS AND LIMITATIONS This section describes the overall design, specific data collection and analysis methods linked to the evaluation questions, and limitations of the data, methods or other issues that affected the findings. FINDINGS AND CONCLUSIONS This section should report the findings based on evidence generated by the evaluation data collection and analysis methods. Findings should be fact-based and not rely only on opinion, even expert opinions. Conclusions are drawn directly from findings and help summarise the “so what” of the findings. Several findings can lead to one or more conclusions. Whenever possible, data should be presented visually in easy-to-read charts, tables, graphs and maps to demonstrate the evidence that supports conclusions and recommendations. SUMMARY MATRIX OF FINDINGS, EVIDENCE AND RECOMMENDATIONS The summary matrix should not include all the findings and recommendations emerging from the evaluation, but only the most significant ones. These should be grouped under two categories – key and important ones. The recommendations should be relevant, actionable and directed to a specific stakeholder or a group of stakeholders. TOOLS 152 Findings Evidence (Sources that substantiate the findings) Recommendations Key recommendations Important recommendations MAIN REPORT INTRODUCTION BACKGROUND AND CONTEXT This subsection should include: a. 
The overall concept and design of the project and include an assessment of its strategy, the planned time and resources and the clarity, logic and coherence of the project document. b. The purpose (objective) and scope (coverage) of the evaluation. c. The composition of the evaluation team. EVALUATION METHODOLOGY This is a statement of the methods used to obtain and collect the data, as well as the approach and methods used to analyse it. This subsection provides the basis for the credibility of the evaluation results. Reference should be made to the annexure encompassing evaluation tools. TOOLS Next Generation Consultants - All rights reserved 153 The evaluation methodology should support the purpose of the evaluation and should be sufficient to answer the evaluation questions posed in the terms of reference (ToR) by creating the conditions for the study’s internal and external validity. The evaluation report should have a logical sequence, from evidence to assessment, findings, conclusion and recommendations. Reference must be made to the desk review, identification of stakeholders, sampling strategy and triangulation. LIMITATIONS TO THE EVALUATION The report should highlight major constraints that had an impact on the evaluation process, e.g. limited field missions due to security constraints, limited budget, limited time and unavailability of major stakeholders for interviews. This section should also include how the limitations were addressed. EVALUATION FINDINGS This section is the most important, as it covers the analysis of information and articulates the findings of the evaluation. It is the longest and most detailed section of the report and should be based on facts. The other sections of the report draw on and make references to it. A finding uses evidence from several sources to allow for a factual statement.   DESIGN This subsection addresses the design of a project by measuring: a. Appropriate participatory needs assessment and context analysis. b. The logical framework approach, with measurable expected objectives, outcomes and outputs, performance indicators (including gender equality, race, disability and human rights), targets, risks, mitigation measures and assumptions. c. The theory of change with clearly defined assumptions, the issue and how it was addressed, overall goal and specific objectives of the initiative, who the initiative was aimed at, which services were rendered and which activities took place, who was involved in providing the services or activities, involvement of other organisations and sectors, ways in which community people or beneficiaries were involved, costs of the programme (staff time in planning and implementation, as well as other costs), how the initiative contributed to return on investment for the funder. RELEVANCE This part should address the relevance of the project in meeting the needs, solving the problems identified and contributing to the specific or relevant programme outcomes and strategic objectives of the programme or portfolio. Relevance is the extent to which the objectives of a project are continuously consistent with recipients’ needs, the funder mandate and overarching strategies and policies. EFFICIENCY Efficiency is a measure of how resources or inputs (funds, expertise, time, etc.) are converted to outputs. 
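A purely illustrative unit-cost calculation of the kind often used to express efficiency; the output measure and figures below are hypothetical and not drawn from this guide:

\[
\text{Unit cost} = \frac{\text{total inputs (programme expenditure)}}{\text{outputs delivered}} = \frac{R1\,200\,000}{400\ \text{learners trained}} = R3\,000\ \text{per learner}
\]

Comparing such a unit cost with the budgeted figure, or with similar programmes, is one way for the report to show how economically inputs were converted into outputs within the planned timeframe and resources.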
The report should indicate the extent to which the planned outputs have been delivered and how they contributed to reaching the objectives, as well as show how the outputs have been delivered in the planned timeframe and with the available resources. This part of the report should also address how the outputs have been implemented, noting any constraints. TOOLS 154 It should examine the following: • The appropriateness of overall institutional and management arrangements, and the impact these had on the implementation and delivery of the outputs. • The support received from the funder and the relevant programme manager. • Whether and how the outputs were monitored during implementation. PARTNERSHIPS AND COOPERATION This part of the report should examine the coordination and collaboration arrangements that have been made with partners and stakeholders. Partnerships and cooperation indicate the level and quality of the funder’s cooperation with external partners and stakeholders, e.g. other donors, NGOs or government, through: a. The extent to which the right partnerships have been identified. b. The extent to which partnerships have been sought and established, and how synergies have been created in the delivery of assistance. c. The extent to which there was effective coordination among partners. d. The extent to which partnerships’ responsibilities were fully and effectively discharged. e. The extent to which partnerships’ inputs were of quality and provided in a timely manner. f. The extent to which the project contributes to the funder’s strategic objectives. EFFECTIVENESS Effectiveness is the extent to which a project achieves its objectives and outcomes. The report should show whether and how the objectives and outcomes have been achieved. When objectives and outcomes have been fully met, the report should show how these contribute to the attainment of the results contained in the funder strategy and the foundation’s theory of change, as well as the portfolio or programme strategy of which the intervention forms part. When some of the objectives and outcomes have not been attained, the report should show what progress has been made towards achieving them. It must be clear how they contribute to the attainment of the results contained in the funder’s strategy and the relevant portfolio or programme strategies and frameworks (including theories of change and logic model frameworks). The report should cover the objectives and outcomes of the project and demonstrate the short- and medium-term effects the project is likely to achieve or have already achieved, e.g. whether the project has made a difference, and how. The report should highlight major constraints and problems that have impacted the implementation and delivery of the project. The aim is to learn from these constraints and avoid them in the future, or find solutions to improve performance. IMPACT This subsection should try to capture the contribution of the intervention under evaluation to positive and negative, primary and secondary long-term economic, environmental, social change(s) produced or likely to be produced by a project, directly or indirectly, intended or unintended, after the project was TOOLS Next Generation Consultants - All rights reserved 155 implemented. Other indicators that were agreed on between the intermediary and the funder must be included. Other indicators that are supported by evidence and confirmed by beneficiaries can also be included. 
SUSTAINABILITY
Sustainability is concerned with measuring whether the benefits of a project are likely to continue after its completion. This subsection should describe the probability of continued long-term benefits and the resilience over time of the net effects of the intervention.

HUMAN RIGHTS, RACE, DISABILITY AND GENDER
This section should address the programming principles required by a human rights-based approach to the interventions, and should identify and analyse the inequalities, discriminatory practices and unjust power relations within the limits of the funder's strategy and mandate. Evaluating human rights, disability, race and gender requires paying attention to which groups benefit from and which groups contribute to the intervention under review. Groups need to be disaggregated by the relevant criteria: disadvantaged and advantaged groups, depending on their race, disability, gender or status.

INNOVATION
This subsection should deal with the extent to which:
a. A project deviates from its planned activities and outputs to initiate efficient and effective innovative practices.
b. New practices (methods, procedures or devices) are introduced, piloted and disseminated.

RETURN ON INVESTMENT
This subsection should deal with:
a. The benefits the funder or investor derived from funding the programme.
b. The specific benefits highlighted by intermediaries and beneficiaries.
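Where benefits can be credibly monetised, the return to the funder is often summarised as a ratio of benefits to funds invested. The sketch below is a minimal, hypothetical illustration of such a calculation; the figures are invented and the formula is not the guide's prescribed method.

```python
# Minimal sketch: expressing return on investment as a simple ratio of
# monetised benefits to the funds invested. Hypothetical figures only.

investment = 1_000_000           # total funding provided by the investor
quantified_benefits = 1_400_000  # monetised value of the benefits reported

roi = (quantified_benefits - investment) / investment
benefit_cost_ratio = quantified_benefits / investment

print(f"ROI: {roi:.0%}")                                # 40% return on funds invested
print(f"Benefit-cost ratio: {benefit_cost_ratio:.2f}")  # 1.40 of value per unit invested
```

How benefits are monetised, and which of them can credibly be attributed to the programme, should follow whatever was agreed between the funder and the intermediary.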
CONCLUSIONS
The report must draw overall conclusions based on the evaluation findings. Conclusions should add value to the findings and should draw on the data collection and analyses undertaken, through a transparent chain of arguments. Conclusions point out the successes and failures of the evaluated project, with special attention paid to the intended and unintended results and impacts, and more generally to any other strengths or weaknesses. There must be a clear link between the findings, conclusions and recommendations.

RECOMMENDATIONS
This part of the report should provide clear, useful, timebound and actionable recommendations aimed at enhancing project performance and improving the sustainability of results. The report should clearly present recommendations aimed at, for example, improving project design, programme delivery and overall programme management. The recommendations should build on the conclusions, which in turn build on the findings. Each recommendation should clearly indicate the action to be undertaken or the decision to be made, as well as the person to whom the recommendation is addressed (assigning responsibility).

LESSONS LEARNED
Lessons learned are generalisations based on evaluation experiences with projects, programmes or policies that abstract from the specific circumstances to broader situations. Frequently, lessons highlight strengths or weaknesses in preparation, design and implementation that affect performance, outcomes and impact. Lessons are a key component of any knowledge management system and are important for continuously improving performance. Sometimes these lessons are derived from successes and sometimes from areas where there is room for improvement. The purpose of a lesson is to see what works and what does not. Lessons can be success stories that should be repeated, or they can be areas in which change towards improvement must take place. They can offer advice on how to improve processes (how things were done) or products (outputs).
The evaluation report should focus on the most important lessons, especially those with wider applicability and those with the following characteristics:
a. The lessons learned from a specific project should highlight the strengths and weaknesses in preparation, design and implementation that affect performance, outcomes and impact. They should also be applicable to other projects, programmes and policies, and have the potential to improve future actions.
b. Lessons should be based on the findings and evidence presented in the report.
c. Lessons should not be written as recommendations, nor as observations or descriptions.

VI. ANNEXURES
The annexures need to include:
a. The full terms of reference of the evaluation.
b. The evaluation tools (including questionnaires and interview guides).
c. The desk review list and a list of people contacted during the evaluation (in an aggregate manner, with no names included and with gender- or race-disaggregated data).
d. Any other information relevant to the evaluation.

NEXT GENERATION CONTACT INFO
+27 11 593 2316
+27 83 440 0654
rrossouw@nextgeneration.co.za
www.nextgeneration.co.za