8 Monitoring and Evaluation

8.1 Chapter 8 sets out the approach to monitoring and evaluation, including different types of evaluation and their uses before, during and after implementation.

8.2 Monitoring and evaluation should be part of the development and planning of an intervention from the start. They are important to ensure successful implementation and the responsible, transparent management of public resources. Guidance on conducting evaluation is contained in the Magenta Book.

8.3 Evaluation is a systematic assessment of an intervention’s design, implementation and outcomes. It involves:

• understanding how an intervention is being or has been implemented, what effects it had, for whom and why

• comparing what happens with what was expected under Business As Usual (the appropriate counterfactual)

• identifying what can be improved, estimating overall impacts and cost-effectiveness (a worked sketch of this comparison follows this list).
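
As a purely illustrative aid, not part of the Green Book itself, the sketch below expresses this counterfactual comparison in a few lines of Python; every figure and variable name is hypothetical.

```python
# Illustrative only: hypothetical figures showing how an observed outcome is
# compared against the Business As Usual (BAU) counterfactual to estimate
# attributable impact and a simple cost-effectiveness measure.

observed_outcome = 1_250     # e.g. households supported after the intervention
bau_counterfactual = 1_000   # households expected anyway under BAU
intervention_cost = 500_000  # total intervention cost in GBP

# Impact attributable to the intervention is the difference between what
# happened and what was expected to happen anyway under BAU.
attributable_impact = observed_outcome - bau_counterfactual

# Cost-effectiveness: cost per additional unit of outcome achieved.
cost_per_extra_household = intervention_cost / attributable_impact

print(f"Attributable impact: {attributable_impact} additional households")
print(f"Cost per additional household: £{cost_per_extra_household:,.0f}")
```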

8.4 When used properly, evaluation can inform thinking before, during and after implementation as set out in Box 21.

8.5 It is important to incorporate consideration of monitoring and evaluation into the development, design and appraisal stage of a policy, programme or project. Pilots can be used to test the effectiveness of a policy and establish what works. Policies can also be designed with inbuilt variation to test the effectiveness of different approaches in real time. Some implementations can also benefit from controlled experimental methods or from phased pilot roll-outs in which adaptation and learning about what works are part of the programme.

Box 21. Uses of Evaluation

During Implementation – Monitoring allows improved management and adaptation of implementation in response to evidence from live data collection and analysis, and informs subsequent operational delivery.

• Is the intervention being delivered as intended?

• Is the intervention working as intended?

After Implementation – Evaluation provides an assessment of the outcome of the intervention and a summative assessment of the lessons learned throughout design and delivery.

• How well did the intervention meet its SMART objectives?

• Were there unexpected outputs and outcomes?

• Were costs, benefits and delivery times as predicted at approval?

• Was delivery achieved as expected and were any changes needed?

• What can be learnt for future interventions?

8.6 Evaluation is often broken down as follows:

• Process Evaluation – involves assessing whether an intervention is being implemented as intended within its cost envelope, whether the design is working, what is working more or less well and why. It supports understanding of the internal processes used to deliver outputs, alongside what was actually delivered and when.

• Impact Evaluation – involves an objective test of what changes have occurred, the extent of those changes, an assessment of whether they can be attributed to the intervention and a comparison of benefits to costs. It supports understanding of the intended and unintended effects of outputs, as well as how well SMART objectives were achieved (the attribution step is illustrated in the sketch below).
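
The attribution step in impact evaluation can be made concrete with a difference-in-differences comparison, one of the designs discussed in the Magenta Book. The sketch below is a minimal illustration using hypothetical numbers, not a prescribed method.

```python
# Hypothetical difference-in-differences sketch: the change attributable to an
# intervention is the change in the treated group net of the change seen in a
# comparison group over the same period.

treated_before, treated_after = 54.0, 63.0        # outcome measure, intervention group
comparison_before, comparison_after = 55.0, 58.0  # same measure, comparison group

treated_change = treated_after - treated_before           # 9.0
comparison_change = comparison_after - comparison_before  # 3.0

# Assumes both groups would have moved in parallel absent the intervention
# (the "parallel trends" assumption).
attributable_effect = treated_change - comparison_change  # 6.0

print(f"Estimated effect attributable to the intervention: {attributable_effect:+.1f}")
```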

8.7 Regulations may require post-implementation reviews (PIRs), which are closely related to policy evaluations. The aim is to review regulations at timely intervals to assess whether they are still necessary, whether they are having the intended effects and what the costs to business are. PIRs will generally focus on measures with significant impacts on business and should be conducted proportionately, supported by appropriate monitoring and evaluation. Better Regulation guidance provides more information on conducting PIRs.

8.8 The planning of monitoring and evaluation for spending proposals should follow the HM Treasury Business Case guidance for both programmes and projects. This allows a wide range of analytical and logical thinking tools to be used when initially considering the objectives and potential solutions. Planning and provision of resources for monitoring and evaluation should be proportionate when judged against the costs, benefits and risks of a proposal, both to society and the public sector.

8.9 Monitoring and evaluation typically use a mixture of qualitative and quantitative methodologies to gather evidence and understand different aspects of an intervention’s operation. Surveys, interviews and focus groups may be needed to understand the views of a wide range of stakeholders, and evaluation questions should reflect the immediate need to manage and assess the success of an intervention. Evaluation is important because:

• it facilitates transparency, accountability and development of the evidence base

• it can be used to improve current interventions

• it expands learning of ‘what works and why’ to inform the design and planning of future interventions.

8.10 Building monitoring and evaluation into the design of a proposal, and building the necessary resources into the proposal, supports provision of timely, accurate and comprehensive data. Data collection should be done alongside the monitoring of costs, either within the intervention itself or as part of the organisation’s wider cost monitoring (a minimal sketch of such a combined record follows this list). Well designed data collection:

• ensures monitoring and evaluation can take place

• allows relatively minor adjustments to be made to the implementation design, which can greatly improve the delivery of benefits

• supports provision of high-quality evaluation evidence and reduces the likelihood of having to attempt data collection retrospectively

• where creation of a natural comparison group is possible as part of the implementation, allows valuable insights into what works and why

• informs management during implementation, enabling identification of threats to delivery.
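
As a minimal sketch of collecting delivery data alongside cost data, the hypothetical record type below holds both in one place; the field names and figures are illustrative assumptions, not Green Book requirements.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class MonitoringRecord:
    """Hypothetical per-period record collecting delivery and cost data together."""
    period_end: date
    outputs_delivered: int  # units delivered in the period
    spend: float            # cost incurred in the period (GBP)
    notes: str = ""         # e.g. threats to delivery identified

records = [
    MonitoringRecord(date(2024, 3, 31), 120, 95_000.0),
    MonitoringRecord(date(2024, 6, 30), 140, 101_500.0, "supplier delay resolved"),
]

# Unit cost to date combines the cost and delivery data in one view.
unit_cost = sum(r.spend for r in records) / sum(r.outputs_delivered for r in records)
print(f"Unit cost to date: £{unit_cost:,.0f}")
```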

8.11 Monitoring and evaluation objectives should be aligned with the proposal’s intended outputs, outcomes and internal processes, although they may also be wider. Policies and programmes that involve a series of related sub-programmes must also be subject to monitoring and evaluation in programme terms during and after implementation.

8.12 SMART objectives should be objectively observable and measurable, and their design should take into account monitoring and evaluation processes. Their suitability for use in monitoring and evaluation is a necessary condition for inclusion as SMART objectives (Chapter 4). Without verifiable and measurable objectives, success cannot be measured, proposals will lack focus and they will be less likely to achieve Value for Money.
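
To illustrate what “objectively observable and measurable” means in practice, the hypothetical sketch below checks that a candidate objective carries an indicator, baseline, target and date before it is accepted for monitoring and evaluation; none of these field names are taken from the Green Book.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CandidateObjective:
    """Hypothetical record for a candidate SMART objective."""
    description: str
    indicator: Optional[str]    # measure used to observe progress
    baseline: Optional[float]   # value under Business As Usual
    target: Optional[float]     # value the intervention should achieve
    target_date: Optional[str]  # when the target should be met (ISO date)

def is_measurable(obj: CandidateObjective) -> bool:
    # Usable for monitoring and evaluation only if every element needed to
    # verify the objective objectively is present.
    return None not in (obj.indicator, obj.baseline, obj.target, obj.target_date)

vague = CandidateObjective("Improve local air quality", None, None, None, None)
smart = CandidateObjective("Reduce roadside NO2", "annual mean NO2 (ug/m3)", 48.0, 40.0, "2027-12-31")

print(is_measurable(vague))  # False: success could not be measured
print(is_measurable(smart))  # True: verifiable and measurable
```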

8.13 Data on Business As Usual, along with continuing data collection, is vital to manage delivery and monitor the intervention during and after implementation. Monitoring and evaluation should examine what happens compared to:

• the objectives expected at the outset, in the business case or impact assessment if available

• the BAU situation at the start of implementation (see the sketch after this list).
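
A minimal sketch of such a comparison, using hypothetical numbers: live monitoring data is read against both the BAU baseline recorded at the outset and the objective approved in the business case.

```python
# Hypothetical monitoring comparison: live data against the BAU baseline and
# the objective set at approval.

bau_baseline = 1_000     # level recorded under BAU at the start of implementation
approved_target = 1_400  # objective expected at the outset (business case)
latest_observed = 1_150  # most recent monitoring data point

change_since_bau = latest_observed - bau_baseline
progress_to_target = change_since_bau / (approved_target - bau_baseline)

print(f"Change since BAU baseline: {change_since_bau:+d}")
print(f"Progress toward approved objective: {progress_to_target:.0%}")
```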

8.14 In terms of the Five Case Model, a core set of questions to consider is set out in Box 22.

A more detailed set of evaluation questions can be found in the Magenta Book.

Box 22. Core Evaluation Questions

To what extent were the SMART objectives achieved and by when, in particular:

• to what extent were outputs delivered and when?

• to what extent were the anticipated outcomes produced and by when?

• what continuing change is expected as a result of the above?

• how well did the process of delivering the outputs and outcomes work?

• were there significant unintended effects?

• what social value was created as defined in the economic dimension?

• what level of confidence can be attributed to the estimates of impact, including social value? (see the sketch after this box)

• what was the cost to the public sector as defined in the financial dimension?
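
As an illustration of the confidence question only, the sketch below puts a conventional 95% confidence interval around a hypothetical impact estimate; the figures are invented and the normal approximation is an assumption.

```python
# Hypothetical sketch: a 95% confidence interval around an estimated impact,
# one way of expressing the level of confidence in an impact estimate.

estimated_impact = 6.0  # e.g. from a difference-in-differences design
standard_error = 2.1    # estimated from the evaluation data

z = 1.96  # normal approximation for a two-sided 95% interval
lower = estimated_impact - z * standard_error
upper = estimated_impact + z * standard_error

print(f"Estimated impact: {estimated_impact:.1f} (95% CI {lower:.1f} to {upper:.1f})")
```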

8.15 Monitoring and evaluation evidence and reports should be actively owned by the Senior Responsible Officer and the team responsible for an intervention’s delivery. Data and findings should be reported regularly, and reports should be timed to correspond to decision points where they can be of maximum use. Major findings should also be reported to the organisation’s Accounting Officer and to the relevant external approving organisation.

8.16 Evaluation reports, and the research that informs them, should be placed in the public domain in line with government transparency standards and the Government Social Research Publication Protocol, subject to appropriate exemptions.
