Improving evaluation across the sector

Introduction

This section of the Fair Access Toolkit provides links to a wide range of support materials, guidance documents, frameworks and tools that you can use to help design and deliver evaluations of widening access activities and interventions. By signposting useful resources, the Toolkit aims to raise awareness of the importance of high-quality evaluation and improve understanding of how to achieve it. It is not intended to turn practitioners into professional evaluators – you may still want to get expert help – but we hope that the resources collected here will help practitioners make more confident decisions about if, when and how to evaluate, and feel better equipped to design and deliver small-scale evaluations in particular. If you decide to commission someone else to carry out an evaluation for you, the resources here should help you develop your specification and assess proposals.

Some of the materials signposted are designed specifically for evaluating widening access initiatives. Others are more generic good practice guides. Some are designed for other sectors, but are likely to be useful to widening access practitioners.

If you are aware of a resource that you think would be a useful addition to the Toolkit, please get in contact.

Complete evaluation guides

The Education Endowment Foundation (EEF) DIY Evaluation Guide is designed for teachers wanting to undertake small-scale evaluations of interventions. Although the context is different, it provides accessible guidance on all stages of evaluation, including preparation (the questions you want to answer, choice of measures and selection of a comparison group), implementation, analysis and reporting.

The Magenta Book is HM Treasury’s comprehensive guidance on evaluation. It provides general information on what makes a good evaluation as well as more detailed guidance on planning and undertaking evaluations, including data collection and reporting. There is also supplementary guidance on particular types of evaluation, such as designing impact evaluations.

The Higher Education Academy offers a series of toolkits for practitioners on outreach to widen participation in higher education, which includes a volume on Evaluation. Aimed at widening participation managers, the toolkit covers understanding and planning your evaluation, sourcing and analysing data (both new and existing) and reporting findings.

Guides and toolkits for evaluating specific types of interventions

The Office for Students’ Financial support evaluation toolkit includes tools to help higher education providers assess the impact of financial support on student success. It includes a set of survey questions, a semi-structured interview framework and a framework for statistical analysis as well as guidance on using the tools and interpreting the results.

The Student Engagement Partnership (TSEP) has published Evaluating Student Engagement Activity, an evaluation framework with accompanying guidance and worked examples.

Getting started: planning your evaluation

To ensure good quality evidence, evaluations need careful thought and preparation. As many of the guides signposted here highlight, to be effective, evaluation needs to be considered from the start and as part of the process of designing a new intervention. This is because the design of the intervention can often influence or constrain options for evaluation.

The first section of the Magenta Book (see above) is aimed at policy makers rather than analysts, and covers the benefits of evaluation, the questions that different types of evaluation can help you answer, and practical considerations when planning an evaluation.

One way to consider whether and how to evaluate your programme or policy is to conduct an Evaluability Assessment. This working paper from What Works Scotland sets out the core stages of such an assessment, including engaging stakeholders and developing a common understanding of intervention goals.

Developing a framework for your evaluation

When developing an evaluation, it helps to have a clear and common understanding among key stakeholders about what your intervention or activity is intended to achieve and how. This can be done by developing a theory of change or a logic model. As well as helping you communicate your intervention to others, such a model can help you approach your evaluation planning in a systematic way, by identifying the things that need to happen for your intervention to be effective and that you may want to measure or test as part of your evaluation.
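
As a simplified, purely hypothetical illustration, a logic model for a campus visit programme might link: inputs (staff time, funding and student ambassadors); activities (visit days for pupils from target schools); outputs (the number of pupils attending); intermediate outcomes (improved knowledge of higher education and greater confidence about applying); and longer-term outcomes (increased applications to, and entry into, higher education). Each link in the chain suggests something you could measure or test as part of your evaluation.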

New Philanthropy Capital’s Creating your theory of change is a practical guide covering the benefits of a theory of change and how to create and present your theory.

Logic mapping: hints and tips was commissioned by the Department for Transport to inform better transport evaluations, and provides helpful, plain-English advice and guidance on developing a logic model.

The Evidence Based Practice Unit (EBPU) Logic Model is a template for developing a logic model for complex interventions. Developed for child mental health, it can be used with any intervention. It comprises a blank template for the EBPU logic model, a step-by-step guide on completing the model and a worked example.

Project Oracle also provides templates for creating a theory of change, along with guidance, checklists and monitoring and evaluation plan templates.

Logic models can also be a helpful starting point for determining the different kinds of data that you will need to collect as part of your evaluation. The Higher Education Funding Council for England (HEFCE) commissioned an in-depth study to develop a Student Opportunity Outcomes Framework. The aim was to support the enhanced evaluation of funding to widen participation in higher education. The resulting report provides high-level logic models for broad categories of widening participation activity (outreach, retention, student success and supporting disabled students) along with suggested sets of indicators and metrics for each element in the logic chain.

The NERUPI Framework offers an alternative, practice-based approach to identifying clearly defined aims and outcomes to underpin activity design and evaluation.

What kind of evidence is best?

When thinking about how to carry out your evaluation and what methods to use, you will want to provide the best quality evidence possible. However, there is debate and sometimes confusion about what kinds of evidence are best.

What counts as good evidence?, a provocation paper from the Alliance for Useful Evidence, explores this issue and the role of hierarchies of evidence, which aim to help classify the quality of evidence.

In recent years, a number of different types of evidence hierarchies or standards have been developed for different sectors and purposes. Mapping the Standards of Evidence used in UK social policy provides an overview of these.

The Office for Students has published Standards of Evidence for Higher Education access and participation activities. The aim is to promote a more rigorous approach to carrying out and using impact evaluation. The standards are accompanied by a self-assessment tool that can be used to assess the quality of your evaluation plans and methods and identify areas for improvement. There is also guidance on strengthening evaluation, including case studies and links to further resources. The guidance is designed for people who already have some experience of evaluation and want to make their work more robust.

Reusing existing data

Colleges, universities and other administrative bodies already hold lots of useful data about students. This provides a potentially valuable resource for evaluating the impact of interventions on key outcomes such as participation in higher education, attainment and retention.

While not focusing specifically on evaluation, From Bricks to Clicks explores how higher education providers can use data analytics to better support their students. It introduces concepts such as big data and learning analytics, considers ethical and security issues, provides an overview of relevant data sources and provides examples of analytics in use.

Using Data offers a series of case studies on using data and an evidence-based approach to improve transition, induction and retention in higher education for STEM students.

The Higher Education Statistics Agency (HESA) collect, process and publish data on all aspects of higher education, including students and graduates. As well as publications and open data (including performance indicators), they can provide custom data extracts and analytical reports.

UCAS, the Universities and Colleges Admissions Service, also provide statistics on undergraduate applications. Their STROBE service can provide information on the application history of named individuals, allowing you to track and assess the impact of interventions. However, a substantial proportion of undergraduates studying in Scotland (mainly those in colleges) do not apply through UCAS and so are not included in their statistics.

Higher education institutions in England are increasingly using tracking databases, such as HEAT (Higher Education Access Tracker) to link data on intervention participants with administrative data, such as school attainment and enrolment in higher education. This allows institutions to evaluate longer-term outcomes.

Collecting new data – survey research

You may need to collect new types of data to inform your evaluations, for example, by designing and conducting surveys.

How to… develop a questionnaire, produced by the National Foundation for Educational Research (NFER), introduces the design of questionnaire surveys, and includes information on different question types and the pros and cons of online versus paper surveys.

The Widening Participation Research and Evaluation Unit at the University of Sheffield have published a range of resources on Good practice when designing evaluation questionnaires. They also offer questionnaires that can be used with young people and their teachers for evaluating one-off or stand-alone activities.

Managing and sharing data

It is important that you are aware of the responsibilities you have for protecting any data you collect on your participants, as well as the options for managing and sharing this data.

The General Data Protection Regulation (GDPR) creates new duties on data controllers to ensure that data subjects have control over how their data is used. Many of the resources linked here pre-date the GDPR, so it is important that you check them against current legislation. The Information Commissioner’s Office provides comprehensive guidance and resources on the topic.

Once you have invested in collecting data, you may want to make it available for reuse in future. Managing and sharing data, a guide from the UK Data Archive, sets out the benefits of sharing research data and provides detailed guidance, with examples of good practice in documenting, formatting and storing your data. This is accompanied by a wealth of companion material, such as template consent forms.

Experimental and quasi-experimental methods

To understand whether your intervention has ‘worked’ – whether it has achieved its aims of delivering better outcomes for students – you will most likely want to use some form of quantitative evidence (outcome information that can be measured numerically, such as rates of participation in higher education). The resources below provide guidance on the types of quantitative evidence that are the most robust.

A particular challenge for evaluation is being able to say with confidence that any changes observed were caused by a particular intervention. While we might have evidence of a change happening, there are often a number of factors besides the intervention that could have caused it. Quality in policy impact evaluation by HM Treasury explores this issue and provides guidance on the types of evaluation design that are best suited to attributing any changes measured to the intervention being investigated.

Being able to provide evidence of a causal effect of an intervention is a requirement for meeting Level 3 of the standards of evaluation practice set out by the former OFFA (Office for Fair Access) – see The Evaluation of the Impact of Outreach for more information.

Arguably the best way to attribute impacts to your intervention is through a randomised controlled trial (RCT) – although these can be challenging to implement. Test, Learn, Adapt: Developing public policy with randomised controlled trials, written by the Behavioural Insights Team, gives a good overview of the topic, debunks some myths and includes real-world examples.

Carole J Torgerson and David J Torgerson’s Randomised trials in education: an introductory handbook outlines the main issues an evaluator needs to consider when designing and conducting a rigorous RCT.

NFER’s How to… Run randomised controlled trials is aimed at senior leaders and teachers in schools but provides a useful introduction to and practical guidance on running trials.

They have also produced A guide to running randomised controlled trials for educational researchers, a more detailed and technical guide for researchers.

Randomised controlled trials seek to eliminate systematic differences between the group receiving an intervention (the treatment group) and a comparator or control group. There are other ways to achieve a similar effect, such as propensity score matching and regression discontinuity design.
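
To illustrate the basic logic of random assignment, here is a minimal sketch in Python. It randomly allocates a hypothetical cohort of pupils to a treatment group and a control group and compares higher education participation rates between the two. All names and outcome data are invented placeholders, and a real trial would also need appropriate sample sizes, consent and statistical testing.

import random

random.seed(42)  # fix the seed so the allocation can be reproduced

# Hypothetical cohort of eligible pupils (placeholder identifiers)
eligible_pupils = [f"pupil_{i}" for i in range(200)]
random.shuffle(eligible_pupils)

treatment = set(eligible_pupils[:100])  # offered the outreach intervention
control = set(eligible_pupils[100:])    # 'business as usual' comparison group

# Placeholder outcome data: in practice this would come from follow-up
# tracking or administrative data (for example, enrolment records)
entered_he = set(random.sample(eligible_pupils, 70))

treatment_rate = len(treatment & entered_he) / len(treatment)
control_rate = len(control & entered_he) / len(control)

print(f"Participation rate (treatment): {treatment_rate:.1%}")
print(f"Participation rate (control):   {control_rate:.1%}")
print(f"Estimated difference:           {treatment_rate - control_rate:+.1%}")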

Another option is to use Synthetic controls. This working paper from What Works Scotland suggests a way of providing a comparison for areas in which an intervention is being implemented: a synthetic control is derived using data on past trends in potentially comparable areas.

For all these methods you will likely need specialist evaluation expertise at an early stage.

Qualitative methods

To better understand why your intervention is or isn’t working, or to help you improve it, you may want to use qualitative evaluation methods. The resources below provide guidance on collecting and analysing qualitative data (that is, narrative or descriptive information, such as students’ feelings about higher education).

Qualitative data can be collected through surveys but interviews and focus groups will often provide more detailed insights. The University of Sheffield provide some practical tips to consider when planning evaluation focus groups and interviews.

NFER’s How to… use focus groups offers further detail, including developing questions, identifying a sample and managing a group.

Communicating findings

Once you have collected and analysed data to evaluate your intervention, you will want to share what you have learnt. It is important to think about the different interests, motivations and time constraints of different audiences and tailor your outputs accordingly.

How to… write up your research by NFER offers tips on describing qualitative and quantitative data and what to cover in a research report.

The Canadian Health Services Research Foundation’s Reader-Friendly Writing suggests a 1:3:25 approach – a one-page outline of the key messages, a three-page executive summary and 25 pages for the method and detailed findings.

Visualising data can be an effective and engaging way to communicate your findings. The Office for National Statistics provides clear and helpful guidance on Data Visualisation, including creating charts and tables and best practice in using colour. The Government Statistical Service also provides guidance on Communicating Statistics, including writing about statistics and designing effective tables and graphs.
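
As a small, illustrative example of the kind of chart these guides discuss, the sketch below uses Python and the matplotlib library to plot higher education entry rates for a programme group and a comparison group. The figures and file name are invented for the purposes of the example.

import matplotlib.pyplot as plt

# Invented figures for illustration only
groups = ["Comparison group", "Programme participants"]
entry_rates = [32, 41]  # percentage entering higher education

fig, ax = plt.subplots(figsize=(6, 4))
ax.bar(groups, entry_rates, color=["#9e9e9e", "#2c7fb8"])
ax.set_ylabel("Entry to higher education (%)")
ax.set_title("Higher education entry rates by group (illustrative data)")
ax.set_ylim(0, 100)
for spine in ("top", "right"):  # remove chart clutter
    ax.spines[spine].set_visible(False)
fig.tight_layout()
fig.savefig("entry_rates.png", dpi=150)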

Stephanie Evergreen and Ann K. Emery have produced a Data Visualization Checklist to help ensure that data visualisations have high impact.

Using evidence

Producing more and better evidence is only of value if it is used to inform decision-making and practice.

Produced by the Alliance for Useful Evidence, Using Research Evidence: A Practice Guide aims to foster intelligent demand for research evidence from non-researchers. It makes the case for evidence-based decision-making and covers how to select the most appropriate evidence and judge the quality of research.

Scaling-up Innovations from What Works Scotland pulls together evidence from multiple fields and sectors on how small-scale innovation can be scaled up to create transformational change.