The method used to develop the Toolkit is outlined in detail below.
Search of academic journals and online sources
A systematic search for literature was conducted using academic databases. We used an agreed set of search terms for each intervention to be reviewed. Each set included synonyms for the intervention, the stage of the learner journey it seeks to influence, target groups, and the type of evidence sought. Searches used combinations of terms from each set. We filtered results to include only evidence that is:
- Written in English
- Published since 2000
- Based on empirical evidence
- From studies located in the UK, North America, Australia or Western Europe.
Bibliographies were scanned for relevant references to minimise the risk of missing key sources of evidence.
Grey literature and the call for evidence
The searches for academic literature were supplemented by a search for publicly available ‘grey’ literature, including evidence provided in response to our call for evidence.
To ensure we included evidence from the Scottish context, we held a call for evidence between May and November 2018. This call was publicised to Scottish higher education providers and other organisations who provide or fund activities and interventions designed to widen access to higher education. The call for evidence took the form of a short online questionnaire and organisations also had the option of sending us evaluation reports directly.
The information submitted to the call covered a wide range of interventions and took a variety of forms. Wherever possible, we included evidence in our reviews of interventions. In some cases submissions did not provide sufficiently detailed evidence or did not focus on evaluation of impact. The submissions were nevertheless useful in understanding the costs of interventions and provided useful examples which have been used as illustrative case studies in the Toolkit summaries.
Selection of material
From the search results, titles and abstracts were reviewed to select the most appropriate articles and reports based on their relevance to the overarching aims of the Toolkit and the selected interventions.
We prioritised meta-analyses and systematic reviews as these encompass numerous studies and involve some assessment of which studies to include/exclude based on quality. Beyond this, we prioritised evaluations that adopted the most robust methods (see assessing strength of evidence below) and evaluations of Scottish interventions.
From the material identified in the refined list of search results and from the call for evidence, we created an annotated bibliography to summarise key information about each source of evidence. This included the following:
- Country of origin;
- Intervention or activity evaluated;
- Target groups on which the activity was tested;
- Internal validity – strength of the evaluation methods used;
- External validity – how relevant the findings are to the Scottish higher education context;
- Cost of intervention, where stated;
- Impact of intervention on a core set of key outcomes;
- Effect size, where stated or where it could be calculated;
- Other relevant findings based on empirical evidence.
The annotated bibliographies were then used as the basis for writing up a summary of each intervention. For each intervention we assessed:
- The strength of evidence;
- The cost of the intervention; and
- The direction of impact.
Assessing the strength of evidence
The standards of evidence used to assess the strength of individual studies are:
Level 1: Evidence of impact on those receiving an intervention treatment based on quantitative and/or qualitative data but without any comparator.
Level 2: Evidence of impact on those receiving an intervention treatment, though this does not necessarily establish any direct causal effect. This could be based on quantitative evidence with an appropriate comparator, such as a pre-/post-treatment change or a treatment/non-treatment difference.
Level 3: An evaluation methodology which is more likely to provide evidence of a causal effect of an intervention. Such methods use an appropriate control or comparison group and try to take account of unobserved as well as observed reasons why this group may differ from those receiving the treatment. Methods which meet level 3 include randomised controlled trials and quasi-experimental methods such as difference-in-differences and regression discontinuity design.
To provide an overall indication of the strength of evidence on a particular intervention we classified interventions according to the number and type of studies identified. We used the classification scale below to do this.
| Classification | Criteria |
| --- | --- |
| Very Extensive | Three or more recent systematic reviews or meta-analyses of studies at level 3 in a UK, North American, Australian or Western European context. |
| Extensive | At least one systematic review or meta-analysis OR ten or more individual impact studies at level 3 in a UK, North American, Australian or Western European context. |
| Moderate | At least five individual impact studies at level 3 in a UK, North American, Australian or Western European context. |
| Limited | At least one study at level 3 in a UK, North American, Australian or Western European context. |
| Very Limited | Recent impact evaluation studies at level 1 and 2 only, in a UK, North American, Australian or Western European context. |
The classification takes into account the fact that the evidence base for widening participation initiatives, particularly those relating to retention and success, is still developing, and that systematic reviews and meta-analyses remain relatively rare in this field.
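The classification scale amounts to a simple banding rule on the number and type of studies found. As a minimal sketch of that rule (the function and parameter names are illustrative, not part of the Toolkit itself):

```python
def classify_strength(recent_reviews: int, level3_studies: int,
                      recent_level1_2: bool) -> str:
    """Band the evidence base for an intervention using the Toolkit scale.

    recent_reviews:  recent systematic reviews or meta-analyses of level 3 studies
    level3_studies:  individual impact studies at level 3
    recent_level1_2: whether any recent level 1 or 2 impact evaluations exist
    All counts assume a UK, North American, Australian or Western European context.
    """
    if recent_reviews >= 3:
        return "Very Extensive"
    if recent_reviews >= 1 or level3_studies >= 10:
        return "Extensive"
    if level3_studies >= 5:
        return "Moderate"
    if level3_studies >= 1:
        return "Limited"
    if recent_level1_2:
        return "Very Limited"
    # Case not defined in the Toolkit scale: no recent impact evidence found.
    return "Unclassified"
```

Note that the bands are checked from strongest to weakest, so each intervention receives the highest classification its evidence base supports.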
Assessing the cost of interventions
Many of the studies we reviewed include little if any indication of the cost of delivering the intervention. The call for evidence was useful in this regard. Where information was available, we recorded the cost of delivering the intervention per learner. Where the interventions evaluated are delivered to all students, the cost per learner is spread across non-disadvantaged students and disadvantaged students alike. This may mean these types of interventions can appear less costly when compared to other interventions targeted solely towards disadvantaged students. However, cost is only one factor that users of the Toolkit should take into account when deciding on which interventions are most appropriate to their needs. If interventions available to all students are also effective in assisting disadvantaged students, these may well be the most cost effective option. Similarly, more expensive but targeted interventions may be more appropriate.
We then calculated the average and range of costs per learner for each intervention type and categorised them as follows.
| Category | Cost per learner per year |
| --- | --- |
| Very High | £1,001 or more |
| High | £701 to £1,000 |
| Moderate | £251 to £700 |
| Low | £81 to £250 |
| Very Low | £80 or less |
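The cost bands reduce to a threshold check on cost per learner per year. A minimal sketch (the function name is illustrative):

```python
def cost_band(cost_per_learner: float) -> str:
    """Assign a Toolkit cost category from a cost in £ per learner per year."""
    if cost_per_learner >= 1001:
        return "Very High"
    if cost_per_learner >= 701:
        return "High"
    if cost_per_learner >= 251:
        return "Moderate"
    if cost_per_learner >= 81:
        return "Low"
    return "Very Low"   # £80 or less
```

For example, an intervention costing £250 per learner per year falls in the Low band, while £251 falls in Moderate.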
Assessing the direction of impact
We found relatively little evidence of impact that would allow a consistent calculation of the size of impact, particularly for interventions addressing retention, attainment and success. Instead, the Toolkit provides an assessment of the direction of impact suggested by the evidence reviewed. We classified interventions according to whether all the evidence suggested a positive impact, a negative impact, no impact, or the evidence was mixed. This does not indicate the size of the impact, only the direction – a positive impact could still be quite small.
We based our assessment of direction of impact only on level 2 or level 3 evidence. The classification of direction of impact should be considered alongside the assessment of the strength of evidence: interventions may show positive impact but be based on limited evidence.
Interventions have been classified using the following scale.
| Rating | Definition |
| --- | --- |
| ++ | Overall the evidence reviewed suggests the intervention has a positive impact on outcomes. No studies reviewed suggest no or negative impact. |
| + | Most of the evidence reviewed suggests the intervention has a positive impact on outcomes, although some studies suggest no or negative impact. |
| +- | Most or all of the evidence reviewed suggests the intervention has no impact on outcomes, although some studies suggest a positive or negative impact. |
| - | Most of the evidence reviewed suggests the intervention has a negative impact on outcomes, although some studies suggest no or positive impact. |
| -- | Overall the evidence reviewed suggests the intervention has a negative impact on outcomes. No studies reviewed suggest positive impact. |
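One way to operationalise this scale is to count level 2/3 studies by the direction of impact they report and treat "most" as a simple majority – an assumption on our part, since the Toolkit text does not define "most" numerically:

```python
def impact_rating(positive: int, none: int, negative: int) -> str:
    """Map counts of level 2/3 studies reporting positive, no, and negative
    impact to the Toolkit's direction-of-impact scale.

    Assumption (not stated in the Toolkit): "most" means a strict majority.
    """
    if positive and not (none + negative):
        return "++"   # all studies positive
    if negative and not (none + positive):
        return "--"   # all studies negative
    total = positive + none + negative
    if positive > total / 2:
        return "+"    # majority positive, some no/negative impact
    if negative > total / 2:
        return "-"    # majority negative, some no/positive impact
    return "+-"       # mostly no impact, or mixed evidence
```

The rating says nothing about effect size, only direction, mirroring the scale above.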
The search terms employed within the systematic search are detailed in the table below.
| Educational Level (mandatory) | Intervention (mandatory) | Type of Study (additional) | Target Group (additional) | Learner Stage (additional) |
| --- | --- | --- | --- | --- |
| University / higher education / college | Mentoring / buddying / mentor / peer / learning mentor | Meta-analysis / rapid evidence assessment / review / synthesis | Disadvantage / Socio-economic disadvantage | Retention / success / progression / outcomes |
| Student | Financial support / financial aid / bursar* / scholarship / grant | Evaluation / impact | Widening participation / access | |
| | Longitudinal induction / extended induction / transition support / transition activities | | Care leaver / care experience | |
| | Attainment support / learning support / study skills / academic support / study support / tuition | | Gender | |
| | Employability guidance / careers advice / information, advice and guidance / IAG / employment advice / employment support | | Adult learner / mature | |
| | Internship / work placement / work experience | | Disab* | |
| | Application assistance / application information / UCAS application / application help / application guidance | | Ethnicity / race / BME / BAME | |
| | Residential scheme / summer school | | Under-represented | |