What Works for Literacy Difficulties is intended to make it easier for schools or other education settings to make comparisons between literacy interventions.
To judge whether an initiative has really made a difference, it is not enough just to ask the participants – they will almost always say it has. This ‘feel-good’ factor is valid on its own terms, but it does not always correlate with measured progress, and it certainly does not convince policy-makers and funders. So it is essential to have quantitative data on the learners’ progress, measured by appropriate tests of (in this case) reading, spelling or writing.
But not just any test data will do: if the test provides only raw scores, the average gain may look impressive, but what does it mean? How good is it, compared with gains in other projects and/or with national norms? We need some way of comparing the impacts of different initiatives. The two forms of impact measure used in this report are ratio gains and effect sizes.
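As an illustrative sketch (not taken from the report itself), the two measures can be computed as follows, assuming their standard formulations: a ratio gain as the average gain in reading age (in months) divided by the number of months between pre-test and post-test, and an effect size as the mean raw-score gain divided by the standard deviation of the gains (one common form of Cohen's d for paired data; other denominators, such as the pre-test SD, are also used). The data below are hypothetical.

```python
# Illustrative calculation of the two impact measures, under assumed
# standard definitions; the learner data are invented for the example.
from statistics import mean, stdev

def ratio_gain(pre_ages_months, post_ages_months, interval_months):
    """Average gain in reading age per month of chronological time."""
    gains = [post - pre for pre, post in zip(pre_ages_months, post_ages_months)]
    return mean(gains) / interval_months

def effect_size(pre_scores, post_scores):
    """Mean gain divided by the SD of the gains (one paired-data form of Cohen's d)."""
    gains = [post - pre for pre, post in zip(pre_scores, post_scores)]
    return mean(gains) / stdev(gains)

# Hypothetical data: five learners tested 6 months apart,
# reading ages expressed in months.
pre = [84, 90, 78, 96, 88]
post = [96, 99, 90, 105, 100]

print(round(ratio_gain(pre, post, 6), 2))   # months of reading-age gain per month elapsed
print(round(effect_size(pre, post), 2))     # gain in standard-deviation units
```

A ratio gain above 1.0 means the learners gained reading age faster than the passage of time alone would predict, which is why it lends itself to comparison across projects of different lengths.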
On each intervention’s page there is a summary table. The table indicates the potential impact of a scheme, based on analyses of the data that have been made available. Where a scheme has data from more than one study, the table shows the largest impact measure obtained across all of the available data, and therefore indicates the scheme’s potential impact.