Imagine you’ve signed up for a 10k race. On the day of the race, you go to the desk to collect your race number only to be told by the organiser that things are a bit different this year. Rather than everyone starting off in a big group at the same time, you will be separated into pace groups, each of which will be identified by a distinctively coloured vest. As an experienced runner with quick times in previous races, you are placed in the fast pace group and you are handed a purple vest to wear.
The race goes well: you run the 10k in 42 minutes, which is a PB and is well below the average time that day of 55 minutes. But that’s not what the race organiser is interested in. The race organiser wants to know how your race time compares to the average time of runners wearing purple vests. And this is not just based on the race you have run in; it is based on a series of races involving thousands of runners, held simultaneously across the country. The national average time of the purple vest group is 39 minutes, 3 minutes faster than your time. Sadly, you end up with a negative score of -3.
This is essentially how value added measures work. Here, we are comparing each runner’s race time to the national average time of their pace group. In education, we compare a pupil’s score to the average score of pupils nationally with the same start point. That start point could be based on any form of prior attainment: key stage 1 teacher assessments, phonics scores, EYFSP outcomes, key stage 2 scaled scores (as used for progress 8), and, of course, reception baseline (RBA) scores. We just need some way of differentiating pupils so we can make reasonably fair comparisons later on.
There are no KS1-2 progress measures for the 2023/24 (current year 7) and 2024/25 (current year 6) cohorts because those pupils have no KS1 results: the assessments were cancelled during the pandemic. Progress measures will return in 2026, when the current year 5 pupils – who do have KS1 results – reach the end of KS2; and the first progress measures calculated from the reception baseline will be published in 2027/28, when the current year 3 pupils sit their KS2 tests.
Moving from a world where progress was measured between key stages 1 and 2, which have comparable subjects, to one in which progress will be measured from the reception baseline, is understandably causing some confusion. How can we compare the results of an assessment made at the start of reception to those of a test taken in year 6? But we are not really comparing the outcomes of the two assessments. Rather, we are using the reception baseline to separate pupils into prior attainment groups – the pace groups of our analogy above – so we can more fairly compare their KS2 results. If this seems hard to accept, spare a thought for secondary schools where a student’s KS2 test scores in reading and maths are used to establish prior attainment groups for all GCSE subjects. Yes, this means that a student’s GCSE grade in, say, Design & Technology, is compared to the national average D&T grade of students with the same KS2 reading and maths scores. As with the RBA-KS2 measure, we are not comparing like-for-like.
So, pupils take the baseline assessment at the start of reception and, nearly seven years later, their KS2 scores will be compared to the national average KS2 scores of pupils with the same RBA result. For example, a pupil scores 22 marks out of a maximum of 39 in the RBA. Seven years later they sit the KS2 tests and attain a scaled score of 98 in reading (below the expected standard of 100). But the raw score is not the point: we want to know how this pupil’s KS2 result (98) compares to those of other pupils with the same RBA score (22). Let’s imagine that, nationally, pupils who scored 22 on the RBA averaged 93 in the KS2 reading test. 93 therefore becomes the progress benchmark for this particular group of pupils, and our example pupil has clearly exceeded that by 5 points. Their progress score is therefore +5. This is done for each pupil in the cohort, and the school’s overall progress score is the average of the individual progress scores.
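The calculation above can be sketched in a few lines of Python. The benchmark table and pupil data below are invented for illustration, and the real DfE methodology involves more than this (prior attainment grouping, task weighting, confidence intervals), but the core arithmetic is just "pupil score minus group benchmark, then average":

```python
# Hypothetical national benchmarks: average KS2 reading scaled score
# for each RBA mark (illustrative numbers only).
national_benchmark = {21: 91, 22: 93, 23: 95}

# One school's pupils as (RBA score, KS2 reading scaled score) pairs.
pupils = [(22, 98), (21, 89), (23, 95)]

def progress_score(rba, ks2, benchmark):
    """Pupil's KS2 score minus the national average for their RBA group."""
    return ks2 - benchmark[rba]

scores = [progress_score(rba, ks2, national_benchmark) for rba, ks2 in pupils]
school_progress = sum(scores) / len(scores)

print(scores)           # [5, -2, 0]
print(school_progress)  # 1.0
```

The first pupil is the example from the text: a KS2 score of 98 against a benchmark of 93 gives +5; the school's overall score is simply the mean of its pupils' individual scores.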
There is, however, a problem: we have no idea what pupils’ RBA scores are because the DfE decided not to release them. You could keep a record of the marks as you enter them onto the Baseline Portal (BeP) but the pupil’s true score will differ due to the variable weighting of each task. And if a pupil changes school, their RBA score will follow them, wrapped in a cloak of invisibility. The new school will be held to account for the progress the pupil makes despite no one knowing their start point. This differs from the current situation where teachers are aware of a pupil’s KS1 results, and the data can be transferred between schools. Essentially, when it comes to the future of progress, we’ll be racing at night without a torch.
And no one will know what vest they’re wearing.