Least Bad Is Not Good Enough

Making the best of a bad job is not good enough for a national system of assessing young children and establishing school accountability. Yet that is what appears to be happening in this pilot year of commercial baseline assessment.

Early Excellence, the provider of the baseline assessment scheme (EExBA) which has cornered the market with 70% of schools in the pilot year, has designed a scheme that attempts to uphold principles of early years pedagogy while still meeting the government’s unjustifiable requirements. And the majority of schools have attempted to minimise the damage of baseline assessment by choosing the least-bad scheme on offer, largely rejecting the schemes which require one-off testing. Jan Dubiel of Early Excellence now argues that, rather than oppose baseline assessment, we should back his company’s scheme. EExBA, however, for all its superior elements compared to its rivals, is not the answer.

Of course, good practice includes on-entry assessment, and schools already had their own systems for getting to know reception children in order to support them accurately, as well as for tracking progress as they move on through school. The government’s proposals for standardised baseline assessment, however, were not endorsed by the sector: 66% of respondents to the consultation opposed the plans.

Although baseline assessment will not be statutory, schools under threat of being judged solely on challenging floor targets in seven years’ time feel pressure to adopt an approved scheme. The problems with EExBA – some of which are even greater for the other two schemes – fall into three main areas.

First, the requirements of implementing the assessment distort practice at the beginning of the reception year, when the priority should be on settling in and getting to know these very young children. There are no discrete tests with EExBA and no required activities, but a teacher with a class of 30 children will have to make nearly 1,500 separate yes/no judgements in order to generate a single score for each child. The focus on gathering sufficient evidence for each judgement risks distracting the teacher from understanding the children as unique individuals, which is fundamentally what assessment at this stage is about. Researchers from the Institute of Education are currently investigating teachers’ experience of the assessments in practice, which will shed light on the real impact on practice.

EExBA, with content far wider and more appropriate than its rivals, still misses essential features of existing early years assessment. Most current entry profiles, leading into the EYFS Profile, are based on a small number of best-fit judgements around a cluster of sample descriptors in Development Matters which are presented as examples of a much wider pool of learning that children may demonstrate. EExBA has mined some 240 of these sample descriptors from the relevant age range and picked out 49 as a tick-list of must-do statements. This narrows the complex picture of children’s learning and risks distorting teaching, since what is assessed tends to drive practice.

Secondly, the results of the assessment are not fit for purpose. There is no evidence that they are predictive of children’s later attainment. There is plenty of room for ‘gaming’ – teachers in some schools have been told to do the assessment within the first two weeks and to record ‘no’ if in doubt, in order to depress intake scores. The criteria and resultant scoring take no account of the fact that children do not learn in a linear and predictable way, and that there is a wide range of development at such a young age. It is a nonsense to set targets for future attainment from a low score which might be the same for a child who speaks little or no English, a child with special needs, a child who is nearly a year younger than some in the class, and a child who has not yet been exposed to some of the content covered in the scoring.

Thirdly, establishing a commercial system of scoring reception children for school accountability is fundamentally flawed. Why plunge this marker into the beginning of the reception year within the EYFS, when many schools will have had children enrolled from age 3 or even 2? Why hand millions of pounds to private companies for unproven schemes which cannot provide comparable, valid, reliable baseline measures, when schools already use assessment for meeting children’s needs and demonstrating progress as part of their normal practice?

Pilots are for testing out systems. We hope that the experience of baseline assessment in this pilot year will result in it being scrapped altogether.
Better without Baseline Campaign