
A levels and GCSE Results 2020: a (inequality) virus in the examinations system?

Following the cancellation of this year's A level and GCSE examinations due to the pandemic, Professor Jannette Elwood looks at how qualification regulators across the UK have had to fall back on alternative systems of awarding, which only serve to make the inequalities in examination outcomes more evident.

Studying for A levels and GCSEs is stressful enough at the best of times, but how much more stressful must it be to have studied for them and then not get to sit them; to be awarded examination grades for exams never taken?

The COVID-19 virus has turned upside down so many things we used to take for granted and has made the mundane seem unpredictable and uncertain – for many young people, nowhere more so than in the cancellation of A level and GCSE examinations. Not that young people's examinations are a mundane thing; they are anything but. However, the cancellation of examinations this summer, and the replacement processes implemented to award young people their qualifications, have proven to be yet another traumatic impact of the pandemic on many young people's lives and future life chances.

Young people across Great Britain and Northern Ireland are not alone in this: examinations in many countries across the world have also been cancelled, leaving many education systems rushing to find alternative solutions so that young people can finish secondary education with the qualifications they have been working so hard to achieve. Regulators for qualifications in the UK (such as Ofqual, CCEA (NI) and WJEC (Wales)), in association with examination boards, have had their well-honed processes thrown into disarray. They have had to act fast to realise robust and reliable alternative systems of awarding that allow young people to progress to the next phase and achieve their goals of a university place, a job, an apprenticeship or entry into the sixth form.

Yet here we are on 12th August 2020, just days after the release of the Scottish examination results and a significant ministerial U-turn on downgraded results. Also published today is the English government's new 'triple lock' promise, a major shift in results policy in England, just 36 hours before the announcement of A level outcomes. The triple lock allows students either to: (1) use their calculated grade; (2) appeal using their mock exam results; or (3) sit their examinations this autumn. Wales has also indicated that it has changed results published to universities only days ago, and Northern Ireland is holding firm with its chosen processes, which have the added value of AS outcomes with which to estimate A level results. What is becoming more apparent to young people and their families is that our assessment systems and processes are not always wholly fair for all students. But this has always been the case; it is just that this year the inequalities embedded in examination systems have become more magnified and, in the process, less acceptable to those on the receiving end of estimated grades.

What we know from research is that examination systems interact significantly with inequalities in society, whether these are based on, for example, gender, ethnicity or socio-economic status, or even on the way in which young people are assessed (examinations or coursework, for example); these issues are not new and are debated year after year when the results come out in August. Those who provide examinations and those who regulate them are well aware of these inequalities and have worked to avoid their differential impact on young people and schools. But what makes this year different is that, with the adjustments made for the pandemic, the inequalities in examination outcomes have become so much more evident, just as other inequalities in society have (in health, employment, housing, social care and so on).

For the first time in mainstream examining in the UK, teachers were asked to provide estimated grades and rank orders for students across all subjects. Many teachers found this exercise stressful. Teachers estimate grades all the time, for examination boards, UCAS applications and so on, but they have never before been asked to rank-order students within grades for high-stakes examination outcomes – that is, placing all the students they think might have achieved an A grade in a subject at rank 1, 2, 3 and so on within that grade. Teachers' unfamiliarity with this exercise introduced error into the revised system right from the start. We know from research that teachers' assessments can be biased: while they can overestimate some students' grades, they can also underestimate others, in relation to students' gender, ethnicity and social class.
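To make the exercise concrete, here is a minimal sketch, in Python, of the kind of submission teachers were asked to assemble: an estimated grade for each student plus a rank within that grade. Everything in it – the student labels, the field names, the consistency check – is hypothetical and for illustration only.

# A minimal, hypothetical sketch of a centre's submission for one subject:
# each student gets an estimated grade plus a rank *within* that grade
# (1 = judged most secure at that grade). All names here are invented.
from collections import defaultdict

centre_submission = [
    {"student": "Student 01", "estimated_grade": "A", "rank_within_grade": 1},
    {"student": "Student 02", "estimated_grade": "A", "rank_within_grade": 2},
    {"student": "Student 03", "estimated_grade": "B", "rank_within_grade": 1},
    {"student": "Student 04", "estimated_grade": "B", "rank_within_grade": 2},
    {"student": "Student 05", "estimated_grade": "C", "rank_within_grade": 1},
]

def check_ranks(submission):
    """Flag any grade whose ranks are not consecutive from 1, the kind of
    consistency check a school might run before submitting."""
    by_grade = defaultdict(list)
    for row in submission:
        by_grade[row["estimated_grade"]].append(row["rank_within_grade"])
    for grade, ranks in sorted(by_grade.items()):
        if sorted(ranks) != list(range(1, len(ranks) + 1)):
            print(f"Grade {grade}: ranks {sorted(ranks)} are not consecutive from 1")

check_ranks(centre_submission)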

Teachers' assessment decisions are not neutral, and their unconscious biases have been shown to have a differential impact in various ways: in how (and which) students are entered for different levels (or tiers) of examinations with different grade ranges available; in who they think might be the better students (boys or girls, for example) when marking coursework or thinking about the traditions of their subjects, such as science and maths; and in whom teachers identify as exceptional students within their own subjects. Systems which prioritise teacher assessment, nationally and internationally, have been shown to be robust and to complement examination systems by incorporating assessment data based on close knowledge of students' work and of how students improve over time. Yet these systems are built on the long-standing, high-quality professional development of teachers and their assessment practices, not on the rushed use of teachers' estimated grades and rank orders in high-stakes settings that we have seen this year in the UK. While teachers have undoubtedly done their best to be just and fair to all students, the challenge of making such decisions in haste has had major consequences for the validity of the estimated outcomes.

Examination boards have also had to adjust well-established practices for how examination results are awarded. In order for year-on-year standards to be maintained and supported, examination boards and awarding authorities collectively adopted practices of moderation and review of teachers' estimated grades and rank orders. Moderation is not a new process – it happens every year. Examination results (and examiners' performance in marking) are moderated and grade boundaries adjusted so that the overall distribution of grades stays within a tolerance level to prevent grade inflation (too many more A* grades than last year, for example).
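As a rough illustration of what 'staying within tolerance' means, the sketch below compares this year's proposed grade distribution for a subject with last year's and flags any grade whose share of entries has shifted by more than an assumed threshold. It is a simplification of the general idea, not the boards' actual procedure, and all of the figures in it are invented.

# Illustrative only: compare a proposed grade distribution with last year's
# and flag grades whose share of entries has drifted beyond a chosen tolerance.
# The shares and the 2-percentage-point tolerance are assumptions, not real data.
last_year = {"A*": 0.08, "A": 0.18, "B": 0.27, "C": 0.26, "D": 0.14, "E": 0.07}
this_year = {"A*": 0.11, "A": 0.20, "B": 0.26, "C": 0.24, "D": 0.13, "E": 0.06}

TOLERANCE = 0.02  # assumed: flag shifts bigger than 2 percentage points

for grade in last_year:
    shift = this_year[grade] - last_year[grade]
    if abs(shift) > TOLERANCE:
        print(f"Grade {grade}: share moved by {shift:+.1%}, outside tolerance")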

School performance histories are also considered, so that rogue examiners or outlying performances in particular schools are identified and adjusted, always with students' best interests in mind. However, these processes too have inherent biases, which ignore the differential impact of examination structures and practices on particular groups of students.

So what is so different about this year? This year it is personal. This summer, students' individual grades are being adjusted in line with statistical models that include not only their prior attainment (where available) but also the past performance of the school they attend: results by algorithm. Thus the inequalities inherent in the teachers' estimated grades have been compounded by a moderation process that seeks to align individual results with the past performance of cohorts. The model allows no consideration of school improvement, or of individual pupils' efforts to do better than their school's counterparts of past years. While teacher assessments can be biased in relation to gender and ethnicity, the use of moderation and statistical models adjusted by past school performance brings social class inequalities to the fore and makes them so much more apparent than in previous years.
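In outline, that kind of adjustment works something like the sketch below: the school's historical grade distribution is imposed on this year's rank-ordered cohort, so a student's final grade depends heavily on where their rank falls within a distribution inherited from previous years' cohorts. This is a deliberately simplified sketch of the general idea, not the regulators' actual model; every name and figure in it is invented.

# A deliberately simplified sketch of standardisation by centre history:
# this year's rank-ordered students are fitted to the grade distribution the
# school achieved in previous years. All names and figures are hypothetical.
GRADES = ["A*", "A", "B", "C", "D", "E", "U"]

# Invented historical distribution for one school (shares summing to 1).
school_history = {"A*": 0.05, "A": 0.15, "B": 0.30, "C": 0.30, "D": 0.15, "E": 0.05, "U": 0.0}

# This year's cohort, in the teachers' rank order (strongest first).
cohort = [f"Student {i:02d}" for i in range(1, 21)]

def standardise(cohort, history):
    """Assign grades so the cohort's distribution matches the school's history,
    regardless of the grades the teachers originally estimated."""
    n = len(cohort)
    results, assigned = {}, 0
    for grade in GRADES:
        quota = round(history.get(grade, 0) * n)
        for student in cohort[assigned:assigned + quota]:
            results[student] = grade
        assigned += quota
    # Any students left over from rounding fall to the lowest grade considered.
    for student in cohort[assigned:]:
        results[student] = GRADES[-1]
    return results

print(standardise(cohort, school_history))

Even in this toy version, the limitation described above is visible: however well this year's students were judged to have performed, the cohort as a whole can receive no more top grades than the school's history allows.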

One of the most powerful consequences of this year's examinations debacle is the mobilisation of the voices of students decrying the unfairness of a system that has been forced on them in haste and that has fallen short, as they see it, of the fairness they demand from their examination systems. The inequalities I outlined above are related to group dynamics and the impact of systems on groups of students, but what students have highlighted through their protests are:

(i) the inequalities faced by the individual when the system implemented affects them directly – i.e. their grades are lowered by an algorithm that bears no resemblance to their day-to-day sense of themselves and the grades they hope to have achieved; and

(ii) the sheer lack of student involvement in any discussions about the proposed model of assigned grades.

This lack of consideration of student voice in any qualification reform is also not new, but the inequalities emerging this summer link directly to a sense of injustice and discrimination felt by students (based on social class, gender and ethnicity) and to the lack of adherence to students' right to be heard in incredibly important matters that affect them directly. The Scottish Government's U-turn seems to have responded to students' anger and disappointment at an imposed system that they see as unfair. It has taken action – even though that action will mean pass rates rise by at least 14% this year compared with last year. These are unprecedented increases in higher-grade outcomes but, as we are continually told by politicians and commentators, we are in unprecedented times.

But why wait for students' anger and disappointment to materialise before change occurs? We should be harnessing their creative energies and foresight now to make assessment systems fit for their changing world. We should not underestimate the impact that these seismic changes in decision making, exemplified in Scotland's 'change-of-heart' U-turn on examination policy, will have on future examinations development.

While those responsible for the national systems of examinations across the UK are forming, breaking and changing policy commitments hours before results are announced, in an attempt to avoid significant political fallout, we wait with bated breath to see what the other jurisdictions of the UK will do in the light of published results and growing concern, and even revolt, among students across the board. Students (and their parents and guardians) are very clear that they will not suffer the indignities of the fallout that such policy disintegration has created.

In the subsequent reviews of examination systems (as there surely will be after these events), students' claims as equal stakeholders in any decisions that are made must now be afforded high priority. Building students' perspectives and participation into the formation of emerging examinations policies will contribute to a fairer system of qualifications.

The ‘new normal’ must be to share authority about examinations and assessment changes with students and to rethink their involvement in decision making around what future examination systems might look like.

This is paramount to achieving true equality of opportunity.

 

Professor Jannette Elwood writes here in an independent capacity and not on behalf of her institution. She has advised qualification regulators and awarding bodies based on independent research but is not an employee of any of these organisations and does not speak on their behalf. Professor Elwood is also President of the Association for Educational Assessment-Europe.

 

The featured image has been used courtesy of a Creative Commons license.