The furore in the UK over the “virtual” grades awarded to school students prevented from taking their A-levels, or their Scottish equivalents, because of lockdown is in full swing over here. Arguably, the kids thereby unjustly excluded from universities are the lucky ones, given the way academia has become an indoctrination machine for identity politics and postmodernist superstition.
Still, it is easy to understand the harm that has been done to this generation through completely disrupting their education and their social interactions in their final year of school, and then cobbling together an assessment system in lieu of exams that judges young people statistically, rather than individually.
The controversy arises because the available individual “metrics” of mock A-level results and predictions by students’ own teachers were then adjusted using a complex statistical model, never validated in real life. The model adjusted results to compensate for supposedly over-optimistic teacher assessments, the previous performance of schools, areas of deprivation and who knows what other parameters. The most drastic result has been that some students’ predicted results were lowered by two grades, losing them their university places.
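The mechanics of such a standardisation can be sketched in a few lines of Python. To be clear, this is an invented simplification, not Ofqual’s actual model: it throws away the teachers’ predicted grades entirely, keeping only each student’s rank within the school, and hands out the school’s historical grades by rank.

```python
# Hypothetical sketch of a rank-and-redistribute grade adjustment.
# Grades run from 0 (U) up to 5 (A*). All names and numbers invented.

def adjust_grades(teacher_grades, historical_distribution):
    """Replace teacher-predicted grades with grades drawn from the
    school's historical distribution, preserving only each student's
    rank within the cohort.

    teacher_grades: dict of student -> predicted grade (higher is better)
    historical_distribution: grades the school awarded in a previous
        year, one per student (same cohort size, for simplicity)
    """
    # Rank students by teacher prediction, best first (ties keep order).
    ranked = sorted(teacher_grades, key=teacher_grades.get, reverse=True)
    # Sort the historical grades, best first, and assign them by rank.
    pool = sorted(historical_distribution, reverse=True)
    return {student: pool[i] for i, student in enumerate(ranked)}

predictions = {"Amy": 5, "Ben": 5, "Cal": 4, "Dee": 3}
history = [5, 4, 3, 2]          # last year's grades at this school
print(adjust_grades(predictions, history))
# {'Amy': 5, 'Ben': 4, 'Cal': 3, 'Dee': 2}
```

On these invented numbers, Ben’s predicted A* drops a grade for no reason connected with Ben at all, only with where his school’s cohort finished the year before, which is precisely the complaint.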
On the other hand, say the authorities, disadvantaged children have done particularly well, gaining far more university places than in previous years. But that kind of “social justice” outcome is not the point, and not necessarily good either for the universities or students placed above their ability. The deal given to students throughout their school careers was that if they had the ability, and the application, they could earn themselves a place in higher education through A-level exams. It is not good enough for them to be told that, instead, an entirely new system of assessment has produced statistically acceptable results.
Here we go again with that modern bureaucratic obsession with prediction by modelling, which has proven itself to be at least as liable to human error as casting horoscopes or tossing a coin. The climate models have all grossly over-estimated surface temperature rises since they were instituted thirty years ago; the COVID models over-estimated deaths in the present pandemic by an entire order of magnitude; the models predicted the extinction of polar bears instead of their actual increase; and the political polls entirely failed to predict major political shifts like the Brexit vote and the Trump victory.
The astonishing thing is that those who run things have continued to stick with the climate models, the COVID algorithms, the extinction scenarios and the election predictions anyway, as if the sooths of the modellers really are science rather than highly expensive computerised peekstones. And in this case, of course, they have used this repeatedly discredited methodology to decide the educational fates of millions of our precious children.
Modelling, as it has come to be used for prediction by governments, is really a hubristic kind of secular Molinism. Theologically, you may already know, Molinism is a way of reconciling God’s sovereignty with human free-will, by positing that God has infallible “middle knowledge” of what each individual would do in a particular “possible world,” so that he can know, and plan, what will happen in the world he finally creates despite the melee of absolutely free human choices.
One aspect of this would be that he might not judge people for what they do as the gospel teaches, but for what they would have done if only they hadn’t been aborted, brain damaged, denied the chance of hearing the gospel, or whatever else actually happened in the real world. Real life becomes pretty irrelevant when judgement depends on God’s secret knowledge of some other, virtual, universe.
The “A-level algorithm” appears to be an attempt to equal the Molinistic God in his ability to predict individual outcomes in a virtual world that is not, but would have been if things had been different.
But it goes without saying that in the real world of final exams all kinds of surprises happen. The girl who did badly in mocks suddenly realises she has to quit Twitter and hit the books, and gets straight A* grades. She gets to Oxford and ends up as a professor instead of a hairdresser.
The chap who was a model student until he got in with a crowd of dope-takers is so high on exam day that he writes out his name 500 times.
And you can envisage a million other stories, including your own, that show that the major events of our lives often result from external contingencies or internal choices.
The core problem in this instance seems to be that the education authorities, and the modellers themselves, have adopted the upside-down view of statistics against which I have railed over the years. This is expressed in the kind of comments I used to hear at BioLogos – “The events we call ‘chance’ are governed by statistical laws which can be investigated.” Now, if the outcomes of exams are thus governed by statistical laws, then it follows that statistical models will do a pretty good job of predicting actual outcomes for students, barring a few anomalies which one can try to sort out by an appeal process.
But that’s to put the cart well and truly before the horse. Statistical patterns found in something like examinations cause nothing; they are the outcome of all those key individual choices and contingencies, unknowable to researchers, which constitute the real events. To apply a statistical tool to predict individual outcomes is to ride roughshod over the actual causes involved: on the positive side, natural ability, hard work, parental encouragement, ambition and so on; on the negative side, illness, poverty, individual luck in the questions on the paper, and other factors that may be considered to make an examination system “unfair” but which are, in fact, the universal experience of life. More importantly, they are the factors for which the students all signed up when the course started.
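The point can be made concrete with a toy example in Python (all names and grades invented): two sets of results can share exactly the same aggregate distribution while disagreeing about every single individual.

```python
from collections import Counter

# Two hypothetical sets of exam outcomes for the same five students.
# 'actual' is what happens on the day; 'modelled' reshuffles grades
# to reproduce the same overall distribution.
actual   = {"Amy": "A", "Ben": "B", "Cal": "A", "Dee": "C", "Eve": "B"}
modelled = {"Amy": "B", "Ben": "A", "Cal": "C", "Dee": "B", "Eve": "A"}

# The aggregate statistics are identical...
assert Counter(actual.values()) == Counter(modelled.values())

# ...yet the model is wrong about every individual student.
wrong = [s for s in actual if actual[s] != modelled[s]]
print(len(wrong), "of", len(actual))   # 5 of 5
```

The model “closely reproduces” the statistics and is nonetheless wrong about everyone, which is all the validation an aggregate check can ever provide.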
If you doubt this, consider a skilful poker player, let’s call her Victoria, whose fortune is made by being in the right game at the right time, being dealt the right cards, and using them well to clean out some very rich people. But suppose it doesn’t happen, because COVID policies have made card games illegal, and the Gambling Commission instead helpfully designs a statistical algorithm to make up the deficiency and decide the outcomes of virtual games. It would be nonsensical for the Commission to claim to have closely reproduced the statistical gains and losses of real games, when the whole reason Victoria, who is not favoured by the algorithm, played poker at all was to make the statistics work for her.
Exams, comparably, are designed as the best available way to sort out the individuals most likely to benefit from a university education, and to reward them for their individual application. Any statistical interference in that must destroy that purpose, by reducing all those individuals to mere statistics.
The usual reply to such arguments is that, given what has happened to the educational system since March, the government had to do something to sort out university entrance. Well, the obvious thing would have been not to shut the schools without good evidence in the first place. Kids have already been traumatised by (model-induced) terror that the world is ending through climate change, told that their gender is up for grabs, and informed that their race, contrarily, makes them immutably evil. We might have spared them the loss of their education, too, if we had been less culpably risk-averse ourselves.
But once you have closed the schools, pretend exams cannot be made to replace the real exams around which students, schools and universities built their whole educational efforts, however much of a mess you have made for yourself. The government does not possess middle knowledge. Without it, sophisticated algorithms are no fairer to individuals than admitting people to hospital on the basis of statistical modelling because the COVID test-kits have run out.
One answer would be the labour-intensive one of university interviews – with social distancing, of course – for those whose mock results and teacher assessments measured up. But that might upset the statistical predictions, or more likely just transfer them to the interview process itself, as Molinistic interview boards struggled to spot the able whilst meeting the targets for race, gender, poverty and sexuality diversity, and weeding out the unwoke deplorables exhibiting intellectual diversity.
It’s tough being a God nowadays.