Imperial College London’s epidemiology model, on which Britain and many other countries based their lockdown decisions, has come under scathing criticism for being buggy and producing random results. The secret model on which South Africa’s lockdown is based must be published for public and peer review without delay.

It hasn’t been a good week for Neil Ferguson, until recently beatified in the press as Professor Lockdown. Ferguson is the head of the department of infectious disease epidemiology at Imperial College London.

Superficially, his undoing was that he allowed a woman to visit him at home, breaking the UK’s social-distancing regulations for which he himself was the chief advocate. This caused him to resign his government advisory post in shame.

There was a lot more going on behind the scenes, however, culminating in a scathing review of the modelling code upon which he based the infamous Report 9 of 16 March 2020, which precipitated the UK’s dramatic about-turn from a herd-immunity strategy to a full lockdown response. Instead of mitigating the risk to the public healthcare system, the goal now became suppressing the pandemic altogether.

That report, which he delivered to Prime Minister Boris Johnson on the day it was published (probably along with a case of Covid-19, with which Ferguson was diagnosed only days later), predicted 510 000 deaths in the UK, and 2.2 million in the United States, in the absence of strict lockdown measures.

These terrifying numbers informed not only the UK’s response, but also those of other countries, including the US, Germany and France.

That something smelt vaguely of rat at Imperial College became evident only days after the report was published. On 22 March 2020, Ferguson tweeted: ‘I’m conscious that lots of people would like to see and run the pandemic simulation code we are using to model control measures against COVID-19. To explain the background – I wrote the code (thousands of lines of undocumented C) 13+ years ago to model flu pandemics…’

Only as good as its code

A forecasting model is only as good as its code, the assumptions on which it is based, and the data it is fed. Everyone knew that the data quality about Covid-19 was patchy and inconsistent. The original code, a single file of 15 000 undocumented lines of code that even the author apparently no longer understands, has never been released.

Suspicions about the quality of the model were confirmed, however, when a new version – substantially improved and rewritten with the help of Microsoft – was uploaded to GitHub, a collaboration platform for programmers.

Sue Denim, who lists senior software engineer at Google among her previous jobs, conducted a review of this code. Her report was scathing.

She called it ‘SimCity without the graphics’, in that it tried to model people’s behaviour and movements around the home, street, office and shops.

An alarming feature of the code was that, because of serious bugs, it was non-deterministic. Even when fed the same random number seed (a starting value that ought to produce the same sequence of pseudo-random numbers every time) and the same inputs, the model did not produce the same output, as it should have.
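To illustrate the principle with a generic sketch in C (not anything taken from the Imperial model): a pseudo-random number generator that is given a fixed seed should produce the same sequence, and therefore the same simulation output, on every run on the same system.

```c
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    srand(42);                       /* fixed seed: the starting value */
    for (int i = 0; i < 5; i++) {
        /* with the same seed, the same five numbers appear on every
         * run (on the same system), so a correct simulation built on
         * them should be exactly reproducible                        */
        printf("%d\n", rand());
    }
    return 0;
}
```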

‘This problem makes the code unusable for scientific purposes, given that a key part of the scientific method is the ability to replicate results,’ writes Denim. The team from Imperial College were aware of the non-determinism, but appeared to believe that this didn’t matter, because models are in any case run many times with different random number seeds to arrive at average predictions.

Their explanation for these buggy, random results has to do with how the code runs on multiple processor cores, but it turns out that the bugs persist even in single-processor mode. The model also appears to give different results depending on which computer it is run on.
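As a generic illustration of how unsynchronised multi-threaded code can produce run-to-run variation (again a sketch, not the Imperial code): two threads below update a shared total without a lock, so increments are lost unpredictably and the printed result differs between runs even though the inputs never change.

```c
#include <pthread.h>
#include <stdio.h>

static long total = 0;               /* shared state with no lock protecting it */

static void *work(void *arg) {
    (void)arg;
    for (int i = 0; i < 1000000; i++) {
        total++;                      /* data race: increments can be lost */
    }
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, work, NULL);
    pthread_create(&b, NULL, work, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("%ld\n", total);           /* rarely 2000000; varies on every run */
    return 0;
}
```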

Denim documents several other problems with the code, including the lack of unit testing, a standard practice in software engineering for verifying that each piece of code does what it is supposed to do, and for re-testing it consistently whenever changes are made.
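As a purely illustrative sketch, and not anything drawn from the Imperial codebase, a unit test in plain C can be as simple as an assertion on a small, well-understood function; the doubling_time helper and its expected value below are hypothetical.

```c
#include <assert.h>
#include <math.h>

/* Hypothetical helper: days needed for case numbers to double,
 * given an exponential growth rate per day.                    */
static double doubling_time(double growth_rate_per_day) {
    return log(2.0) / growth_rate_per_day;
}

/* Unit test: a small, repeatable check that the helper behaves
 * as expected. Re-run after every code change.                 */
static void test_doubling_time(void) {
    double days = doubling_time(log(2.0));   /* growth rate of ln 2 per day */
    assert(fabs(days - 1.0) < 1e-9);         /* should double in ~1 day     */
}

int main(void) {
    test_doubling_time();
    return 0;   /* exits successfully only if every assertion passes */
}
```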

Errors in the results

She also notes that much of the code consists of undocumented formulas which nobody understands. One consequence is that R0, the basic reproduction number that measures how many people each infected person goes on to infect, is used as both an input to and an output of the model, a practice which can lead to rapid divergence and large errors in the results.
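A toy sketch of why that kind of circularity matters (hypothetical numbers, not the Imperial model): if the value a model produces is fed straight back in as its next input, any small bias is re-applied on every pass, so the error compounds instead of averaging away.

```c
#include <stdio.h>

int main(void) {
    double estimate = 2.5;            /* assumed starting value for R0  */
    for (int pass = 1; pass <= 20; pass++) {
        estimate *= 1.05;             /* each pass re-applies a 5% bias */
        printf("pass %2d: R0 estimate = %.2f\n", pass, estimate);
    }
    /* after 20 passes the estimate has drifted to roughly 2.65 times
     * its starting value, from a bias of only 5% per pass             */
    return 0;
}
```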

Her conclusion is stark: ‘All papers based on this code should be retracted immediately. Imperial’s modelling efforts should be reset with a new team that isn’t under Professor Ferguson, and which has a commitment to replicable results with published code from day one.’

This is the most withering takedown of a computer model I’ve seen since reading the infamous HARRY_READ_ME.txt file leaked from the University of East Anglia’s Climatic Research Unit in 2009, which was damning about the data and code that went into modelling one of the world’s most important temperature datasets, and introduced the world to ‘fudge factors’, which apparently are necessary to produce the temperature data that the modellers wanted to see.

One might counter that Denim is not neutral in this discussion, since her post appears on a website entitled Lockdown Sceptics. This is true, but that doesn’t undermine the substance of her arguments.

One might argue that the projection of half a million or so deaths in the UK remains reasonable for the no-lockdown scenario it described, that the projections were revised downward as lockdown measures curbed the spread of the virus, and that the bugs in the model were therefore neither here nor there. This is indeed what the Imperial College team claims.

However, Ferguson – who works as an epidemiologist but is actually a theoretical physicist by training – has form for producing wildly exaggerated projections about epidemics.

In 2005, he said bird flu might kill 200 million people. It killed 282 people between 2003 and 2009.

In 2009, he said the most likely estimate of swine flu mortality was 0.4%, and that a ‘reasonable worst-case scenario’ would indicate 65 000 deaths in the UK. Swine flu killed 457 people in the UK, and the fatality rate was a mere 0.026%.

In 2001, Ferguson declared that a foot and mouth disease outbreak could kill 150 000 people, and the only way to curb it was to cull livestock even if they were healthy. Millions of head of cattle, sheep and pigs were slaughtered on his advice, costing the agriculture industry billions of pounds. In the end, only 200 people died.

‘Not fit for purpose’

Two highly critical peer-reviewed papers challenged the assumptions in Ferguson’s model, and Michael Thrusfield, professor of veterinary epidemiology at Edinburgh University, declared that it was ‘severely flawed’ and ‘not fit for purpose’.

In 2002, he said that up to 50 000 people could die of mad cow disease (bovine spongiform encephalopathy, or BSE), and three times that many if sheep got infected as well. Only 177 people succumbed to variant Creutzfeldt-Jakob disease (vCJD), the human form of BSE.

He may not have been wrong on all these occasions, and perhaps these epidemics did not lead to disastrous consequences only because of the counter-measures Ferguson advocated.

Taken together, however, the history of alarming projections made by Ferguson’s epidemiological models, alongside the criticism of his Covid-19 model, makes a strong argument for publicly disclosing the code and data for these models, so they can be reviewed by experts in programming and replicated by independent scientists in the academic literature.

The same is true in South Africa. President Cyril Ramaphosa’s decision to impose a hard lockdown was motivated – besides a desire to speed up the socialist revolution – by epidemiological projections that anticipated that 40% of South Africans could get infected, causing more than 350 000 deaths.

Very few people, if any, noticed the rather massive disclaimer published by the National Institute for Communicable Diseases: ‘The National Institute for Communicable Diseases (NICD), a division of the National Health Laboratory Service, notes the alarm brought about by the publication of a preliminary prediction model of the COVID-19 Pandemic in South Africa. The South African Centre for Epidemiological Modelling and Analysis (SACEMA) model was a very preliminary static model developed to assess different modelling strategies and a wide range of scenarios. It did not, however, seek to make robust predictions regarding the likely course of the COVID-19 pandemic in the country. The confidence intervals around the estimates produced for the individual scenarios are wide. There is still much uncertainty regarding the likely trajectory of the pandemic in South Africa. As such, the NICD is gathering data on the local epidemic and has established a specialized advisory group of experienced modellers to develop more sophisticated and robust models for South Africa.’

Original model was rubbish

Essentially, that original model was rubbish. New models have been, or are being, developed, producing updated numbers. But nobody knows anything about them. We just have to take the government at its word that it is basing its decisions on good science.

Given how flimsy the Imperial College model turned out to be, there is no reason to be confident that South African epidemiological models are any better. Indeed, as recently as two weeks ago, Professor Shabir Madhi, the former head of the NICD, declared them to be ‘back-of-the-envelope calculations’, ‘based on wild assumptions’.

The NICD told the Financial Mail that it could not make its models public due to a confidentiality agreement with the Department of Health. This is nonsense, of course. Publicly funded science, and the proceedings of government administration, should be openly accessible to the public. There is no reason to keep these secret, unless you’re trying to hide bad decisions based on bad science.

Unpublished models cannot be reviewed by public and academic experts. It’s as if the Department of Health hasn’t heard of ‘peer review’, or doesn’t like it if it has.

It is critical that the epidemiological models upon which government claims to base such grave decisions, with catastrophic consequences for lives and the economy, are open to scrutiny.

Their code, assumptions and data must be published in the academic literature, so that any errors they might contain can be rapidly detected and corrected. Even if they’re correct, we deserve the certainty of knowing that.

As long as the models stay secret, we’re flying blind, and probably right into a mountain.

The views of the writer are not necessarily the views of the Daily Friend or the IRR


Ivo Vegter is a freelance journalist, columnist and speaker who loves debunking myths and misconceptions, and addresses topics from the perspective of individual liberty and free markets.